Assessing LLMs Responses in the Field of Domestic Sustainability: An Exploratory Study

Giudici, Mathyas;Abbo, Giulio Antonio;Belotti, Ottavia;Crovari, Pietro;Garzotto, Franca
2023-01-01

Abstract

In the coming years, we must confront climate change, and the urgency of adopting a more sustainable lifestyle has increased. Conversational Agents, such as smart home personal assistants, have shown promise in fostering sustainable behaviors in domestic environments. However, traditional rule-based conversational approaches in such agents struggle to address users' questions in complex domains like sustainability. Large Language Models (LLMs) are a promising tool to overcome these limitations thanks to their capability to answer open-domain questions. The objective of this work is to compare the generative capabilities of four large language models in the domain of ecological sustainability, in order to determine the most suitable LLM to embed into home assistants and to create a hybrid conversational agent for environmental sustainability. We performed two evaluations. In the first, we constructed a set of trustworthy sources on the topic and analyzed the extent to which the themes covered in the text generated by the models appeared in them. The results show no statistically significant difference between the outputs of the candidate models, while qualitative analysis determined that ChatGPT is, at the moment, the optimal solution. In the second evaluation, we tested the responses generated by ChatGPT on a corpus of 167 questions collected from a sample of 75 people. The responses were evaluated by a team of experts (N=5) on fluency, coherence, consistency, accuracy, and reasoning. The results suggest that ChatGPT is quite reliable for generic questions on sustainability.
2023
2023 Third International Conference on Digital Data Processing (DDP)
979-8-3503-2901-8
Climate change;Sustainable development;Biological system modeling;Chatbots;Analytical models;Smart homes;Environmental monitoring;Modeling;Oral communication;Agent-based modeling;Statistical analysis;Statistical distributions;Trust management;Conversational Agent;Sustainability;LLM;Rule-based CA
Files in this item:
Assessing_LLMs_Responses_in_the_Field_of_Domestic_Sustainability_An_Exploratory_Study.pdf
Restricted access
Size: 374.42 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1260479
Citations
  • Scopus: 0