
Exploring Fairness Interpretability with FairnessFriend: A Chatbot Solution

Criscuolo C.; Dolci T.
2024-01-01

Abstract

In the contemporary world, artificial intelligence and machine learning algorithms are an important driver of decision-making, leveraging real-world data to make predictions about the future. Although these models clearly improve efficiency, the lack of transparency in their predictions raises concerns about their fairness, as highlighted by recent instances of algorithmic unfairness ranging from automated decisions on criminal recidivism to disease prediction. Growing user awareness of algorithmic fairness is not matched by systems that guide data analysts and practitioners in understanding the implications of model outputs. To tackle the challenge of fairness interpretability, we propose FairnessFriend, a chatbot solution that combines data science with a human-computer interaction perspective. Given a dataset and a trained machine learning model with established fairness metrics, our system helps users understand these metrics and their significance in the context of the training data. FairnessFriend explains the meaning of various statistical fairness metrics and presents the resulting metric values with detailed explanations, offering specific insights into their implications.
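
To illustrate the kind of statistical fairness metric the abstract refers to, the sketch below computes the statistical parity difference, i.e., the gap in positive-prediction rates between an unprivileged and a privileged group. This is a minimal, hypothetical example and is not taken from the paper or from the FairnessFriend implementation; the function name and the example data are assumptions made for illustration only.

```python
# Minimal sketch (hypothetical, not from the paper): statistical parity
# difference, one common statistical fairness metric a tool like
# FairnessFriend might report and explain.
import numpy as np

def statistical_parity_difference(y_pred, sensitive, privileged_value):
    """Return P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged); 0 means parity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    privileged = sensitive == privileged_value
    rate_privileged = y_pred[privileged].mean()
    rate_unprivileged = y_pred[~privileged].mean()
    return rate_unprivileged - rate_privileged

# Hypothetical example: binary predictions for 8 individuals and a binary
# sensitive attribute (1 = privileged group, 0 = unprivileged group).
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
spd = statistical_parity_difference(y_pred, sensitive, privileged_value=1)
print(f"Statistical parity difference: {spd:+.2f}")  # -0.50 for this toy data
```

A value of 0 would indicate that both groups receive positive predictions at the same rate; in this toy example the unprivileged group receives positive predictions 50 percentage points less often than the privileged group.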
2024
Proceedings - 2024 IEEE 40th International Conference on Data Engineering Workshops, ICDEW 2024
chatbot
fairness
human-computer interaction
interpretability
machine learning
Files in this product:
File: Exploring_Fairness_Interpretability_with_FairnessFriend_A_Chatbot_Solution.pdf
Access: Restricted
Description: Publisher's version
Size: 416.3 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1271645
Citations
  • Scopus: 0