Explaining AI Through Critical Reflection Artifacts. On the Role of Communication Design Within XAI

Beatrice Gobbo
2021-01-01

Abstract

Artificial intelligence algorithms – and the data that feed them – are increasingly imbued with agency and impact, and are empowered to make decisions in our lives across a wide variety of domains: from search engines, information filtering, political campaigns, and healthcare to the prediction of criminal recidivism or loan repayment. Yet algorithms are difficult to understand, as is explaining “how they exercise their power and influence” and how a given input (whether or not consciously released) is transformed into an output. In computer science, techniques of explainable artificial intelligence (XAI) have been developed for disclosing and studying algorithmic models, using data visualization as a visual language that lets experts explore their inner workings. However, current research on machine learning explainability empowers the creators of machine learning models but does not address the needs of the people affected by them. This paper leverages communication and information design methods (and competences) to expand the field of action of explainable machine learning towards the general public.
Year: 2021
Published in: Advanced Visual Interfaces. Supporting Artificial Intelligence and Big Data Applications
ISBN: 978-3-030-68006-0, 978-3-030-68007-7
File attached to this record:
Gobbo_ExplainingAIThroughCriticalReflectiveArtifacts.pdf (pre-print / pre-refereeing, Adobe PDF, 348.5 kB, restricted access)


Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1161058