
Exploring Explainable Artificial Intelligence in Complex Public Processes

F. J. van Krimpen; M. Arnaboldi; L. Querini
2023-01-01

Abstract

Public organizations increasingly rely on AI algorithms for a wide range of uses. These algorithms are increasingly seen as socio-technical and multi-actor, i.e., the result of interactions among many stakeholders in a dynamic and continuous process. One specific challenge of this interaction is the “obscurity” of AI for non-technical people, with consequences such as reluctance to adopt AI or the emergence of problems during its development. Explainable AI (XAI) is seen as a solution to open the black box of AI algorithms. However, scholars highlight that XAI is often approached from a purely technical perspective, and stress the need to consider explainability from the viewpoint of different stakeholders and in a more process-oriented way. This study builds on previous work by unpacking the process perspective on XAI within a complex public process, aiming to show the critical aspects of ensuring explainability in such processes. The study does so through an action research approach. It shows that the core aspects of a process-oriented perspective on XAI are reciprocal learning processes, increased sharing of contrasting viewpoints, and a greater investment of time and resources than in traditional AI development processes, with potentially better results in terms of performance.
2023
NIG Annual Work Conference 2023
Artificial intelligence; explainable artificial intelligence; complex public processes; stakeholders
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1253561
Citations
  • Scopus: n/a