
On the principles and effectiveness of gamification in bidirectional artificial intelligence and explainable AI

Tocchetti, Andrea; Bianchi, Matteo; Campi, Riccardo; De Santis, Antonio; Brambilla, Marco
2025-01-01

Abstract

The constant development of Artificial Intelligence models and tools has made it necessary to validate their effectiveness and perceived trustworthiness as critical characteristics for their application to everyday-life scenarios. While most validation approaches focus on computing metrics and performing objective observations of a model's behavior, bidirectional human-in-the-loop approaches are rising in relevance as tools for the subjective assessment of model quality and stability. Indeed, they aim to evaluate a model from the final user's perspective to tackle potential trust or understandability issues that may otherwise hinder its significance. Crowdsourcing plays a fundamental role in human involvement, allowing researchers and developers to engage a broad and heterogeneous crowd in such assessments, as well as in other aspects of the explainability cycle. While crowd involvement provides substantial benefits, improving user knowledge of AI-related concepts to achieve comprehensive and accurate validation or data collection remains challenging. Thus, human-centered applications in this context are frequently framed as gamified experiences, striving to lower the knowledge barrier while making the experience more engaging and enjoyable. This article describes the fundamental gamification design approaches, their principles, their application to the field of Artificial Intelligence, and some of their successful applications, primarily in the fields of Explainable AI and AI model transparency.
Bi-directionality in Human-AI Collaborative Systems
ISBN: 9780443405532
Files in this record:
File: LAWLESS06.pdf (Publisher's version)
Access: restricted
Size: 628.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1292953
Citations
  • Scopus: 0