Foundational approaches to post-hoc explainability for image classification

De Santis, Antonio; Campi, Riccardo; Bianchi, Matteo; Tocchetti, Andrea; Brambilla, Marco
2025-01-01

Abstract

The rise of Deep Learning and Convolutional Neural Networks has revolutionized Image Classification, leading to significant advancements in accuracy and efficiency. Despite these advances, such sophisticated models function as black boxes, making it difficult to understand how their decisions are made. This lack of transparency raises concerns about their reliability, ethical decision-making, and trustworthiness, especially in critical domains such as healthcare or autonomous driving. To address these issues, Explainable Artificial Intelligence (XAI) has emerged to elucidate AI decision processes. This chapter explores the advancements in XAI within the field of Image Classification, presenting a comprehensive overview of current methods and tools available to improve model transparency and trust. By examining these techniques, the chapter aims to provide practitioners with actionable insights into understanding and mitigating bias in AI systems, ultimately promoting fairness and accountability in AI-driven decisions.
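To make the kind of post-hoc techniques surveyed in the chapter concrete, the sketch below implements a simple occlusion-sensitivity map for an image classifier: a patch is slid over the input and the drop in the target-class probability is recorded as a coarse importance map. This is an illustrative example only; the ResNet-18 model, the 224x224 input, and the patch/stride values are assumptions and are not taken from the chapter itself.

```python
# Minimal occlusion-sensitivity sketch (illustrative assumptions: ResNet-18,
# 224x224 input, 16-pixel patch and stride). Not the chapter's own method.
import torch
import torchvision.models as models


def occlusion_saliency(model, image, target_class, patch=16, stride=16):
    """Slide a grey patch over the image and record how much the target-class
    probability drops; larger drops indicate regions the model relied on."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    _, H, W = image.shape
    heatmap = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = 0.5  # neutral grey patch
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heatmap[i, j] = base - p  # probability drop caused by occlusion
    return heatmap


if __name__ == "__main__":
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    image = torch.rand(3, 224, 224)  # stand-in for a preprocessed input image
    saliency = occlusion_saliency(model, image, target_class=0)
    print(saliency.shape)  # coarse per-region importance map
```

Because it only queries the model's outputs, this kind of occlusion probe is model-agnostic, which is why such perturbation-based methods are often presented alongside gradient-based ones (e.g., saliency maps or Grad-CAM) in overviews of post-hoc explainability.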
Year: 2025
In: Bi-directionality in Human-AI Collaborative Systems
ISBN: 9780443405532
File in this product: LAWLESS02.pdf (Publisher's version, Adobe PDF, 3.2 MB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1292952