Foundational approaches to post-hoc explainability for image classification
De Santis, Antonio; Campi, Riccardo; Bianchi, Matteo; Tocchetti, Andrea; Brambilla, Marco
2025-01-01
Abstract
The rise of Deep Learning and Convolutional Neural Networks has revolutionized Image Classification, leading to significant advancements in accuracy and efficiency. Despite this, these sophisticated models function as black boxes, making it difficult to understand how decisions are made. This lack of transparency raises concerns about their reliability, ethical decision-making, and trustworthiness, especially in critical domains such as healthcare or autonomous driving. To address these issues, Explainable Artificial Intelligence (XAI) has emerged to elucidate AI decision processes. This chapter explores the advancements in XAI within the field of Image Classification, presenting a comprehensive overview of current methods and tools available to improve model transparency and trust. By examining these techniques, this chapter aims to provide practitioners with practical insights into understanding and mitigating bias in AI systems, ultimately promoting fairness and accountability in AI-driven decisions.
| File | Size | Format | Access |
|---|---|---|---|
| LAWLESS02.pdf (Publisher's version) | 3.2 MB | Adobe PDF | Restricted |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


