Explainable AI Points to White Matter Hyperintensities for Alzheimer's Disease Identification: a Preliminary Study

Bordin, Valentina; Coluzzi, Davide; Baselli, Giuseppe
2022-01-01

Abstract

Deep Learning (DL) approaches are powerful tools for a great variety of classification tasks. However, their acceptance and trust in clinical frameworks remain limited due to their typical "black box" nature: their architecture is well known, but the processes employed in classification are often inaccessible to humans. With this work, we explored the problem of "Explainable AI" (XAI) in Alzheimer's disease (AD) classification tasks. Data from a neuroimaging cohort (n = 251 from OASIS-3) of early-stage AD dementia patients and healthy controls (HC) were analysed. The MR scans were initially fed to a pre-trained DL model, which achieved good performance on the test set (AUC: 0.82, TPR: 0.78, TNR: 0.81). Results were then investigated by means of an XAI approach (the Occlusion Sensitivity method) that provided measures of relevance (RV) as outcome. We compared RV values obtained within healthy tissues with those underlying white matter hyperintensity (WMH) lesions. The analysis was conducted on four groups of data, obtained by stratifying correctly and incorrectly classified images according to the health condition of participants (AD/HC). Results highlighted that the DL model found it favourable to leverage lesioned brain areas for AD identification. A statistically significant difference between WMH and healthy tissue contributions was indeed observed for AD recognition, unlike the HC case (p = 0.27). Clinical Relevance - This study, though preliminary, suggests that DL models might be trained to use known clinical information, and it reinforces the role of WMHs as a neuroimaging biomarker for AD dementia. These findings have significant clinical relevance, as they lay the groundwork for a progressive increase in the level of trust placed in DL approaches.
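As an illustration of the approach described in the abstract, the sketch below is a minimal, self-contained take on Occlusion Sensitivity and on the WMH-versus-healthy-tissue relevance comparison. It is not the authors' implementation: the 2D-slice setting, the patch size, stride, occlusion fill value, the stand-in scoring function, and the choice of a Mann-Whitney U test are all assumptions made for illustration.

# Minimal sketch of the Occlusion Sensitivity idea (assumptions: 2D
# slices, a predict_fn returning one AD score per batch item, an 8x8
# patch with stride 8, zero-fill occlusion, and a Mann-Whitney U test
# for the RV comparison; none of these details are given in the record).
import numpy as np
from scipy.stats import mannwhitneyu

def occlusion_sensitivity(predict_fn, image, patch=8, stride=8, fill=0.0):
    # Relevance (RV) map: drop in the model's class score when a patch
    # of the slice is replaced by a constant value.
    base = predict_fn(image[None])[0]          # score on the intact slice
    rv = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            rv[y:y + patch, x:x + patch] = base - predict_fn(occluded[None])[0]
    return rv

def compare_wmh_vs_healthy(rv, wmh_mask, brain_mask):
    # Compare RV inside WMH lesions against RV in the remaining
    # (healthy) brain tissue; masks are boolean arrays of image shape.
    wmh_rv = rv[wmh_mask & brain_mask]
    healthy_rv = rv[brain_mask & ~wmh_mask]
    return mannwhitneyu(wmh_rv, healthy_rv, alternative="two-sided")

# Toy usage with a stand-in "model" (mean slice intensity as the score).
img = np.random.rand(64, 64).astype(np.float32)
rv_map = occlusion_sensitivity(lambda batch: batch.mean(axis=(1, 2)), img)

In the paper's setting, systematically higher RV inside the WMH mask than in healthy tissue for correctly classified AD scans would correspond to the reported significant difference, while comparable RV in both compartments would correspond to the HC result (p = 0.27).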
2022
IEEE EMBC 2022
978-1-7281-2782-8
Brain
Humans
Magnetic Resonance Imaging
Neuroimaging
Alzheimer Disease
White Matter
Files in this record:
Bordin Coluzzi EMBC 2022 Explainable_AI_Points_to_White_Matter_Hyperintensities_for_Alzheimers_Disease_Identification_a_Preliminary_Study.pdf (Adobe PDF, 1.22 MB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1224887
Citations:
  • Scopus: 5