On Using Explainable Artificial Intelligence for Failure Identification in Microwave Networks

Ayoub, Omran; Musumeci, Francesco; Tornatore, Massimo
2022-01-01

Abstract

Artificial Intelligence (AI) has demonstrated superhuman capabilities in solving a significant number of tasks, leading to widespread industrial adoption. For in-field network-management applications, however, AI-based solutions have often raised skepticism among practitioners, as their internal reasoning is not exposed and their decisions cannot be easily explained, preventing humans from trusting and even understanding them. To address this shortcoming, a new area of AI, called Explainable AI (XAI), is attracting the attention of both academic and industrial researchers. XAI is concerned with explaining and interpreting the internal reasoning and the outcomes of AI-based models to achieve more trustworthy and practical deployments. In this work, we investigate the application of XAI to automated failure-cause identification in microwave networks. We first show how existing supervised ML algorithms can be used to solve the problem of failure-cause identification, achieving an accuracy of around 94%. Then, we explore the application of well-known XAI frameworks, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to address important practical questions arising during the actual deployment of automated failure-cause identification in microwave networks. Answering these questions allows for a deeper understanding of the behavior of the adopted ML algorithm. Specifically, we exploit XAI to understand the main reasons behind the ML algorithm's decisions and to explain why the model makes identification errors on specific instances.
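
The abstract does not include implementation details; the following is a minimal, hypothetical sketch of how a supervised failure-cause classifier could be paired with SHAP and LIME explanations of the kind described. The feature names, failure-cause classes, synthetic data, and choice of a random forest are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch (assumed setup, not the paper's implementation):
# train a classifier on synthetic "failure" data, then explain its
# predictions globally/locally with SHAP and LIME.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical per-link alarm/performance features and failure-cause labels
feature_names = ["deep_fading_min", "ses_count", "rx_power_drop", "unavailable_s"]
class_names = ["deep_fading", "extra_attenuation", "hardware_failure"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, len(feature_names)))          # synthetic features
y = rng.integers(0, len(class_names), size=600)          # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# SHAP: per-class feature attributions for the test set
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te)

# LIME: local explanation of a single (possibly misclassified) instance
lime_explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names, class_names=class_names,
    discretize_continuous=True)
exp = lime_explainer.explain_instance(X_te[0], clf.predict_proba, num_features=4)
print(exp.as_list())

Inspecting the SHAP attributions and the LIME output for misclassified instances is one way to answer the "why did the model decide this?" questions the abstract refers to.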
Year: 2022
Conference: 25th Conference on Innovation in Clouds, Internet and Networks (ICIN 2022)
ISBN: 978-1-7281-8688-7
Files in this record:
File: Ayoub_ICIN_2022.pdf (open access)
Description: Ayoub_ICIN_2022
Type: Pre-print (pre-refereeing)
Size: 1.43 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1212982
Citations
  • PMC: not available
  • Scopus: 8
  • ISI: 2