
If Not Here, There. Explaining Machine Learning Models for Fault Localization in Optical Networks

Karandin O.; Ayoub O.; Musumeci F.; Tornatore M.
2022-01-01

Abstract

Machine Learning (ML) is being widely investigated to automate safety-critical tasks in optical-network management. However, in some cases, decisions taken by ML models are hard to interpret, justify and trust, and this lack of explainability complicates the adoption of ML in network management. The rising field of Explainable Artificial Intelligence (XAI) tries to uncover the reasoning behind the decision-making of complex ML models, offering end users a stronger sense of trust in ML-automated decisions. In this paper we showcase an application of XAI, focusing on fault localization, and analyze the reasoning of an ML model, trained on real Optical Signal-to-Noise Ratio measurements, in two scenarios. In the first scenario we use measurements from a single monitor at the receiver, while in the second we also use measurements from multiple monitors along the path. With XAI, we show that additional monitors allow network operators to better understand the model's behavior, making the ML model more trustworthy and, hence, more practically adoptable.
Published in: 2022 International Conference on Optical Network Design and Modeling, ONDM 2022
ISBN: 978-3-903176-44-7
Keywords: fault localization; ML; network management; Optical networks; SHAP; XAI
Files in this item:
1570790946 paper.pdf — Pre-Print (Pre-Refereeing), open access, Adobe PDF, 773.97 kB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1227668
Citations:
  • PMC: n/a
  • Scopus: 8
  • Web of Science: 4