Explanation-driven Self-adaptation using Model-agnostic Interpretable Machine Learning

Camilli, M.
2024-01-01

Abstract

Self-adaptive systems increasingly rely on black-box predictive models (e.g., neural networks) to make decisions and steer adaptations. The lack of transparency of these models makes it hard to explain adaptation decisions and their possible effects on the surrounding environment. Furthermore, adaptation decisions in this context are typically the outcome of expensive optimization processes. The complexity arises from the inability to directly observe or comprehend the internal mechanisms of the black-box predictive models, which forces iterative methods to explore a possibly large search space and optimize with respect to multiple goals. Here, balancing the trade-off between effectiveness and cost becomes a crucial challenge. In this paper, we propose explanation-driven self-adaptation, a novel approach that embeds model-agnostic interpretable machine learning techniques into the feedback loop to enhance the transparency of the predictive models and gain insights that drive adaptation decisions effectively while significantly reducing the cost of planning them. Our empirical evaluation demonstrates the cost-effectiveness of our approach on two evaluation subjects from the robotics domain.
2024
Proceedings - 2024 IEEE/ACM 19th Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS 2024

Keywords: explainable self-adaptation; interpretable machine learning; model-agnostic explanations
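
As a rough illustration of the idea summarized in the abstract, the Python sketch below shows how a model-agnostic explanation technique can make a black-box predictor more transparent to the planner. It uses permutation feature importance, a standard model-agnostic method chosen here purely for illustration (the paper may employ different techniques), to rank how strongly each adaptation parameter influences the predictions of a black-box quality model; the planner can then focus its expensive multi-objective search on the few influential parameters. The predictor predict_quality, the parameter names, and the weights are hypothetical placeholders, not the authors' implementation.

    import numpy as np

    # Hypothetical black-box predictor standing in for a trained neural network:
    # it maps a candidate adaptation configuration to a predicted quality score.
    # The weights are arbitrary; log_level is deliberately ignored, so a sound
    # explanation technique should rank it last.
    def predict_quality(X):
        speed, sensor_rate, replanning_period, log_level = X.T
        return 1.0 / (1.0 + np.exp(-(1.5 * speed - 0.8 * replanning_period
                                     + 0.1 * sensor_rate)))

    def permutation_importance(predict, X, n_repeats=30, seed=0):
        # Model-agnostic importance: mean absolute change in the predictions
        # when a single input dimension is shuffled, severing its link to the
        # output while leaving the other dimensions intact.
        rng = np.random.default_rng(seed)
        baseline = predict(X)
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            deltas = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])  # permute feature j only, in place
                deltas.append(np.mean(np.abs(predict(Xp) - baseline)))
            importances[j] = np.mean(deltas)
        return importances

    if __name__ == "__main__":
        names = ["speed", "sensor_rate", "replanning_period", "log_level"]
        rng = np.random.default_rng(42)
        X = rng.uniform(0.0, 1.0, size=(500, len(names)))  # sampled configurations

        scores = permutation_importance(predict_quality, X)
        ranking = np.argsort(scores)[::-1]
        for j in ranking:
            print(f"{names[j]:>18}: {scores[j]:.4f}")

        # The planner can now restrict its expensive optimization to the
        # top-k most influential parameters, shrinking the search space.
        print("Plan over:", [names[j] for j in ranking[:2]])

Running the sketch ranks speed and replanning_period above the negligible parameters, mirroring how explanation-derived rankings can prune the dimensions an optimizer must explore during planning.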

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1268383
Citations (Scopus): 0