Advancing Manufacturing with Interpretable Machine Learning: LIME-Driven Insights from the SECOM Dataset

Presciuttini A.; Cantini A.; Portioli-Staudacher A.
2024-01-01

Abstract

This study introduces an interpretable machine learning (ML) framework tailored to the semiconductor manufacturing industry, with a strong focus on model transparency and understandability. In a domain where manufacturing efficiency and product quality are of utmost importance, our research presents bespoke ML models designed to predict product quality with high accuracy while elucidating the factors driving those predictions. The pervasive challenge of model opacity prevents the manufacturing industry from fully leveraging ML advancements for operational excellence; our framework addresses this gap by enhancing transparency without sacrificing predictive precision. Central to our approach is LIME (Local Interpretable Model-agnostic Explanations), which demystifies the predictive mechanisms of ML models. By elucidating the underlying factors influencing product quality predictions, our methodology equips operations managers with actionable insights for preemptive quality control and process optimization. Using the UCI SECOM dataset, this paper demonstrates how interpretability in ML transcends conventional analytics, facilitating informed decision-making and fostering a culture of operational excellence.
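As a rough sketch of what the described pipeline could look like, the following Python code trains a classifier on SECOM-style sensor data and explains a single pass/fail prediction with LIME's standard tabular explainer. The file names, imputation strategy, and choice of random forest are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative sketch: classify SECOM lots and explain one prediction with LIME.
# Paths, preprocessing, and model choice are assumptions, not the paper's pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# UCI SECOM ships as two whitespace-separated files: 590 sensor readings per
# lot, and a label per lot (-1 = pass, 1 = fail). File names are hypothetical.
X = pd.read_csv("secom.data", sep=r"\s+", header=None)
y = pd.read_csv("secom_labels.data", sep=r"\s+", header=None)[0]

# SECOM has many missing sensor readings; impute with column means.
X = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(X))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# LIME perturbs a single instance locally and fits a sparse linear surrogate,
# ranking the sensor features that drove this specific prediction.
explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=[f"sensor_{i}" for i in range(X.shape[1])],
    class_names=["pass", "fail"],  # -1 = pass, 1 = fail
    mode="classification",
)
exp = explainer.explain_instance(
    X_test.values[0], clf.predict_proba, num_features=10
)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Output like `sensor_59 > 12.4: +0.081` tells an operations manager which sensor thresholds pushed this lot toward a fail prediction, which is the kind of actionable, per-prediction insight the abstract describes.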
Year: 2024
Series: IFIP Advances in Information and Communication Technology
ISBN: 9783031716287; 9783031716294
Keywords: Manufacturing Systems; Artificial Intelligence; Machine Learning Interpretability
Files in this record:
LIME_APMS.pdf (Publisher's version, open access; Adobe PDF, 1.42 MB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1278986
Citations (Scopus): 0