An explainable intelligence fault diagnosis framework for rotating machinery
Yang, Daoguang; Karimi, Hamid Reza
2023-01-01
Abstract
Convolutional neural networks (CNNs) are considered black boxes because of their strong nonlinear fitting capability. In fault diagnosis of rotating machinery, a standard CNN may base its final decision on a mixture of significant and insignificant features; a trustworthy fault diagnosis model therefore requires controllable feature learning to identify fault types. This paper proposes an explainable intelligence fault diagnosis framework, easily adapted from a standard CNN, that recognizes fault signals from data obtained through the short-time Fourier transform. A post hoc explanation method is used to visualize the features the model learns from a signal. Experimental results show that the proposed framework achieves 100% testing accuracy, and the visualizations, together with the Average Drop and Average Increase metrics from a class activation mapping method, demonstrate its interpretability.

File | Size | Format
---|---|---
1-s2.0-S0925231223003806-main.pdf (open access, publisher's version) | 4.07 MB | Adobe PDF
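The pipeline described in the abstract, short-time Fourier transform preprocessing followed by a CAM faithfulness evaluation, can be sketched as below. This is a minimal illustration, not the paper's code: the sampling rate, STFT parameters, and the toy confidence values are assumptions, and the Average Drop / Average Increase functions follow the standard definitions used in the CAM literature.

```python
import numpy as np
from scipy import signal

# --- STFT preprocessing (hypothetical parameters) ---
fs = 12_000                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# toy vibration-like signal: one harmonic component plus noise
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# time-frequency map that would be fed to the CNN classifier
f, tau, Zxx = signal.stft(x, fs=fs, nperseg=256)
spectrogram = np.abs(Zxx)        # shape: (freq bins, time frames)

# --- CAM faithfulness metrics (standard definitions) ---
def average_drop(y_full, y_masked):
    """Average Drop (%): relative confidence lost when only the
    CAM-highlighted region of the input is kept. Lower is better."""
    return 100.0 * np.mean(np.maximum(0.0, y_full - y_masked) / y_full)

def average_increase(y_full, y_masked):
    """Average Increase (%): share of samples whose confidence
    rises on the CAM-masked input. Higher is better."""
    return 100.0 * np.mean(y_masked > y_full)

y_full = np.array([0.9, 0.8, 0.95])     # toy confidences on full spectrograms
y_masked = np.array([0.85, 0.9, 0.95])  # toy confidences on masked spectrograms
print(average_drop(y_full, y_masked), average_increase(y_full, y_masked))
```

With `nperseg=256` the spectrogram has 129 frequency bins; the metric functions operate on per-sample softmax confidences, so they apply unchanged to any classifier.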
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.