Explainable Intelligent Fault Diagnosis for Nonlinear Dynamic Systems: From Unsupervised to Supervised Learning

Alippi, Cesare;
2022-01-01

Abstract

The increased complexity and intelligence of automation systems require the development of intelligent fault diagnosis (IFD) methodologies. Relying on the concept of a suspected space, this study develops explainable data-driven IFD approaches for nonlinear dynamic systems. More specifically, we parameterize nonlinear systems through a generalized kernel representation for system modeling and the associated fault diagnosis. An important result obtained is a unified form of kernel representations, applicable to both unsupervised and supervised learning. More importantly, through a rigorous theoretical analysis, we discover the existence of a bridge (i.e., a bijective mapping) between certain supervised and unsupervised learning-based entities. Notably, with the use of this bridge, the designed IFD approaches achieve identical performance. To better illustrate the results obtained, both unsupervised and supervised neural networks are chosen as the learning tools to identify the generalized kernel representations and design the IFD schemes; an invertible neural network is then employed to build the bridge between them. This is a perspective article, whose contribution lies in proposing and formalizing the fundamental concepts for explainable intelligent learning methods, contributing to system modeling and data-driven IFD designs for nonlinear dynamic systems.
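The abstract's "bridge" is a bijective mapping realized by an invertible neural network. The following is a minimal sketch of one common way to build such a network, a stack of affine coupling layers, which is invertible by construction. This is only an illustration of the idea, not the paper's actual construction; the layer sizes, the two-layer stack, and the random weights are arbitrary assumptions made for the example.

```python
# Illustrative sketch (NOT the paper's method): an invertible network
# built from affine coupling layers, showing how a learned mapping can
# be exactly bijective, the property the abstract's "bridge" relies on.
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """Split the input in half; transform one half conditioned on the other.

    Forward:  y1 = x1,  y2 = x2 * exp(s(x1)) + t(x1)
    Inverse:  x1 = y1,  x2 = (y2 - t(y1)) * exp(-s(y1))
    Both directions are exact, so the layer is bijective by construction.
    """
    def __init__(self, dim):
        h = dim // 2
        self.Ws = 0.1 * rng.standard_normal((h, h))  # log-scale weights
        self.Wt = 0.1 * rng.standard_normal((h, h))  # translation weights

    def _s_t(self, x1):
        s = np.tanh(x1 @ self.Ws)   # bounded log-scale keeps exp() stable
        t = x1 @ self.Wt
        return s, t

    def forward(self, x):
        x1, x2 = np.split(x, 2, axis=-1)
        s, t = self._s_t(x1)
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, 2, axis=-1)
        s, t = self._s_t(y1)
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

class InvertibleNet:
    """Stack of coupling layers, flipping the halves between layers so
    every coordinate is eventually transformed."""
    def __init__(self, dim, n_layers=2):
        self.layers = [AffineCoupling(dim) for _ in range(n_layers)]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)[..., ::-1]   # flip halves between layers
        return x

    def inverse(self, y):
        for layer in reversed(self.layers):
            y = layer.inverse(y[..., ::-1])   # un-flip, then invert layer
        return y

net = InvertibleNet(dim=4)
x = rng.standard_normal((3, 4))
x_rec = net.inverse(net.forward(x))
print(np.allclose(x, x_rec))  # exact round-trip: the map is bijective
```

Because the inverse is available in closed form, any quantity computed on one side of such a bridge (e.g., a supervised residual) can be mapped losslessly to the other side, which is what makes "same performance on both sides" plausible in principle.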
Keywords: Intelligent fault diagnosis (IFD); nonlinear dynamic systems; neural networks; supervised learning; unsupervised learning; kernel; bridge; measurement; fault diagnosis
Files in this record:
Explainable_Intelligent_Fault_Diagnosis_for_Nonlinear_Dynamic_Systems_From_Unsupervised_to_Supervised_Learning.pdf (open access, Adobe PDF, 1.21 MB)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1233689
Citations
  • PMC: 1
  • Scopus: 71
  • Web of Science (ISI): 61