Chest X-Rays Image Classification from $\beta$-Variational Autoencoders Latent Features
Crespi L.; Loiacono D.
2021-01-01
Abstract
Chest X-Ray (CXR) is one of the most common diagnostic imaging modalities used in everyday clinical practice. In this work, we present a Deep Learning (DL) approach to extract from CXR images a set of features that captures as much information as possible. In particular, our aim is to extract highly general features that can be successfully used for a large variety of real-world classification tasks. Accordingly, we trained several $\beta$-Variational Autoencoder ($\beta$-VAE) models on CheXpert, a popular dataset consisting of a very broad and publicly available collection of labeled CXR images. Through these models, high-level features were extracted and used to train Machine Learning (ML) classifiers (Random Forest, K-Nearest Neighbours, Extremely Randomised Trees, Gradient Boosting) to classify CXR images from the information extracted by the $\beta$-VAEs; finally, the trained classifiers were combined in ensembles to improve performance without the need for further training or model engineering. Although, as expected, our approach does not achieve the same performance as state-of-the-art models specifically devised for this classification task, our results are promising and show the viability of using the high-level features extracted by the Autoencoders for classification tasks.
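For reference, the standard $\beta$-VAE training objective weights the KL regularisation term by $\beta$, which is what encourages the more general, disentangled latent features the abstract refers to:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$$

Below is a minimal sketch of the downstream classification pipeline described in the abstract, not the authors' actual code. It assumes a trained $\beta$-VAE encoder `encoder` returning the latent mean of an image, plus preprocessed arrays `X_train`, `X_test` with labels `y_train`, `y_test` (all hypothetical names), and uses scikit-learn implementations of the four classifier families with a soft-voting ensemble, which may differ from the ensembling strategy used in the paper.

```python
# Sketch: classify CXR images from beta-VAE latent features, then ensemble.
# `encoder`, `X_train`, `X_test`, `y_train`, `y_test` are assumed to exist.
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.neighbors import KNeighborsClassifier

def extract_latents(encoder, images):
    """Map each image to its beta-VAE latent mean (the extracted features)."""
    return np.stack([encoder(img) for img in images])

# Latent features for the training and test sets.
Z_train = extract_latents(encoder, X_train)
Z_test = extract_latents(encoder, X_test)

# The four classifier families mentioned in the abstract (hyperparameters are
# illustrative, not the paper's).
classifiers = [
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("knn", KNeighborsClassifier(n_neighbors=15)),
    ("et", ExtraTreesClassifier(n_estimators=200)),
    ("gb", GradientBoostingClassifier()),
]

# Soft-voting ensemble over the base classifiers: no further representation
# learning or model engineering is required on top of the latent features.
ensemble = VotingClassifier(estimators=classifiers, voting="soft")
ensemble.fit(Z_train, y_train)
print("Ensemble accuracy:", ensemble.score(Z_test, y_test))
```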
File | Access | Version | Size | Format
---|---|---|---|---
Chest_X-Rays_Image_Classification_from_beta-_Variational_Autoencoders_Latent_Features.pdf | Restricted access | Publisher's version | 2.07 MB | Adobe PDF
11311-1204513_Loiacono.pdf | Open access | Post-print (draft or Author's Accepted Manuscript, AAM) | 455.16 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.