Brain MRI Tumor Segmentation with Adversarial Networks
Edoardo Giacomello, Daniele Loiacono, Luca Mainardi
2020-01-01
Abstract
Deep Learning is a promising approach to automate or simplify several tasks in the healthcare domain. In this work, we introduce SegAN-CAT, an end-to-end approach to brain tumor segmentation in Magnetic Resonance Images (MRI) based on Adversarial Networks. In particular, we extend SegAN, which was successfully applied to the same task in previous work, in two respects: (i) we used a different model input and (ii) we employed a modified loss function to train the model. We tested our approach on two large datasets made available by the Brain Tumor Image Segmentation Benchmark (BraTS). First, we trained and tested several segmentation models assuming the availability of all the major MRI contrast modalities, i.e., T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and T2-FLAIR. However, as these four modalities are not always all available for each patient, we also trained and tested four segmentation models that take as input MRIs acquired with a single contrast modality. Finally, we proposed to apply transfer learning across different contrast modalities to improve the performance of these single-modality models. Our results are promising and show not only that SegAN-CAT outperforms SegAN when all four modalities are available, but also that transfer learning can lead to better performance when only a single modality is available.
File | Description | Size | Format
---|---|---|---
1910.02717.pdf | Pre-print (open access) | 469.77 kB | Adobe PDF
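As a rough illustration of the setup described in the abstract, the sketch below (Python/PyTorch) stacks the four MRI contrast modalities as input channels of a small segmentor and trains it with a soft dice term plus an adversarial term from a critic that scores concatenated (image, mask) pairs. All layer sizes, the critic design, and the loss weighting are illustrative assumptions, not the architecture or loss actually used in the paper.

```python
# Minimal sketch only: toy networks standing in for a SegAN-style segmentor/critic.
import torch
import torch.nn as nn

class Segmentor(nn.Module):
    # Small stand-in for the segmentor; input has one channel per MRI modality.
    def __init__(self, in_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one-channel tumor probability map
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    # Scores an (MRI, mask) pair; the mask is concatenated to the image channels,
    # which is the kind of input arrangement the "CAT" variant name suggests.
    def __init__(self, in_channels=4 + 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def dice_loss(pred, target, eps=1e-6):
    # Soft dice loss averaged over the batch.
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

# Toy forward/backward pass on random tensors shaped like 2D MRI slices:
# batch of 2, four modalities (T1, T1c, T2, FLAIR), 64x64 pixels.
mri = torch.randn(2, 4, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()

seg, critic = Segmentor(), Critic()
pred = seg(mri)
# Segmentor objective: dice term plus an adversarial term (the 0.1 weight is a guess).
loss_s = dice_loss(pred, gt) + 0.1 * (critic(mri, gt) - critic(mri, pred)).abs().mean()
loss_s.backward()
print(float(loss_s))
```

Under the same assumptions, the single-modality experiments would correspond to `in_channels=1`, and transfer learning across modalities to initializing one single-modality model with the weights of a model trained on another modality before fine-tuning.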