CycleSAR: SAR image despeckling as unpaired image-to-image translation

Francesco Lattari; Vincenzo Santomarco; Riccardo Santambrogio; Matteo Matteucci
2023-01-01

Abstract

Synthetic Aperture Radar (SAR) is a remote sensing technology that offers a unique perspective on the Earth's surface through microwave imaging, providing valuable insights into many aspects of the environment. However, SAR images are affected by the speckle phenomenon, which acts as noise, hinders accurate interpretation of the scene, and poses a major challenge for SAR image analysis and understanding. Deep learning has emerged as a powerful solution for despeckling SAR images, but acquiring large amounts of labeled training data is a significant obstacle, since obtaining ground truth in the SAR domain is not feasible. The proposed method overcomes this limitation with a novel unsupervised approach to single-look SAR image despeckling. Our method, CycleSAR, leverages cycle-consistent generative adversarial networks (CycleGANs) to formulate despeckling as an unpaired image-to-image translation problem, effectively bypassing the need for ground truth data. Moreover, our method not only reduces speckle in single-look SAR images, but also, by construction, simultaneously learns a generative model that produces realistic speckled realizations of multi-look SAR images. The addition of a conditional variational autoencoder (CVAE) further enhances the method, enabling the one-to-many generation of speckled images and improving overall despeckling performance. Experimental results demonstrate that CycleSAR provides high-quality despeckling and realistic speckle realizations. CycleSAR stands apart from existing state-of-the-art methods in that it relies neither on data simulation nor on assumptions about the speckle distribution.
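To make the unpaired formulation concrete, below is a minimal PyTorch-style sketch of a CycleGAN cycle-consistency objective between a speckled single-look domain and a cleaner multi-look domain. This is an illustrative assumption rather than the authors' implementation: the tiny generator and discriminator networks, the names G_clean, G_speckle, D_clean and D_speckle, and the weight lambda_cyc are placeholders, and the CVAE extension for one-to-many speckle generation is omitted.

# Minimal sketch: CycleGAN-style losses for unpaired SAR despeckling
# (placeholder architectures, not the CycleSAR networks).
import torch
import torch.nn as nn

def make_generator():
    # Tiny fully convolutional image-to-image network (placeholder).
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )

def make_discriminator():
    # Small PatchGAN-like discriminator (placeholder).
    return nn.Sequential(
        nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, padding=1),
    )

G_clean = make_generator()    # speckled (single-look) -> clean domain
G_speckle = make_generator()  # clean (multi-look) -> speckled domain
D_clean, D_speckle = make_discriminator(), make_discriminator()

l1, mse = nn.L1Loss(), nn.MSELoss()  # cycle loss and least-squares GAN loss

def generator_loss(x_speckled, y_clean, lambda_cyc=10.0):
    # No paired (speckled, clean) samples are required: x_speckled and y_clean
    # are drawn from two unrelated image collections.
    fake_clean = G_clean(x_speckled)      # despeckled estimate
    fake_speckled = G_speckle(y_clean)    # synthetic speckled realization

    # Adversarial terms: generators try to fool the domain discriminators.
    pred_c, pred_s = D_clean(fake_clean), D_speckle(fake_speckled)
    adv = mse(pred_c, torch.ones_like(pred_c)) + mse(pred_s, torch.ones_like(pred_s))

    # Cycle consistency: translating to the other domain and back should
    # recover the original image in each direction.
    cyc = l1(G_speckle(fake_clean), x_speckled) + l1(G_clean(fake_speckled), y_clean)
    return adv + lambda_cyc * cyc

# Toy usage with random stand-in batches of 1-channel intensity patches.
x = torch.rand(4, 1, 64, 64)  # unpaired single-look (speckled) patches
y = torch.rand(4, 1, 64, 64)  # unpaired multi-look (cleaner) patches
generator_loss(x, y).backward()

Training would alternate updates of the discriminators and the generators on unpaired batches drawn independently from the two domains, which is what removes the need for paired clean/speckled ground truth.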
Year: 2023
Published in: 2023 International Joint Conference on Neural Networks (IJCNN)
ISBN: 978-1-6654-8867-9
Keywords: Synthetic Aperture Radar, despeckling, deep learning, unpaired image-to-image translation, generative adversarial networks
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1249559