Comparison of Supervised and Unsupervised Approaches for the Generation of Synthetic CT from Cone-Beam CT

Rossi M.; Cerveri P.
2021-01-01

Abstract

Due to major imaging artifacts and uncalibrated Hounsfield units (HU), cone-beam computed tomography (CBCT) cannot readily be used for diagnostic and therapy-planning purposes. This study addresses image-to-image translation by convolutional neural networks (CNNs) to convert CBCT scans into CT-like scans, comparing supervised and unsupervised training techniques on a publicly available pelvic CT/CBCT dataset. Quantitative results favored the supervised over the unsupervised approach, showing larger improvements in HU accuracy (62% vs. 50%), structural similarity index (2.5% vs. 1.1%), and peak signal-to-noise ratio (15% vs. 8%). Conversely, qualitative results showed more anatomical artifacts in the synthetic CT generated by the supervised technique. This was attributed to the higher sensitivity of the supervised training technique to the pixel-wise correspondence embedded in its loss function. The unsupervised technique does not require such correspondence and mitigates this drawback by combining adversarial, cycle-consistency, and identity loss functions. Overall, the paper offers two main contributions: (a) demonstrating the feasibility of CNNs to generate accurate synthetic CT from CBCT images, which is fast and easy to use compared with traditional techniques applied in the clinic; (b) proposing guidelines to drive the selection of the more suitable training technique, which can be extended to more general image-to-image translation tasks.
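
The comparison hinges on the loss functions driving the two training regimes: a pixel-wise loss for the supervised (U-Net) setting versus a combination of adversarial, cycle-consistency, and identity losses for the unsupervised (CycleGAN) setting. The following is a minimal PyTorch sketch of the two objectives, not the paper's implementation; the generator/discriminator names and the weights lambda_cyc and lambda_id are illustrative assumptions.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()      # pixel-wise loss used in the supervised setting
adv = nn.MSELoss()    # least-squares (LSGAN-style) adversarial loss

def supervised_loss(G_cbct2ct, cbct, ct_paired):
    """Supervised regime: requires pixel-wise CBCT/CT correspondence."""
    return l1(G_cbct2ct(cbct), ct_paired)

def unsupervised_generator_loss(G_cbct2ct, G_ct2cbct, D_ct, D_cbct,
                                cbct, ct, lambda_cyc=10.0, lambda_id=5.0):
    """Unpaired CycleGAN-style objective for the two generators."""
    fake_ct, fake_cbct = G_cbct2ct(cbct), G_ct2cbct(ct)

    # Adversarial terms: each generator tries to fool its discriminator.
    pred_ct, pred_cbct = D_ct(fake_ct), D_cbct(fake_cbct)
    loss_adv = adv(pred_ct, torch.ones_like(pred_ct)) \
             + adv(pred_cbct, torch.ones_like(pred_cbct))

    # Cycle consistency: translating back should recover the original image.
    loss_cyc = l1(G_ct2cbct(fake_ct), cbct) + l1(G_cbct2ct(fake_cbct), ct)

    # Identity: a generator fed an image of its target domain should leave it unchanged.
    loss_id = l1(G_cbct2ct(ct), ct) + l1(G_ct2cbct(cbct), cbct)

    return loss_adv + lambda_cyc * loss_cyc + lambda_id * loss_id
```
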
Keywords

CBCT
CT
CycleGAN
Image-to-image translation
Supervised training
Synthetic images
U-Net
Unsupervised training
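
The quantitative comparison reported in the abstract relies on standard image-quality metrics: mean absolute error in Hounsfield units (HU accuracy), the structural similarity index (SSIM), and the peak signal-to-noise ratio (PSNR). Below is a minimal sketch of how such metrics are typically computed with NumPy and scikit-image; the function name and the HU range used for normalization are assumptions, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_slice(sct_hu: np.ndarray, ct_hu: np.ndarray) -> dict:
    """Compare a synthetic CT slice against the reference CT, both in HU.

    The HU range [-1000, 2000] used as data_range is an illustrative choice.
    """
    data_range = 2000.0 - (-1000.0)
    return {
        "mae_hu": float(np.mean(np.abs(sct_hu - ct_hu))),  # HU accuracy
        "ssim": float(structural_similarity(ct_hu, sct_hu,
                                            data_range=data_range)),
        "psnr": float(peak_signal_noise_ratio(ct_hu, sct_hu,
                                              data_range=data_range)),
    }
```
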
Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1184081

Citations
  • PMC: 3
  • Scopus: 12
  • Web of Science (ISI): 9