
<title>Abstract</title>

Background: This study evaluates StarGAN, a deep learning model designed to generate synthetic CT (sCT) images from both MRI and CBCT data with a single model. The goal is to provide accurate Hounsfield Unit (HU) data for dose calculation and to compare StarGAN's performance against CycleGAN.

Methods: StarGAN and CycleGAN were trained on a pelvic cancer dataset consisting of 23 training, 5 validation, and 5 testing cases. The evaluation involved qualitative and quantitative analyses, focusing on synthetic image quality and dose distribution calculations.

Results: For sCT generated from CBCT, StarGAN demonstrated superior anatomical preservation in qualitative evaluations. Quantitatively, CycleGAN exhibited lower mean absolute error (MAE) values for body (42.77 ± 4.28 HU), soft tissue (36.97 ± 3.87 HU), and bone (138.17 ± 20.29 HU), whereas StarGAN presented higher MAE values (50.81 ± 5.16 HU, 44.57 ± 5.14 HU, and 153.36 ± 27.67 HU, respectively). Dosimetric evaluations revealed a mean dose difference (DD) within 2% for the planning target volume (PTV) and body, with a gamma passing rate (GPR) > 90% under 2%/2 mm criteria. For sCT generated from MRI, qualitative evaluation also favored StarGAN's anatomical preservation. CycleGAN yielded lower MAEs for body, soft tissue, and bone (79.77 ± 13.96 HU, 70.14 ± 16.26 HU, and 253.62 ± 30.85 HU, respectively), whereas StarGAN yielded higher MAEs (94.65 ± 7.41 HU, 80.75 ± 9.60 HU, and 353.58 ± 34.85 HU). Both models achieved a mean DD within 2% in the PTV and body, and a GPR > 90%.

Conclusion: While CycleGAN exhibited superior quantitative metrics, StarGAN better preserved anatomy, highlighting its potential for sCT generation in radiotherapy.
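The region-wise MAE metric reported above can be illustrated with a minimal sketch. This assumes rigidly registered reference-CT and sCT volumes in HU, and uses simple intensity thresholds to build the body/soft-tissue/bone masks purely for illustration; the paper's actual segmentation method is not specified here, so the masks, threshold, and toy data below are hypothetical.

```python
import numpy as np

def masked_mae(sct, ct, mask):
    """Mean absolute error (in HU) between sCT and reference CT inside a mask."""
    return float(np.abs(sct[mask] - ct[mask]).mean())

# Toy volumes standing in for registered CT/sCT pairs (hypothetical data).
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 100.0, size=(4, 64, 64))
sct = ct + rng.normal(0.0, 40.0, size=ct.shape)

# Illustrative masks: a real pipeline would use anatomical segmentation.
body = np.ones(ct.shape, dtype=bool)   # whole volume as "body"
bone = ct >= 150                       # assumed bone threshold in HU
soft = body & ~bone                    # remaining tissue as "soft tissue"

for name, m in [("body", body), ("soft tissue", soft), ("bone", bone)]:
    print(f"MAE ({name}): {masked_mae(sct, ct, m):.2f} HU")
```

The per-region breakdown mirrors the abstract's reporting: one MAE for the full body contour and separate values for soft tissue and bone, since bone HU errors tend to dominate and are worth tracking separately.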
DOI: https://doi.org/10.21203/rs.3.rs-5079041/v1
Publish Year: 2024