Medical image generation using deep generative models has emerged as a promising tool for augmenting limited clinical datasets, enabling synthetic data generation for training and improving model generalization. However, adversarial robustness remains a major challenge, especially under noisy or domain-shifted conditions where models may produce artifacts that compromise diagnostic reliability. This study conducts a comprehensive comparison between two state-of-the-art frameworks, StyleGAN2 and Denoising Diffusion Probabilistic Models (DDPMs), across noise, adversarial attacks, and modality shifts. Our experiments span over 120,000 medical images covering chest X-rays and MRI scans. We observe that diffusion models exhibit 22% higher structural fidelity and a 19% lower Fréchet Inception Distance (FID) under Gaussian and adversarial perturbations. Additional robustness metrics show diffusion models outperform StyleGAN2 in both clean and noisy conditions, making them preferable for safety-critical applications.
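For readers unfamiliar with the evaluation protocol, the sketch below illustrates one common way to measure FID under Gaussian perturbation: noise is added to the generated images before feature extraction, and FID is computed from precomputed Inception-v3 feature statistics. The function names, the noise level `sigma`, and the use of raw feature arrays are illustrative assumptions, not the exact pipeline used in this study.

```python
import numpy as np
from scipy import linalg


def add_gaussian_noise(images, sigma=0.1, rng=None):
    """Additive Gaussian perturbation of images (float arrays scaled to [0, 1]).

    sigma is an assumed noise level for illustration only.
    """
    rng = rng or np.random.default_rng(0)
    noisy = images + rng.normal(scale=sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)


def fid_from_features(feats_real, feats_gen):
    """Frechet Inception Distance from two feature sets of shape (N, D).

    FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2}),
    where mu and C are the mean and covariance of Inception features.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the covariance product; drop tiny imaginary parts
    # introduced by floating-point error.
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))


# Example usage (hypothetical shapes): feats_* are Inception-v3 embeddings
# of real scans and of noise-perturbed generated scans.
# feats_real = extract_features(real_images)          # (N, 2048), assumed helper
# feats_gen = extract_features(add_gaussian_noise(generated_images))
# print(fid_from_features(feats_real, feats_gen))
```

Lower FID indicates that the perturbed generated images remain statistically closer to the real distribution, which is the sense in which the abstract reports a 19% advantage for DDPMs.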
IRE Journals:
Rajat Yadav, Arihand Sinha, Saumya Tripathi, Arvind Goel, "Adversarially Robust Image Generation for Medical Imaging: Evaluating StyleGAN2 vs Diffusion Models under Noise and Domain Shift," Iconic Research And Engineering Journals, vol. 7, no. 1, pp. 758-762, 2023. https://doi.org/10.64388/IREV7I1-1711963