Transfer learning has become a cornerstone technique for adapting pre-trained models to diverse downstream tasks, significantly reducing data requirements. However, the extent to which adversarial robustness is retained or degraded during transfer learning remains unclear. This study systematically evaluates the adversarial vulnerabilities of transfer learning models across fine-tuning strategies, such as full fine-tuning, layer freezing, and feature extraction. Our experiments, conducted on benchmark datasets, reveal that adversarial pretraining improves robustness by up to 25% under Projected Gradient Descent (PGD) and Fast Gradient Sign Method (FGSM) attacks compared to standard fine-tuning approaches. Additionally, freezing batch normalization layers during fine-tuning preserves robustness, likely due to the stabilization of learned feature distributions and prevention of gradient amplification. This study provides actionable insights for designing transfer learning pipelines that are not only accurate but also robust against adversarial threats, with implications for applications in healthcare, autonomous systems, and finance.
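The sketch below illustrates two ideas mentioned in the abstract: freezing batch normalization layers while fine-tuning a pre-trained model, and measuring robust accuracy under an FGSM attack. It is a minimal PyTorch example under assumed details not taken from the paper (ResNet-18 backbone, a 10-class downstream task, perturbation budget epsilon = 8/255), not the authors' actual experimental pipeline.

```python
# Minimal sketch: fine-tune a pre-trained model with frozen batch-norm layers
# and evaluate it under an FGSM attack. Model choice, class count, epsilon,
# and optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

def freeze_batchnorm(model: nn.Module) -> None:
    """Keep the pre-trained feature statistics: put every BatchNorm layer in
    eval mode and stop updating its affine parameters. Re-apply this after any
    call to model.train(), since train() switches BN back to training mode."""
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.eval()                      # freeze running mean/var
            for p in module.parameters():      # freeze gamma/beta
                p.requires_grad = False

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """FGSM: x_adv = clip(x + epsilon * sign(grad_x loss(model(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def evaluate_robust_accuracy(model, loader, epsilon=8 / 255):
    """Fraction of examples still classified correctly after FGSM perturbation."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total

# Example setup: pre-trained backbone, new head for a 10-class downstream task,
# batch-norm layers frozen, and only trainable parameters passed to the optimizer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)
freeze_batchnorm(model)
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```

A PGD evaluation would follow the same pattern, iterating the signed-gradient step several times and projecting back into the epsilon-ball after each step.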
Keywords: Transfer Learning, Adversarial Robustness, Fine-tuning, Adversarial Attacks, Batch Normalization, Deep Learning
IRE Journals:
Praveen Kumar Myakala, "Adversarial Robustness in Transfer Learning Models," Iconic Research And Engineering Journals, Volume 6, Issue 1, 2022, pp. 772-779.
IEEE:
P. K. Myakala, "Adversarial Robustness in Transfer Learning Models," Iconic Research And Engineering Journals, vol. 6, no. 1, pp. 772-779, 2022.