Adversarial Robustness in Transfer Learning Models
  • Author(s): Praveen Kumar Myakala
  • Paper ID: 1703680
  • Page: 772-779
  • Published Date: 31-07-2022
  • Published In: Iconic Research And Engineering Journals
  • Publisher: IRE Journals
  • e-ISSN: 2456-8880
  • Volume/Issue: Volume 6 Issue 1 July-2022
Abstract

Transfer learning has become a cornerstone technique for adapting pre-trained models to diverse downstream tasks, significantly reducing data requirements. However, the extent to which adversarial robustness is retained or degraded during transfer learning remains unclear. This study systematically evaluates the adversarial vulnerabilities of transfer learning models across fine-tuning strategies, such as full fine-tuning, layer freezing, and feature extraction. Our experiments, conducted on benchmark datasets, reveal that adversarial pretraining improves robustness by up to 25% under Projected Gradient Descent (PGD) and Fast Gradient Sign Method (FGSM) attacks compared to standard fine-tuning approaches. Additionally, freezing batch normalization layers during fine-tuning preserves robustness, likely due to the stabilization of learned feature distributions and prevention of gradient amplification. This study provides actionable insights for designing transfer learning pipelines that are not only accurate but also robust against adversarial threats, with implications for applications in healthcare, autonomous systems, and finance.
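To make the two ideas in the abstract concrete, the sketch below illustrates (a) freezing batch normalization layers while fine-tuning a pre-trained backbone and (b) crafting an FGSM perturbation for robustness evaluation. It is a minimal illustration only; the backbone (ResNet-18), the 10-class head, the epsilon value, and the helper names are assumptions for demonstration, not the paper's actual code or experimental setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning setup: ImageNet-pretrained ResNet-18
# with a new classification head for a hypothetical 10-class downstream task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)

def freeze_batchnorm(model: nn.Module) -> None:
    """Keep BatchNorm layers fixed during fine-tuning: eval mode stops the
    running statistics from updating, and requires_grad=False stops the
    affine parameters from being trained."""
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.eval()
            for p in module.parameters():
                p.requires_grad = False

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Single-step FGSM: perturb the input in the sign direction of the
    loss gradient, then clamp back to the valid image range [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

Note that `model.train()` switches BatchNorm layers back into training mode, so in a real fine-tuning loop `freeze_batchnorm(model)` would need to be re-applied after each call to `model.train()`; PGD evaluation would follow the same pattern as `fgsm_attack`, but iterate the gradient step with a projection onto the epsilon-ball.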

Keywords

Transfer Learning, Adversarial Robustness, Fine-tuning, Adversarial Attacks, Batch Normalization, Deep Learning

Citations

IRE Journals:
Praveen Kumar Myakala "Adversarial Robustness in Transfer Learning Models" Iconic Research And Engineering Journals Volume 6 Issue 1 2022 Page 772-779

IEEE:
P. K. Myakala, "Adversarial Robustness in Transfer Learning Models," Iconic Research And Engineering Journals, vol. 6, no. 1, pp. 772-779, Jul. 2022.