
Defense That Attacks: How Robust Models Become Better Attackers

Mohamed Awad 1,2,3, Mahmoud Akrm 2, Walid Gomaa 2,3

0 citations · 36 references · arXiv


Published on arXiv · 2512.02830

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Across 36 CNN and ViT models, adversarially trained surrogates generate adversarial examples that transfer significantly more effectively to adversarially trained target models than those produced by standard-trained surrogates.

MIG (Momentum Iterative Gradient)

Novel technique introduced


Deep learning has achieved great success in computer vision, but it remains vulnerable to adversarial attacks. Adversarial training is the leading defense for improving model robustness; however, its effect on the transferability of attacks is underexplored. In this work, we ask whether adversarial training unintentionally increases the transferability of adversarial examples. To answer this, we trained a diverse zoo of 36 models, including CNNs and ViTs, and conducted comprehensive transferability experiments. Our results reveal a clear paradox: adversarially trained (AT) models produce perturbations that transfer more effectively than those from standard models, introducing a new ecosystem risk. To enable reproducibility and further study, we release all models, code, and experimental scripts. Furthermore, we argue that robustness evaluations should assess not only a model's resistance to transferred attacks but also its propensity to produce transferable adversarial examples.


Key Contributions

  • Large-scale model zoo of 36 adversarially trained CNN and ViT models publicly released for reproducibility
  • Empirical discovery that adversarially trained models consistently produce more transferable adversarial examples than standard models
  • Proposed transferability benchmark using MIG (ε=16) and argued that robustness evaluations should also measure a model's propensity to produce transferable attacks
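The benchmark above generates adversarial examples with a momentum-iterative attack under an L∞ budget of ε=16 (i.e. 16/255 in [0,1] pixel scale). As a rough illustration of that attack family, here is a minimal sketch in the spirit of MI-FGSM, run against a toy linear classifier; the paper's actual MIG variant may differ in how gradients are computed and normalized, and the toy model stands in for a real network.

```python
import numpy as np

# Hedged sketch: momentum-iterative L_inf attack (MI-FGSM style) on a
# toy softmax classifier. Illustrative only; the paper's MIG attack
# may accumulate gradients differently.

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3072))  # toy linear "model": 10 classes, 32x32x3 input

def loss_grad(x, y):
    """Gradient of cross-entropy w.r.t. the input, for the linear model."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(10)[y]
    return W.T @ (p - onehot)  # d(CE)/dx = W^T (softmax - onehot)

def momentum_attack(x, y, eps=16 / 255, steps=10, mu=1.0):
    alpha = eps / steps                      # per-step size
    g, x_adv = np.zeros_like(x), x.copy()
    for _ in range(steps):
        grad = loss_grad(x_adv, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum on L1-normalized grads
        x_adv = x_adv + alpha * np.sign(g)                # untargeted ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep a valid image
    return x_adv

x = rng.uniform(size=3072)
x_adv = momentum_attack(x, y=3)
```

The projection step after each update is what keeps the final perturbation within the ε=16/255 budget regardless of the number of iterations.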

🛡️ Threat Analysis

Input Manipulation Attack

The paper studies adversarial example transferability (a core ML01 concern), specifically how adversarial training reshapes gradient landscapes to inadvertently improve black-box transfer attack success across CNN and ViT architectures.
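The black-box transfer measurement described above can be sketched as: craft untargeted adversarial examples in white-box fashion on a surrogate, then count how often they also flip a separately parameterized target's prediction. The toy linear "surrogate" and "target" below are placeholders for the paper's model zoo, and the single-step FGSM crafting is a simplification of its momentum attack.

```python
import numpy as np

# Hedged sketch of surrogate-to-target transfer evaluation with toy
# linear classifiers. Names and setup are illustrative, not the paper's.

rng = np.random.default_rng(1)
D, C, N, eps = 64, 5, 200, 16 / 255
W_sur = rng.normal(size=(C, D))                 # surrogate weights
W_tgt = W_sur + 0.3 * rng.normal(size=(C, D))   # correlated but distinct target

X = rng.uniform(size=(N, D))
y = (X @ W_tgt.T).argmax(axis=1)                # labels = target's clean predictions

def fgsm(W, X, y, eps):
    """One-step L_inf attack maximizing cross-entropy on model W."""
    logits = X @ W.T
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0              # softmax - onehot
    grad = p @ W                                # d(CE)/dX for the linear model
    return np.clip(X + eps * np.sign(grad), 0.0, 1.0)

X_adv = fgsm(W_sur, X, y, eps)                  # white-box on the surrogate
flipped = (X_adv @ W_tgt.T).argmax(axis=1) != y # black-box effect on the target
transfer_rate = flipped.mean()
print(f"transfer success rate: {transfer_rate:.2f}")
```

The paper's finding amounts to this `transfer_rate` being systematically higher when the surrogate is adversarially trained, which is why it argues the metric should be part of robustness evaluations.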


Details

Domains
vision
Model Types
CNN, Transformer
Threat Tags
black_box, white_box, inference_time, untargeted, digital
Datasets
ImageNet
Applications
image classification