
The Power of Many: Synergistic Unification of Diverse Augmentations for Efficient Adversarial Robustness

Wang Yu-Hang, Shiwei Li, Jianxiang Liao, Li Bohan, Jian Liu, Wenfei Yin


Published on arXiv (2508.03213)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

UAA achieves new SOTA for augmentation-based adversarial defense without requiring online adversarial example generation during training, reducing computational overhead while improving robustness.

Universal Adversarial Augmenter (UAA)

Novel technique introduced


Adversarial perturbations pose a significant threat to deep learning models. Adversarial Training (AT), the predominant defense method, faces challenges of high computational cost and degraded standard performance. While data augmentation offers an alternative path, existing techniques either yield limited robustness gains or incur substantial training overhead. Developing a defense mechanism that is both highly efficient and strongly robust is therefore of paramount importance. In this work, we first conduct a systematic analysis of existing augmentation techniques, revealing that the synergy among diverse strategies — rather than any single method — is crucial for enhancing robustness. Based on this insight, we propose the Universal Adversarial Augmenter (UAA) framework, which is characterized by its plug-and-play nature and training efficiency. UAA decouples the expensive perturbation-generation process from model training by pre-computing a universal transformation offline, which is then used to efficiently generate a unique adversarial perturbation for each sample during training. Extensive experiments on multiple benchmarks validate the effectiveness of UAA. The results demonstrate that UAA establishes a new state-of-the-art (SOTA) for data-augmentation-based adversarial defense strategies, without requiring the online generation of adversarial examples during training. This framework provides a practical and efficient pathway for building robust models. Our code is available in the supplementary materials.


Key Contributions

  • Systematic empirical analysis showing that synergy among diverse augmentation strategies — not any single method — is the key driver of adversarial robustness gains
  • Universal Adversarial Augmenter (UAA) framework that decouples perturbation generation from model training by pre-computing universal transformations offline, eliminating costly online adversarial example generation
  • Establishes new SOTA among data-augmentation-based adversarial defenses on multiple benchmarks with significantly lower training overhead than standard adversarial training
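The decoupling described above can be sketched as a two-phase pipeline: an offline phase that fixes a universal transformation once, and a cheap online phase that maps each training sample through it to get a sample-specific, budget-bounded perturbation. The sketch below is illustrative only; the linear map `W`, the function `uaa_augment`, and the L-infinity projection are assumptions for exposition, not the paper's actual transform.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Offline phase (run once, hypothetical stand-in for UAA's pre-computed
# universal transformation): here a fixed linear map that turns a sample's
# own pixels into a perturbation direction. The real transform would be
# optimized, not random.
D = 32 * 32 * 3                  # flattened CIFAR-like image size
eps = 8 / 255                    # typical L-infinity perturbation budget
W = rng.standard_normal((D, D)).astype(np.float32) / np.sqrt(D)

def uaa_augment(x, W, eps):
    """Generate a unique perturbation for each sample by pushing it through
    the pre-computed transform, then project onto the epsilon ball."""
    delta = x.reshape(len(x), -1) @ W        # per-sample direction, no inner loop
    delta = eps * np.sign(delta)             # bound magnitude to the budget
    return np.clip(x + delta.reshape(x.shape), 0.0, 1.0)

# --- Online phase (every training batch): one matrix multiply per batch,
# no gradient-based attack steps, so training cost stays near standard AT-free cost.
batch = rng.random((4, 32, 32, 3)).astype(np.float32)
aug = uaa_augment(batch, W, eps)
```

Because `W` is fixed, the per-batch cost is a single matrix multiply rather than the multi-step gradient attack (e.g. PGD) that online adversarial training requires, which is the efficiency claim the paper makes.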

🛡️ Threat Analysis

Input Manipulation Attack

Directly defends against adversarial input perturbations — the Universal Adversarial Augmenter (UAA) is a data-augmentation-based defense that efficiently generates a unique adversarial perturbation for each training sample from a pre-computed universal transformation, improving model robustness against input manipulation attacks at inference time without online adversarial example generation.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, black_box, inference_time, digital
Applications
image classification