Benchmark · 2025

The Impact of Scaling Training Data on Adversarial Robustness

Marco Zimmerli, Andreas Plesner, Till Aczel, Roger Wattenhofer

0 citations · 48 references · arXiv

Published on arXiv · 2509.25927

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Robustness improves logarithmically with training scale, but data quality and training paradigm are more decisive than raw scale: a 10x increase in training data yields only a ~3.2% reduction in attack success rate (ASR), versus ~13.4% for a 10x increase in model size.


Deep neural networks remain vulnerable to adversarial examples despite advances in architectures and training paradigms. We investigate how training data characteristics affect adversarial robustness across 36 state-of-the-art vision models spanning supervised, self-supervised, and contrastive learning approaches, trained on datasets from 1.2M to 22B images. Models were evaluated under six black-box attack categories: random perturbations, two types of geometric masks, COCO object manipulations, ImageNet-C corruptions, and ImageNet-R style shifts. Robustness follows a logarithmic scaling law with both data volume and model size: a tenfold increase in data reduces attack success rate (ASR) on average by ~3.2%, whereas a tenfold increase in model size reduces ASR on average by ~13.4%. Notably, some self-supervised models trained on curated datasets, such as DINOv2, outperform others trained on much larger but less curated datasets, challenging the assumption that scale alone drives robustness. Adversarial fine-tuning of ResNet50s improves generalization across structural variations but not across color distributions. Human evaluation reveals persistent gaps between human and machine vision. These results show that while scaling improves robustness, data quality, architecture, and training objectives play a more decisive role than raw scale in achieving broad-spectrum adversarial resilience.
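
As a numeric illustration of the reported log-linear trend, the sketch below extrapolates ASR across the paper's data range, assuming a fixed percentage-point drop per decade of scale. Only the ~3.2 and ~13.4 slopes come from the abstract; the 60% baseline ASR is a made-up value for illustration.

```python
import numpy as np

def predicted_asr(scale, base_scale, base_asr, slope_per_decade):
    """ASR under a log-linear scaling law: each 10x increase in `scale`
    subtracts `slope_per_decade` percentage points from `base_asr`."""
    return base_asr - slope_per_decade * np.log10(scale / base_scale)

# Scaling data from 1.2M to 22B images (the paper's range, ~4.26 decades)
# under the ~3.2%/decade data slope; the 60% baseline ASR is hypothetical.
print(predicted_asr(22e9, 1.2e6, base_asr=60.0, slope_per_decade=3.2))  # ~46.4
# The ~13.4%/decade model-size slope would erase far more ASR over the same
# number of decades, illustrating why parameters dominate raw data volume.
```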


Key Contributions

  • Establishes logarithmic scaling laws for adversarial robustness: 10x more training data reduces ASR by ~3.2%, while 10x more model parameters reduces ASR by ~13.4%
  • Demonstrates that data curation quality supersedes raw data volume for adversarial robustness, with curated self-supervised models (e.g., DINOv2) outperforming models trained on orders-of-magnitude larger uncurated datasets
  • Shows adversarial fine-tuning improves robustness to structural/geometric variations but fails to generalize across color distribution shifts, and documents persistent gaps between human and machine perception (a generic fine-tuning sketch follows this list)
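
The summary does not specify the authors' fine-tuning recipe, so the following is a minimal generic sketch of adversarial fine-tuning for a torchvision ResNet50 using L-infinity PGD. The attack budget (eps=8/255, 10 steps), optimizer settings, and the pixel-space [0, 1] convention are all illustrative assumptions, not the paper's procedure.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fold ImageNet normalization into the model so the attack can operate in
# raw [0, 1] pixel space and the clamping below stays valid.
model = torch.nn.Sequential(
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    resnet50(weights=ResNet50_Weights.IMAGENET1K_V2),
).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: ascend the loss while staying within an eps-ball of x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_finetune_step(x, y):
    """One fine-tuning step on adversarial examples only."""
    model.eval()                      # stable BN statistics during the attack
    x_adv = pgd_attack(model, x.to(device), y.to(device))
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y.to(device))
    loss.backward()
    opt.step()
    return loss.item()
```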

🛡️ Threat Analysis

Input Manipulation Attack

The paper evaluates models under six black-box adversarial attack categories (random perturbations, geometric masks, object manipulations, corruptions, and style shifts) and studies adversarial fine-tuning as a defense, directly addressing input manipulation attacks and robustness at inference time.
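
To make the ASR metric concrete, here is a minimal sketch of a black-box evaluation loop using the simplest of the six attack families, random perturbations. `model`, `loader`, and the eps/trials budget are assumed placeholders; the paper's richer attacks (geometric masks, COCO object manipulations, ImageNet-C/R shifts) would replace the noise sampler.

```python
import torch

@torch.no_grad()
def attack_success_rate(model, loader, eps=8/255, trials=10, device="cpu"):
    """ASR under a black-box random-perturbation attack: a sample counts as
    successfully attacked if any of `trials` uniform L-inf perturbations
    flips an originally correct prediction. Inputs assumed in [0, 1]."""
    model.eval()
    attacked, correct = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        ok = model(x).argmax(dim=1) == y         # attack only correct samples
        success = torch.zeros_like(ok)
        for _ in range(trials):
            noise = torch.empty_like(x).uniform_(-eps, eps)
            adv_pred = model((x + noise).clamp(0, 1)).argmax(dim=1)
            success |= ok & (adv_pred != y)
        correct += ok.sum().item()
        attacked += success.sum().item()
    return attacked / max(correct, 1)            # fraction of flips, 0..1
```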


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
black_box, inference_time
Datasets
ImageNet, ImageNet-C, ImageNet-R, COCO
Applications
image classification