Dummy-Aware Weighted Attack (DAWA): Breaking the Safe Sink in Dummy Class Defenses
Yunrui Yu 1, Xuxiang Feng 2, Pengda Qin 3, Pengyang Wang 2, Kafeng Wang 1, Cheng-zhong Xu 2, Hang Su 1, Jun Zhu 1
Published on arXiv
2603.29182
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Reduces measured robustness of a leading Dummy Classes-based defense from 58.61% to 29.52% on CIFAR-10 under l_infinity perturbation (epsilon=8/255)
DAWA (Dummy-Aware Weighted Attack)
Novel technique introduced
Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment methods. This paper reveals that Dummy Classes-based defenses, which introduce an additional "dummy" class as a safety sink for adversarial examples, achieve significantly overestimated robustness under conventional evaluation strategies such as AutoAttack. The fundamental limitation stems from these attacks' singular focus on misleading the true class label, which aligns perfectly with the defense mechanism: successful attacks are simply captured by the dummy class. To address this gap, we propose Dummy-Aware Weighted Attack (DAWA), a novel evaluation method that simultaneously targets both the true label and the dummy label with adaptive weighting during adversarial example synthesis. Extensive experiments demonstrate that DAWA effectively breaks this defense paradigm, reducing the measured robustness of a leading Dummy Classes-based defense from 58.61% to 29.52% on CIFAR-10 under l_infinity perturbation (epsilon=8/255). Our work provides a more reliable benchmark for evaluating this emerging class of defenses and highlights the need for continuous evolution of robustness assessment methodologies.
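The core idea of jointly penalizing the true label and the dummy label can be illustrated with a short PGD-style sketch. This is not the paper's implementation: the function name, the fixed weight `w` (the paper uses adaptive weighting, whose schedule is not reproduced here), and the step-size/iteration defaults are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dummy_aware_pgd(model, x, y, dummy_idx, eps=8/255, alpha=2/255,
                    steps=10, w=0.5):
    """Hypothetical sketch of a dummy-aware weighted attack.

    Unlike standard PGD, which only maximizes the loss on the true
    label, this objective also maximizes the loss on the dummy label,
    pushing the example away from the defense's 'safe sink' class.
    `w` stands in for the paper's adaptive weighting (assumption).
    """
    # Random start inside the l_infinity ball of radius eps.
    x_adv = x.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)
    dummy = torch.full_like(y, dummy_idx)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Weighted objective: mislead the true class AND avoid the dummy class.
        loss = F.cross_entropy(logits, y) + w * F.cross_entropy(logits, dummy)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

A plain PGD attack is recovered by setting `w=0`; the second loss term is what prevents "successful" perturbations from being absorbed by the dummy class at inference time.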
Key Contributions
- Identifies vulnerability in Dummy Classes-based defenses where conventional attacks like AutoAttack overestimate robustness
- Proposes DAWA attack with adaptive weighting that simultaneously targets true label and dummy label during adversarial example generation
- Demonstrates effectiveness by reducing measured robustness of leading dummy-class defense from 58.61% to 29.52% on CIFAR-10
🛡️ Threat Analysis
Proposes a gradient-based adversarial attack (DAWA) that crafts adversarial examples to evade a specific defense mechanism at inference time by targeting both true and dummy class labels.