Defense · 2025

DARD: Dice Adversarial Robustness Distillation against Adversarial Attacks

Jing Zou 1, Shungeng Zhang 1, Meikang Qiu 1, Chong Li 2


Published on arXiv (arXiv:2509.11525)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

DARD-trained ResNet-18 outperforms adversarially trained baselines of the same architecture on both robust and standard accuracy across CIFAR-10 and CIFAR-100.

DARD (Dice Adversarial Robustness Distillation)

Novel technique introduced


Deep learning models are vulnerable to adversarial examples, posing critical security challenges in real-world applications. While Adversarial Training (AT) is a widely adopted defense mechanism for enhancing robustness, it often incurs a trade-off by degrading performance on unperturbed, natural data. Recent efforts have highlighted that larger models exhibit greater robustness than their smaller counterparts. In this paper, we empirically demonstrate that such robustness can be systematically distilled from large teacher models into compact student models. To achieve better performance, we introduce Dice Adversarial Robustness Distillation (DARD), a novel method designed to transfer robustness through a tailored knowledge distillation paradigm. Additionally, we propose Dice Projected Gradient Descent (DPGD), an adversarial example generation method optimized for effective attacks. Our extensive experiments demonstrate that the DARD approach consistently outperforms adversarially trained networks of the same architecture, achieving superior robustness and standard accuracy.
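As a rough illustration of the distillation paradigm described above, the sketch below supervises a compact student with a large adversarially trained teacher's soft labels on both natural and adversarial inputs. The function name `dard_loss`, the temperature `T`, and the weighting `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dard_loss(student, teacher, x_nat, x_adv, T=4.0, alpha=0.5):
    """Hypothetical dual-supervision distillation loss: the student
    matches the (frozen) teacher's temperature-softened predictions
    on both natural and adversarial examples via KL divergence."""
    with torch.no_grad():  # teacher provides fixed soft labels
        t_nat = F.softmax(teacher(x_nat) / T, dim=1)
        t_adv = F.softmax(teacher(x_adv) / T, dim=1)
    s_nat = F.log_softmax(student(x_nat) / T, dim=1)
    s_adv = F.log_softmax(student(x_adv) / T, dim=1)
    kl_nat = F.kl_div(s_nat, t_nat, reduction="batchmean")
    kl_adv = F.kl_div(s_adv, t_adv, reduction="batchmean")
    # T^2 rescaling is the standard Hinton-style distillation correction.
    return (T * T) * (alpha * kl_nat + (1 - alpha) * kl_adv)
```

Minimizing this loss over student parameters transfers both clean-data behavior (natural term) and robust behavior (adversarial term) from teacher to student.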


Key Contributions

  • DARD: a knowledge distillation framework that transfers adversarial robustness from large adversarially-trained teachers to compact student models via dual supervision from natural and adversarial soft labels
  • DPGD: an adaptation of Dice-loss-based Projected Gradient Descent from semantic segmentation to image classification, with dynamic step-size tuning and channel-wise gradient masking
  • Empirical demonstration that DARD-trained lightweight models (ResNet-18) achieve superior robust accuracy on CIFAR-10 and CIFAR-100 over adversarially trained baselines of equivalent architecture
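The DPGD contribution above can be sketched as a PGD variant driven by a Dice objective. The particular Dice formulation, the cosine step-size schedule, and the per-example channel-masking rule below are assumptions made for illustration, not the paper's exact algorithm.

```python
import math
import torch
import torch.nn.functional as F

def dpgd_attack(model, x, y, eps=8/255, steps=10, base_step=2/255):
    """Hypothetical DPGD sketch: untargeted L-inf PGD that maximizes
    a Dice-based loss between the softmax prediction and the one-hot
    label, with a decaying step size and channel-wise gradient masking."""
    with torch.no_grad():
        num_classes = model(x).shape[1]
    y_onehot = F.one_hot(y, num_classes).float()
    # Random start inside the L-inf ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for t in range(steps):
        x_adv.requires_grad_(True)
        p = F.softmax(model(x_adv), dim=1)
        # Dice coefficient between prediction and label; maximizing
        # (1 - dice) pushes the prediction away from the true class.
        inter = (p * y_onehot).sum(dim=1)
        dice = (2 * inter + 1e-8) / ((p * p).sum(1) + (y_onehot * y_onehot).sum(1) + 1e-8)
        grad = torch.autograd.grad((1 - dice).mean(), x_adv)[0]
        # Channel-wise masking (assumption): keep only the channel with
        # the largest mean |gradient| for each example.
        mag = grad.abs().mean(dim=(2, 3), keepdim=True)
        mask = (mag == mag.max(dim=1, keepdim=True).values).float()
        # Dynamic step size (assumption): cosine decay across iterations.
        step = base_step * 0.5 * (1 + math.cos(math.pi * t / steps))
        x_adv = x_adv.detach() + step * (grad * mask).sign()
        # Project back into the L-inf ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```

In DARD training, an attack of this kind would supply the adversarial inputs on which the student is matched against the teacher's soft labels.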

🛡️ Threat Analysis

Input Manipulation Attack

The primary contribution, DARD, is a defense against adversarial examples based on robustness distillation; the secondary contribution, DPGD, is a gradient-based adversarial attack. Both directly address inference-time input manipulation attacks.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, inference_time, digital, untargeted
Datasets
CIFAR-10, CIFAR-100
Applications
image classification