Defense · 2025

Adversarial Robustness through Lipschitz-Guided Stochastic Depth in Neural Networks

Laith Nayal 1, Mahmoud Mousatat 2, Bader Rasheed 1


Published on arXiv: 2509.10298

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

A depth-dependent, Lipschitz-guided DropPath schedule maintains near-baseline clean accuracy on CIFAR-10 with ViT-Tiny, improves robustness against FGSM, PGD-20, and AutoAttack, and reduces FLOPs compared to a standard linear DropPath schedule.

Lipschitz-Guided Stochastic Depth

Novel technique introduced


Deep neural networks and Vision Transformers achieve state-of-the-art performance in computer vision but are highly vulnerable to adversarial perturbations. Standard defenses often incur high computational cost or lack formal guarantees. We propose a Lipschitz-guided stochastic depth (DropPath) method, where drop probabilities increase with depth to control the effective Lipschitz constant of the network. This approach regularizes deeper layers, improving robustness while preserving clean accuracy and reducing computation. Experiments on CIFAR-10 with ViT-Tiny show that our custom depth-dependent schedule maintains near-baseline clean accuracy, enhances robustness under FGSM, PGD-20, and AutoAttack, and significantly reduces FLOPs compared to baseline and linear DropPath schedules.
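The stochastic depth (DropPath) mechanism the abstract describes can be sketched as a residual update where the transformer branch is randomly skipped during training. A minimal plain-Python sketch, assuming the common "inverted scaling" DropPath convention (rescale the kept branch by 1/(1 − p) at training time, keep it unscaled at inference); the function name and signature are illustrative, not the paper's code:

```python
import random

def drop_path_residual(x, branch_out, p_drop, rng, training=True):
    """Stochastic-depth (DropPath) residual update for one block.

    Training: the residual branch is kept with probability 1 - p_drop and
    rescaled by 1 / (1 - p_drop) so its expected contribution is unchanged.
    Inference: the branch is always kept, with no rescaling.
    """
    if not training or p_drop == 0.0:
        return x + branch_out
    keep = 1.0 - p_drop
    mask = 1.0 if rng.random() < keep else 0.0  # Bernoulli(keep) gate
    return x + (mask / keep) * branch_out
```

Dropping deeper branches more often both saves their FLOPs on dropped samples and damps the layers that contribute most to the network's Lipschitz constant, which is the lever the proposed schedule exploits.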


Key Contributions

  • Derives a depth-dependent DropPath schedule p(l) = 1 − κ_target^(l/L) that bounds the expected Lipschitz constant of the network, providing a principled robustness guarantee.
  • Demonstrates that the proposed schedule improves robustness under FGSM, PGD-20, and AutoAttack on CIFAR-10 with ViT-Tiny while preserving near-baseline clean accuracy.
  • Shows the schedule reduces FLOPs compared to both the no-DropPath baseline and a conventional linear DropPath schedule.
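The schedule from the first contribution, p(l) = 1 − κ_target^(l/L), can be computed directly; deeper layers get larger drop probabilities, reaching 1 − κ_target at the final layer. A short sketch (the 12-block depth matches ViT-Tiny; the κ_target value and the mean-keep FLOPs proxy are illustrative assumptions, not the paper's numbers):

```python
def droppath_schedule(num_layers, kappa_target):
    """Drop probabilities p(l) = 1 - kappa_target**(l / L) for l = 1..L."""
    L = num_layers
    return [1.0 - kappa_target ** (l / L) for l in range(1, L + 1)]

# Example: 12 transformer blocks (ViT-Tiny depth), illustrative kappa_target.
probs = droppath_schedule(12, 0.5)
# Mean keep probability: a rough proxy for the expected fraction of
# residual-branch FLOPs executed during training.
expected_active = sum(1.0 - p for p in probs) / len(probs)
```

The monotone increase with l is what regularizes the deeper layers while leaving early feature extraction largely intact.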

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a defense (Lipschitz-guided stochastic depth scheduling) specifically evaluated against gradient-based adversarial input perturbation attacks — FGSM, PGD-20, and AutoAttack — at inference time on image classifiers.


Details

Domains
vision
Model Types
transformer
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
CIFAR-10
Applications
image classification