defense 2025

Bridging Symmetry and Robustness: On the Role of Equivariance in Enhancing Adversarial Robustness

Longwei Wang 1, Ifrat Ikhtear Uddin 1, KC Santosh 1, Chaowei Zhang 2, Xiao Qin 3, Yang Zhou 3

3 citations · 111 references · arXiv


Published on arXiv: 2510.16171

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Parallel equivariant CNN with combined rotation and scale branches significantly outperforms standard CNNs on adversarial accuracy under FGSM and PGD attacks on CIFAR-10/100 without adversarial training.

Novel technique introduced

Equivariant CNN (parallel and cascaded designs)


Adversarial examples reveal critical vulnerabilities in deep neural networks by exploiting their sensitivity to imperceptible input perturbations. While adversarial training remains the predominant defense strategy, it often incurs significant computational cost and may compromise clean-data accuracy. In this work, we investigate an architectural approach to adversarial robustness by embedding group-equivariant convolutions (specifically, rotation- and scale-equivariant layers) into standard convolutional neural networks (CNNs). These layers encode symmetry priors that align model behavior with structured transformations in the input space, promoting smoother decision boundaries and greater resilience to adversarial attacks. We propose and evaluate two symmetry-aware architectures: a parallel design that processes standard and equivariant features independently before fusion, and a cascaded design that applies equivariant operations sequentially. Theoretically, we demonstrate that such models reduce hypothesis space complexity, regularize gradients, and yield tighter certified robustness bounds under the CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) framework. Empirically, our models consistently improve adversarial robustness and generalization across CIFAR-10, CIFAR-100, and CIFAR-10C under both FGSM and PGD attacks, without requiring adversarial training. These findings underscore the potential of symmetry-enforcing architectures as efficient and principled alternatives to data augmentation-based defenses.
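The core idea behind the rotation-equivariant layers can be illustrated with a minimal NumPy sketch, assuming the simplest setting: lifting a single filter to the C4 group (the four 90-degree rotations) and max-pooling over orientations. The function names and toy shapes below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def c4_equivariant_conv(x, k):
    """Lift one filter to the C4 rotation group: correlate the input with
    all four 90-degree rotations of the filter, then max-pool over the
    orientation axis. Rotating the input then rotates the pooled map."""
    responses = [conv2d_valid(x, np.rot90(k, r)) for r in range(4)]
    return np.max(np.stack(responses), axis=0)

# Equivariance check: a 90-degree rotation of the input produces a
# 90-degree rotation of the pooled feature map.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))
y = c4_equivariant_conv(x, k)
y_rot = c4_equivariant_conv(np.rot90(x), k)
print(np.allclose(np.rot90(y), y_rot))  # True
```

Because the set of rotated filters is closed under 90-degree rotation, rotating the input only permutes the orientation channels, and the max over that axis commutes with the rotation; this is the symmetry prior the paper builds into the network.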


Key Contributions

  • Theoretical analysis showing equivariant architectures contract hypothesis space, regularize gradients, and yield tighter CLEVER certified robustness bounds
  • Two symmetry-aware CNN designs (parallel and cascaded) integrating rotation- and scale-equivariant convolutional layers with concatenation and weighted-summation fusion strategies
  • Empirical validation on CIFAR-10, CIFAR-100, and CIFAR-10C showing improved adversarial accuracy under FGSM and PGD without adversarial training
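The two architectural variants can be sketched schematically, assuming toy stand-in branches (the `standard_branch`/`equivariant_branch` names and the `alpha` fusion weight are hypothetical placeholders, not the paper's layers):

```python
import numpy as np

def standard_branch(x):
    # Placeholder for standard convolutional features.
    return np.tanh(x)

def equivariant_branch(x):
    # Crude stand-in for a rotation-equivariant layer: max over the four
    # 90-degree rotations of the (square) input, which is C4-invariant.
    return np.max(np.stack([np.rot90(x, r) for r in range(4)]), axis=0)

def parallel_fusion(x, alpha=0.5):
    """Parallel design: both branches see the same input, then features
    are fused; the paper evaluates concatenation and weighted summation."""
    f_std = standard_branch(x)
    f_eqv = equivariant_branch(x)
    concat = np.concatenate([f_std, f_eqv], axis=0)   # channel concat
    weighted = alpha * f_std + (1 - alpha) * f_eqv    # weighted sum
    return concat, weighted

def cascaded(x):
    """Cascaded design: equivariant operations are applied sequentially,
    feeding into the standard branch."""
    return standard_branch(equivariant_branch(x))

x = np.random.default_rng(1).standard_normal((4, 4))
concat, weighted = parallel_fusion(x)
print(concat.shape, weighted.shape, cascaded(x).shape)  # (8, 4) (4, 4) (4, 4)
```

The parallel design preserves unconstrained standard features alongside the symmetry-constrained ones, while the cascaded design forces all downstream features through the equivariant representation.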

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a defense against adversarial input manipulation attacks (FGSM, PGD) by embedding group-equivariant convolutions that regularize gradients and smooth decision boundaries, with certified robustness bounds derived via the CLEVER framework.
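For context on the attacks being defended against, here is a minimal white-box FGSM/PGD sketch against a logistic-regression stand-in model. The weights, data, and step sizes are illustrative assumptions, not the paper's CIFAR setup.

```python
import numpy as np

# Toy white-box "model": logistic regression with fixed weights.
rng = np.random.default_rng(0)
w = rng.standard_normal(16)
b = 0.1

def loss_grad(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid(w.x + b)
    return (p - y) * w                       # dL/dx

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps."""
    return x + eps * np.sign(loss_grad(x, y))

def pgd(x, y, eps, step, iters):
    """Projected Gradient Descent: iterated signed-gradient steps,
    projected back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(loss_grad(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x, y, eps = rng.standard_normal(16), 1.0, 0.03
x_fgsm = fgsm(x, y, eps)
x_pgd = pgd(x, y, eps, step=0.01, iters=10)
print(np.max(np.abs(x_fgsm - x)) <= eps + 1e-12)  # True
print(np.max(np.abs(x_pgd - x)) <= eps + 1e-12)   # True
```

Both attacks perturb each input coordinate by at most eps; the paper's claim is that symmetry-constrained architectures dampen the gradient signal these attacks exploit, without needing adversarial training.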


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
CIFAR-10, CIFAR-100, CIFAR-10C
Applications
image classification