defense · 2025

Sparse Representations Improve Adversarial Robustness of Neural Network Classifiers

Killian Steunou¹, Théo Druilhe², Sigurd Saue²

0 citations · 19 references · arXiv

Published on arXiv · 2509.21130

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

SPCA-based classifiers consistently degrade more gracefully than PCA-based classifiers under strong white-box and black-box attacks, with theory showing the certified radius is inversely proportional to the dual norm of W⊤u, which shrinks as sparsity increases.

SPCA (Sparse PCA adversarial defense)

Novel technique introduced


Abstract

Deep neural networks perform remarkably well on image classification tasks but remain vulnerable to carefully crafted adversarial perturbations. This work revisits linear dimensionality reduction as a simple, data-adapted defense. We empirically compare standard Principal Component Analysis (PCA) with its sparse variant (SPCA) as front-end feature extractors for downstream classifiers, and we complement these experiments with a theoretical analysis. On the theory side, we derive exact robustness certificates for linear heads applied to SPCA features: for both $\ell_\infty$ and $\ell_2$ threat models (binary and multiclass), the certified radius grows as the dual norms of $W^\top u$ shrink, where $W$ is the projection and $u$ the head weights. We further show that for general (non-linear) heads, sparsity reduces operator-norm bounds through a Lipschitz composition argument, predicting lower input sensitivity. Empirically, with a small non-linear network after the projection, SPCA consistently degrades more gracefully than PCA under strong white-box and black-box attacks while maintaining competitive clean accuracy. Taken together, the theory identifies the mechanism (sparser projections reduce adversarial leverage) and the experiments verify that this benefit persists beyond the linear setting. Our code is available at https://github.com/killian31/SPCARobustness.
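
For linear heads the certificate is fully constructive. A minimal numpy sketch (the function and toy matrices below are our own illustration, not code from the paper's repository): with features z = Wx and a binary linear head f(x) = u·z + b, the input gradient is W⊤u, so the smallest perturbation that flips the prediction has norm |f(x)| divided by the dual norm of W⊤u (ℓ1 for an ℓ∞ threat, ℓ2 for an ℓ2 threat).

```python
import numpy as np

def certified_radius(W, u, b, x, threat="linf"):
    """Exact certified radius for a binary linear head f(x) = u @ (W @ x) + b.

    The end-to-end map is linear with input gradient g = W.T @ u, so the
    smallest perturbation that changes the sign of f(x) has norm
    |f(x)| / ||g||_*, where ||.||_* is the dual norm of the threat model
    (l1 for an l_inf attacker, l2 for an l2 attacker).
    """
    g = W.T @ u
    margin = abs(u @ (W @ x) + b)
    dual = np.abs(g).sum() if threat == "linf" else np.linalg.norm(g)
    return margin / dual

# Toy illustration with random matrices (not the paper's trained models):
# zeroing most of W shrinks ||W.T @ u||_1, the denominator of the l_inf radius.
rng = np.random.default_rng(0)
d, k = 784, 32
W = rng.normal(size=(k, d)) / np.sqrt(d)
W_sparse = W * (rng.random((k, d)) < 0.05)   # keep ~5% of the entries
u, x = rng.normal(size=k), rng.normal(size=d)
for name, proj in [("dense", W), ("sparse", W_sparse)]:
    print(name, np.abs(proj.T @ u).sum(), certified_radius(proj, u, 0.0, x))
```

In the paper's actual setting W comes from SPCA and u from a trained head; the random matrices above only illustrate how the dual norm in the denominator, not the margin, responds to sparsity.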


Key Contributions

  • Exact robustness certificates for linear heads applied to SPCA features under ℓ∞ and ℓ2 threat models, showing certified radius grows as dual norms of W⊤u shrink with sparsity
  • Lipschitz composition argument bounding end-to-end sensitivity for non-linear heads, showing sparsity in W tightens operator-norm bounds (a sketch of this mechanism follows the list)
  • Systematic empirical comparison of PCA vs. SPCA on MNIST and CIFAR-binary tasks under white-box and black-box attacks, demonstrating SPCA degrades more gracefully while maintaining competitive clean accuracy
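
The operator-norm mechanism in the second contribution can be sketched with scikit-learn's PCA and SparsePCA (the dataset, component count, and alpha below are illustrative choices, not the paper's): for an L_h-Lipschitz head h, the composed map x ↦ h(Wx) is at most (L_h · ‖W‖_op)-Lipschitz, and a simple computable bound on the ℓ∞→ℓ2 operator norm shrinks as the rows of W become sparser.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, SparsePCA

X, _ = load_digits(return_X_y=True)
X = X / 16.0                                   # pixels scaled to [0, 1]

k = 16
pca = PCA(n_components=k).fit(X)
spca = SparsePCA(n_components=k, alpha=1.0, random_state=0).fit(X)

def linf_to_l2_bound(W):
    """Upper bound on ||W||_{inf->2}: for ||delta||_inf <= 1 each feature
    satisfies |(W @ delta)_i| <= ||w_i||_1, hence
    ||W @ delta||_2 <= sqrt(sum_i ||w_i||_1 ** 2).
    Rows are l2-normalized first so dense and sparse projections are
    compared on the same scale (||w||_1 <= sqrt(s) for s-sparse unit rows).
    """
    rows = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    return np.sqrt((np.abs(rows).sum(axis=1) ** 2).sum())

for name, W in [("PCA", pca.components_), ("SPCA", spca.components_)]:
    nz = float(np.mean(W != 0))
    print(f"{name}: nonzero fraction {nz:.2f}, "
          f"linf->l2 bound {linf_to_l2_bound(W):.2f}")
```

An L_h-Lipschitz head composed with W then inherits end-to-end ℓ∞ sensitivity at most L_h times this bound, which is the sense in which sparser projections reduce adversarial leverage. This particular bound is our illustration of the composition argument; the paper's own bounds may differ in form.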

🛡️ Threat Analysis

Input Manipulation Attack

The paper directly defends against adversarial input perturbations at inference time (ℓ∞ and ℓ2 threat models). SPCA is proposed as a defense front-end that reduces adversarial leverage by contracting dual and operator norms; the theory yields certified radii, and empirical robustness under FGSM/PGD-style white-box and black-box attacks is the central experimental contribution.
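
For the linear-head case the white-box attack has a closed form, which makes the leverage interpretation concrete. A minimal sketch (our own illustration on a random toy model; the paper's experiments use trained models and iterative attacks as well): with label y ∈ {−1, +1}, the worst-case ℓ∞ perturbation of budget ε is δ = −ε·y·sign(W⊤u), and it lowers the margin y·f(x) by exactly ε‖W⊤u‖₁.

```python
import numpy as np

def fgsm_linear(W, u, b, x, y, eps):
    """Optimal one-step l_inf attack on f(x) = u @ (W @ x) + b.

    The input gradient of the margin y * f(x) is y * (W.T @ u), so the
    budget-eps perturbation minimizing the margin is
    -eps * y * sign(W.T @ u), reducing it by exactly eps * ||W.T @ u||_1.
    A sparser W shrinks that l1 norm, i.e. the attacker's leverage.
    """
    return x - eps * y * np.sign(W.T @ u)

# Sanity check on a random toy model (not the paper's trained classifier):
rng = np.random.default_rng(1)
W, u, b = rng.normal(size=(8, 100)), rng.normal(size=8), 0.0
x, y, eps = rng.normal(size=100), 1.0, 0.05
f = lambda z: u @ (W @ z) + b
x_adv = fgsm_linear(W, u, b, x, y, eps)
print(y * f(x) - y * f(x_adv), eps * np.abs(W.T @ u).sum())  # equal
```

The margin flips sign exactly when ε exceeds the certified radius |f(x)|/‖W⊤u‖₁, so for linear heads the attack matches the certificate; the paper's empirical results show the graceful-degradation benefit persists with small non-linear heads.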


Details

Domains: vision
Model Types: cnn, traditional_ml
Threat Tags: white_box, black_box, inference_time, untargeted
Datasets: MNIST, CIFAR-10
Applications: image classification