
A Versatile Framework for Designing Group-Sparse Adversarial Attacks

Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini, Shahrokh Ghaemmaghami, Farokh Marvasti

1 citation · 51 references · arXiv


Published on arXiv: 2510.16637

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves 100% attack success rate on CIFAR-10 and ImageNet while producing significantly sparser and more structurally coherent perturbations than state-of-the-art methods.

ATOS (Attack Through Overlapping Sparsity)

Novel technique introduced


Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural networks (DNNs) process meaningful input patterns. We propose ATOS (Attack Through Overlapping Sparsity), a differentiable optimization framework that generates structured, sparse adversarial perturbations in element-wise, pixel-wise, and group-wise forms. For white-box attacks on image classifiers, we introduce the Overlapping Smoothed L0 (OSL0) function, which promotes convergence to a stationary point while encouraging sparse, structured perturbations. By grouping channels and adjacent pixels, ATOS improves interpretability and helps identify robust versus non-robust features. We approximate the L-infinity gradient using the logarithm of the sum of exponential absolute values to tightly control perturbation magnitude. On CIFAR-10 and ImageNet, ATOS achieves a 100% attack success rate while producing significantly sparser and more structurally coherent perturbations than prior methods. The structured group-wise attack highlights critical regions from the network's perspective, providing counterfactual explanations by replacing class-defining regions with robust features from the target class.
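The abstract mentions approximating the L-infinity norm by the logarithm of the sum of exponential absolute values. The standard log-sum-exp construction behind this idea can be sketched as follows; the function name and the sharpness parameter `beta` are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def smooth_linf(delta, beta=50.0):
    """Smooth, differentiable surrogate for max|delta| (the L-infinity norm).

    Log-sum-exp of absolute values: (1/beta) * log(sum(exp(beta * |d_i|))).
    As beta grows, the value approaches max|delta| from above while
    remaining differentiable, so its gradient can drive the optimizer.
    """
    a = np.abs(np.asarray(delta, dtype=float)).ravel()
    m = a.max()  # subtract the max before exponentiating for numerical stability
    return m + np.log(np.sum(np.exp(beta * (a - m)))) / beta

delta = np.array([0.01, -0.08, 0.03])
print(smooth_linf(delta))  # slightly above max|delta| = 0.08
```

The surrogate upper-bounds the true L∞ norm, so penalizing it gives tight control over the perturbation magnitude without the non-differentiability of a hard max.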


Key Contributions

  • ATOS framework generating element-wise, pixel-wise, and group-wise structured sparse adversarial perturbations via differentiable optimization
  • Overlapping Smoothed L0 (OSL0) surrogate function that promotes convergence to a stationary point while enforcing sparsity and structural coherence
  • Log-sum-of-exponentials approximation of the L∞ gradient for tight perturbation magnitude control, enabling interpretable counterfactual explanations
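To make the group-sparsity idea concrete, here is a generic smoothed-L0 surrogate applied over (possibly overlapping) groups of perturbation entries. This is a hedged sketch of the standard Gaussian smoothed-L0 construction, not the paper's exact OSL0 function; the group layout, `sigma`, and function name are illustrative assumptions:

```python
import numpy as np

def group_smoothed_l0(delta, groups, sigma=0.1):
    """Illustrative group-wise smoothed-L0 surrogate (not the paper's exact OSL0).

    Softly counts "active" groups: each group of indices contributes
    1 - exp(-||delta_g||^2 / (2 * sigma^2)), which approaches the hard
    group-L0 count as sigma -> 0 while staying differentiable.
    """
    total = 0.0
    for g in groups:  # groups may overlap, mirroring the paper's overlapping form
        energy = np.sum(np.asarray(delta, dtype=float)[g] ** 2)
        total += 1.0 - np.exp(-energy / (2.0 * sigma ** 2))
    return total

# A perturbation confined to one group scores ~1 active group, so
# minimizing this term concentrates the attack into few structured regions.
delta = np.array([0.2, 0.2, 0.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3])]
print(group_smoothed_l0(delta, groups))  # close to 1.0
```

Minimizing such a term alongside the classification loss is what pushes the perturbation toward a small number of coherent pixel groups rather than scattered individual pixels.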

🛡️ Threat Analysis

Input Manipulation Attack

ATOS is a gradient-based adversarial perturbation attack that targets image classifiers at inference time, crafting structured, sparse adversarial examples that cause misclassification: a canonical ML01 input manipulation attack.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · inference_time · targeted · digital
Datasets
CIFAR-10 · ImageNet
Applications
image classification