
Sequential Difference Maximization: Generating Adversarial Examples via Multi-Stage Optimization

Xinlei Liu 1, Tao Hu 1,2, Peng Yi 1,2, Weitao Han 1,2, Jichao Xie 1, Baolin Li 1



Published on arXiv (2509.00826)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

SDM outperforms previous SOTA adversarial attack methods (including APGD and AutoAttack) in both attack success rate and attack cost-effectiveness on standard image classification benchmarks.

SDM (Sequential Difference Maximization)

Novel technique introduced


Abstract

Efficient adversarial attack methods are critical for assessing the robustness of computer vision models. In this paper, we reconstruct the optimization objective for generating adversarial examples as "maximizing the difference between the non-true labels' probability upper bound and the true label's probability," and propose a gradient-based attack method termed Sequential Difference Maximization (SDM). SDM establishes a three-layer optimization framework of "cycle-stage-step." The processes between cycles and between iterative steps are respectively identical, while optimization stages differ in terms of loss functions: in the initial stage, the negative probability of the true label is used as the loss function to compress the solution space; in subsequent stages, we introduce the Directional Probability Difference Ratio (DPDR) loss function to gradually increase the non-true labels' probability upper bound by compressing the irrelevant labels' probabilities. Experiments demonstrate that compared with previous SOTA methods, SDM not only exhibits stronger attack performance but also achieves higher attack cost-effectiveness. Additionally, SDM can be combined with adversarial training methods to enhance their defensive effects. The code is available at https://github.com/X-L-Liu/SDM.
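The reconstructed objective from the abstract can be sketched numerically: an example becomes adversarial exactly when the highest non-true-label probability exceeds the true label's probability. The sketch below is an illustration of that margin only; `sdm_objective` is a hypothetical helper, not code from the paper's repository.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sdm_objective(logits, true_label):
    """Margin from the abstract: (non-true labels' probability upper
    bound) minus (true label's probability). Positive => misclassified."""
    probs = softmax(logits)
    p_true = probs[true_label]
    p_other_max = max(p for i, p in enumerate(probs) if i != true_label)
    return p_other_max - p_true

# Negative margin: the true class still wins; positive: attack succeeded.
print(sdm_objective([2.0, 1.0, 0.5], true_label=0))
print(sdm_objective([1.0, 2.0, 0.5], true_label=0))
```

Maximizing this margin directly targets the decision boundary, which is what distinguishes it from simply maximizing a cross-entropy loss (a high loss value does not by itself guarantee misclassification).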


Key Contributions

  • Proposes Sequential Difference Maximization (SDM), a three-layer "cycle-stage-step" optimization framework that reconstructs the adversarial objective as maximizing the gap between the non-true labels' probability upper bound and the true label's probability.
  • Introduces the Directional Probability Difference Ratio (DPDR) loss function for the later stages, which compresses irrelevant labels' probabilities to raise the non-true labels' probability ceiling.
  • Identifies and characterizes the "non-adversarial examples with high loss values" failure mode in prior methods, motivating the multi-stage optimization design.
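The cycle-stage-step control flow described above can be sketched as a nested loop that switches loss functions between stages. Everything here is illustrative: the hyperparameter values, the signed-gradient update, and the `grad_fn` callable are assumptions standing in for details not given in this summary (the paper's exact DPDR formulation is in the linked repository).

```python
def sdm_attack(x, y, grad_fn, n_cycles=2, n_stages=3, n_steps=10,
               alpha=2 / 255, eps=8 / 255):
    """Structural sketch of SDM's cycle-stage-step loop.

    grad_fn(x_adv, y, loss_name) is a hypothetical callable returning the
    gradient of the chosen loss w.r.t. the input. Hyperparameters are
    placeholders, not the paper's settings; clipping to the valid pixel
    range is omitted for brevity.
    """
    x_adv = list(x)
    for _ in range(n_cycles):
        for stage in range(n_stages):
            # Initial stage: negative true-label probability compresses
            # the solution space; subsequent stages switch to DPDR.
            loss_name = "neg_true_prob" if stage == 0 else "dpdr"
            for _ in range(n_steps):
                g = grad_fn(x_adv, y, loss_name)
                # Signed ascent step, projected onto the L_inf eps-ball.
                x_adv = [min(max(xi + alpha * (1 if gi >= 0 else -1),
                                 x0 - eps), x0 + eps)
                         for xi, gi, x0 in zip(x_adv, g, x)]
    return x_adv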

🛡️ Threat Analysis

Input Manipulation Attack

Proposes SDM, a gradient-based adversarial example generation method using a multi-stage optimization framework (cycle-stage-step) with custom loss functions (DPDR) to cause misclassification at inference time — core adversarial attack on vision models.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, untargeted, digital
Datasets
CIFAR-10
Applications
image classification