
Optimizing the Adversarial Perturbation with a Momentum-based Adaptive Matrix

Wei Tao, Sheng Long, Xin Liu, Wei Li, Qing Tao

0 citations · 63 references · TDSC

Published on arXiv: 2512.14188

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

AdaMI's momentum-based adaptive matrix boosts adversarial transferability over state-of-the-art methods across different networks while achieving provably optimal convergence for convex problems.

AdaMI

Novel technique introduced


Generating adversarial examples (AEs) can be formulated as an optimization problem. Among various optimization-based attacks, the gradient-based PGD and the momentum-based MI-FGSM have garnered considerable interest. However, all of these attacks use the sign function to scale their perturbations, which raises several theoretical concerns from an optimization standpoint. In this paper, we first reveal that PGD is actually a specific reformulation of the projected gradient method that uses only the current gradient to determine its step size. We then show that when a conventional adaptive matrix built from the accumulated gradients is used to scale the perturbation, PGD becomes AdaGrad. Motivated by this analysis, we present AdaMI, a novel momentum-based attack in which the perturbation is optimized with a momentum-based adaptive matrix. AdaMI is proven to attain optimal convergence for convex problems, indicating that it addresses the non-convergence issue of MI-FGSM and thereby ensures the stability of the optimization process. Experiments demonstrate that the proposed momentum-based adaptive matrix serves as a general and effective technique for boosting adversarial transferability over state-of-the-art methods across different networks while maintaining better stability and imperceptibility.
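To make the contrast concrete, here is a minimal NumPy sketch of the two update styles the abstract compares: the sign-scaled momentum step of MI-FGSM versus a step scaled by a diagonal adaptive matrix built from accumulated squared momentum, in the spirit of AdaGrad. This is an illustrative assumption, not the paper's exact AdaMI update; all function names, step sizes, and the specific adaptive rule are hypothetical.

```python
import numpy as np

def mi_fgsm_step(x, grad, g_prev, alpha=2 / 255, mu=1.0):
    """One MI-FGSM step: L1-normalized momentum, then the sign function."""
    g = mu * g_prev + grad / (np.abs(grad).sum() + 1e-12)
    return x + alpha * np.sign(g), g

def adaptive_matrix_step(x, grad, g_prev, v_prev,
                         alpha=2 / 255, mu=1.0, eps=1e-8):
    """Same momentum term, but scaled by a diagonal adaptive matrix
    (element-wise 1/sqrt of accumulated squared momentum), AdaGrad-style.
    This rule is an illustrative stand-in for AdaMI's adaptive matrix."""
    g = mu * g_prev + grad / (np.abs(grad).sum() + 1e-12)
    v = v_prev + g ** 2                      # accumulate second moments
    return x + alpha * g / (np.sqrt(v) + eps), g, v
```

The key difference is that `np.sign(g)` discards all magnitude information in the gradient, whereas the adaptive matrix rescales each coordinate smoothly, which is what makes a convergence analysis tractable.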


Key Contributions

  • Theoretical analysis unifying PGD and AdaGrad as instances of the projected gradient method with different adaptive matrices, explaining why sign-function-based attacks lack convergence guarantees
  • AdaMI: a novel momentum-based adaptive matrix attack that provably attains optimal convergence for convex problems, addressing the non-convergence issue of MI-FGSM
  • Demonstrates that the momentum-based adaptive matrix serves as a general plug-in to boost adversarial transferability over state-of-the-art methods while improving stability and imperceptibility

🛡️ Threat Analysis

Input Manipulation Attack

Proposes AdaMI, a novel gradient-based adversarial example generation method at inference time that improves adversarial transferability by replacing the sign function with a momentum-based adaptive matrix — a direct input manipulation attack on image classifiers.
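Whatever the inner update, inference-time input-manipulation attacks of this family (PGD, MI-FGSM, AdaMI) share one constraint: each iterate is projected back into an L∞ ε-ball around the clean input and clipped to the valid pixel range. A minimal sketch of that projection, assuming pixel values in [0, 1] and an illustrative ε:

```python
import numpy as np

def project_linf(x_adv, x_clean, epsilon=8 / 255):
    """Project x_adv onto the L-infinity ball of radius epsilon around
    x_clean, then clip into the valid image range [0, 1]."""
    delta = np.clip(x_adv - x_clean, -epsilon, epsilon)  # bound perturbation
    return np.clip(x_clean + delta, 0.0, 1.0)            # keep pixels valid
```

This projection is what keeps the perturbation imperceptible regardless of how aggressively the adaptive step scales individual coordinates.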


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, black_box, inference_time, untargeted, digital
Datasets
ImageNet
Applications
image classification, autonomous driving, intrusion detection