
Enhancing Adversarial Transferability by Balancing Exploration and Exploitation with Gradient-Guided Sampling

Zenghao Niu 1, Weicheng Xie 1,2,3, Siyang Song 4, Zitong Yu 5, Feng Liu 1, Linlin Shen 1

0 citations · 32 references · arXiv


Published on arXiv · 2511.00411

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

GGS outperforms state-of-the-art transfer attacks across multiple DNN architectures and MLLMs by harmonizing exploration and exploitation through gradient-directed inner-iteration sampling

GGS (Gradient-Guided Sampling)

Novel technique introduced


Adversarial attacks present a critical challenge to the robustness of deep neural networks, particularly in transfer scenarios across different model architectures. However, the transferability of adversarial attacks faces a fundamental dilemma between Exploitation (maximizing attack potency) and Exploration (enhancing cross-model generalization). Traditional momentum-based methods over-prioritize Exploitation, reaching higher loss maxima for attack potency at the cost of generalization (a narrow loss surface). Conversely, recent methods with inner-iteration sampling over-prioritize Exploration, finding flatter loss surfaces for cross-model generalization at the cost of attack potency (suboptimal local maxima). To resolve this dilemma, we propose a simple yet effective Gradient-Guided Sampling (GGS), which harmonizes both objectives by guiding sampling along the gradient ascent direction, improving both sampling efficiency and stability. Specifically, building on MI-FGSM, GGS introduces inner-iteration random sampling and guides the sampling direction using the gradient from the previous inner iteration (the sampling magnitude is drawn from a random distribution). This mechanism encourages adversarial examples to reside in balanced regions that combine flatness, for cross-model generalization, with higher local maxima, for strong attack potency. Comprehensive experiments across multiple DNN architectures and multimodal large language models (MLLMs) demonstrate the superiority of our method over state-of-the-art transfer attacks. Code is made available at https://github.com/anuin-cat/GGS.
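The abstract's mechanism can be illustrated with a minimal sketch. This is not the paper's implementation: the loss is a toy 2-D quadratic standing in for a victim model, and all hyperparameters (`eps`, `alpha`, `mu`, `n_samples`, `beta`) and the uniform magnitude distribution are illustrative assumptions. What it does show is the core idea: inner-iteration samples whose direction follows the previous iteration's gradient while their magnitude is random, layered on an MI-FGSM-style momentum update.

```python
import numpy as np

def loss(x):
    # toy "model loss" to ascend: a smooth bump peaked at x = 1
    return -np.sum((x - 1.0) ** 2)

def grad(x):
    # analytic gradient of the toy loss
    return -2.0 * (x - 1.0)

def ggs_attack(x0, eps=0.5, alpha=0.05, mu=1.0, steps=20,
               n_samples=5, beta=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    g_momentum = np.zeros_like(x0)
    prev_grad = grad(x)  # gradient carried over from the previous inner iteration
    for _ in range(steps):
        avg_grad = np.zeros_like(x0)
        for _ in range(n_samples):
            # sample along the previous gradient's ascent direction;
            # the step magnitude is drawn from a random distribution
            direction = prev_grad / (np.linalg.norm(prev_grad) + 1e-12)
            x_s = x + rng.uniform(0.0, beta) * direction
            avg_grad += grad(x_s)
        avg_grad /= n_samples
        prev_grad = avg_grad
        # MI-FGSM-style momentum accumulation and sign step
        g_momentum = mu * g_momentum + avg_grad / (np.abs(avg_grad).sum() + 1e-12)
        x = np.clip(x + alpha * np.sign(g_momentum), x0 - eps, x0 + eps)
    return x

x0 = np.zeros(2)
x_adv = ggs_attack(x0)  # the adversarial point attains a higher loss than x0
```

Replacing the toy `loss`/`grad` pair with a surrogate model's cross-entropy and its input gradient turns this skeleton into a transfer-attack loop of the kind the paper studies.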


Key Contributions

  • Identifies the exploration-exploitation dilemma in transfer-based adversarial attacks, showing that momentum methods over-exploit while sampling methods over-explore
  • Proposes Gradient-Guided Sampling (GGS) that directs inner-iteration random sampling along the gradient ascent direction, yielding adversarial examples in regions with both loss flatness and high local maxima
  • Demonstrates state-of-the-art transfer attack performance on multiple DNN architectures and multimodal large language models

🛡️ Threat Analysis

Input Manipulation Attack

GGS is a gradient-based adversarial attack (built on MI-FGSM) that crafts adversarial examples at inference time to cause misclassification across diverse model architectures — a direct input manipulation attack targeting cross-model transferability.
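For context, the MI-FGSM baseline that GGS builds on is the momentum update of Dong et al. (2018): accumulate the L1-normalized gradient into a momentum buffer, then take a sign step clipped to the eps-ball. The sketch below uses a toy quadratic in place of the victim model's loss, and the hyperparameter values are illustrative, not taken from this paper.

```python
import numpy as np

def loss(x):
    # toy loss to maximize, standing in for the victim model's loss
    return -np.sum((x - 1.0) ** 2)

def grad(x):
    return -2.0 * (x - 1.0)

def mi_fgsm(x0, eps=0.3, alpha=0.05, mu=1.0, steps=10):
    x, g = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        raw = grad(x)
        # momentum over the L1-normalized gradient
        g = mu * g + raw / (np.abs(raw).sum() + 1e-12)
        # sign step, projected back into the L-infinity eps-ball around x0
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x

x_adv = mi_fgsm(np.zeros(3))  # perturbed input with higher loss, inside the eps-ball
```

The momentum term is what makes the attack "over-exploit" in the paper's framing: it drives the iterate toward sharp loss maxima of the surrogate, which is exactly the behavior GGS's gradient-guided sampling is designed to balance.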


Details

Domains
vision · multimodal
Model Types
cnn · transformer · vlm
Threat Tags
white_box · black_box · inference_time · digital
Applications
image classification · multimodal large language models