When and Where to Attack? Stage-wise Attention-Guided Adversarial Attack on Large Vision Language Models

Jaehyun Kwak 1, Nam Cao 1, Boryeong Cho 1, Segyu Lee 1, Sumyeong Ahn 2, Se-Young Yun 1

0 citations · 33 references · arXiv (Cornell University)

Published on arXiv

2602.04356

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SAGA achieves state-of-the-art attack success rates across ten LVLMs while producing more imperceptible adversarial examples than prior input-transformation-based attacks by efficiently targeting high-attention image regions.

SAGA (Stage-wise Attention-Guided Attack)

Novel technique introduced


Adversarial attacks against Large Vision-Language Models (LVLMs) are crucial for exposing safety vulnerabilities in modern multimodal systems. Recent attacks based on input transformations, such as random cropping, suggest that spatially localized perturbations can be more effective than global image manipulation. However, randomly cropping the entire image is inherently stochastic and fails to use the limited per-pixel perturbation budget efficiently. We make two key observations: (i) regional attention scores are positively correlated with adversarial loss sensitivity, and (ii) attacking high-attention regions induces a structured redistribution of attention toward subsequent salient regions. Based on these findings, we propose Stage-wise Attention-Guided Attack (SAGA), an attention-guided framework that progressively concentrates perturbations on high-attention regions. SAGA enables more efficient use of constrained perturbation budgets, producing highly imperceptible adversarial examples while consistently achieving state-of-the-art attack success rates across ten LVLMs. The source code is available at https://github.com/jackwaky/SAGA.


Key Contributions

  • Empirically establishes that regional attention scores are positively correlated with adversarial loss sensitivity, motivating attention-focused perturbation strategies.
  • Discovers that attacking high-attention regions induces structured redistribution of model attention toward subsequent salient regions, enabling a staged attack curriculum.
  • Proposes SAGA, a stage-wise attention-guided adversarial attack that progressively concentrates the perturbation budget on high-attention regions, achieving SOTA attack success rates across ten LVLMs with improved imperceptibility.
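The stage-wise idea described above can be illustrated with a minimal sketch: at each stage, re-query an attention map, restrict the perturbation to the highest-attention patches, and run masked PGD-style steps under a shared L∞ budget. Note this is a toy illustration, not the authors' implementation — `attn_fn` and `grad_fn` are hypothetical stand-ins for the model's attention extraction and adversarial-loss gradient; see the linked repository for SAGA itself.

```python
import numpy as np

def topk_patch_mask(attn, patch=4, k=2):
    """Binary pixel mask covering the k patches with highest mean attention."""
    H, W = attn.shape
    gh, gw = H // patch, W // patch
    # mean attention score per non-overlapping patch
    scores = attn[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch).mean(axis=(1, 3))
    top = np.argsort(scores.ravel())[::-1][:k]
    mask = np.zeros_like(attn)
    for idx in top:
        r, c = divmod(idx, gw)
        mask[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 1.0
    return mask

def staged_attack(image, attn_fn, grad_fn, eps=8 / 255, alpha=1 / 255,
                  stages=3, steps=5, patch=4, k=2):
    """Hypothetical stage-wise attention-guided PGD sketch (not SAGA's exact loss)."""
    x = image.copy()
    for _ in range(stages):
        # re-query attention each stage: attacking one region shifts
        # attention toward the next salient region (observation (ii))
        mask = topk_patch_mask(attn_fn(x), patch=patch, k=k)
        for _ in range(steps):
            x = x + alpha * np.sign(grad_fn(x)) * mask      # perturb masked region only
            x = np.clip(x, image - eps, image + eps)        # enforce L_inf budget
            x = np.clip(x, 0.0, 1.0)                        # keep valid pixel range
    return x
```

Concentrating the budget this way mirrors the paper's motivation: since high-attention regions correlate with loss sensitivity, spending the per-pixel budget there is more sample-efficient than perturbing the whole image or random crops.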

🛡️ Threat Analysis

Input Manipulation Attack

SAGA crafts adversarial visual perturbations guided by attention scores to cause misclassification or output manipulation in LVLMs at inference time — a direct adversarial-example attack using gradient-based optimization on image inputs.


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
white_box, inference_time, targeted, digital
Applications
large vision-language models, multimodal ai safety