SGHA-Attack: Semantic-Guided Hierarchical Alignment for Transferable Targeted Attacks on Vision-Language Models

Haobo Wang 1, Weiqi Luo 1, Xiaojun Jia 2, Xiaochun Cao 1

0 citations · 55 references · arXiv (Cornell University)

Published on arXiv

2602.01574

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SGHA-Attack achieves stronger targeted transferability to open-source and commercial black-box VLMs than prior methods and remains effective under preprocessing and purification defenses.

SGHA-Attack

Novel technique introduced


Abstract

Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models and manipulate the outputs of black-box VLMs. Prior targeted transfer attacks often overfit to the surrogate-specific embedding space by relying on a single reference and emphasizing final-layer alignment, which underutilizes intermediate semantics and degrades transfer across heterogeneous VLMs. To address this, we propose SGHA-Attack, a Semantic-Guided Hierarchical Alignment framework that adopts multiple target references and enforces intermediate-layer consistency. Concretely, we generate a visually grounded reference pool by sampling a frozen text-to-image model conditioned on the target prompt, then select the Top-K most semantically relevant anchors under the surrogate to form a weighted mixture that provides stable optimization guidance. Building on these anchors, SGHA-Attack injects target semantics throughout the feature hierarchy by aligning intermediate visual representations at both global and spatial granularities across multiple depths, and by synchronizing intermediate visual and textual features in a shared latent subspace to provide early cross-modal supervision before the final projection. Extensive experiments on open-source and commercial black-box VLMs show that SGHA-Attack achieves stronger targeted transferability than prior methods and remains robust under preprocessing and purification defenses.


Key Contributions

  • Semantic-Guided Anchor Injection (SGAI): generates a visually grounded reference pool from a frozen T2I model and selects Top-K semantically relevant anchors as a weighted mixture optimization target to avoid single-reference overfitting.
  • Hierarchical Visual Structure Alignment (HVSA): aligns intermediate visual features at both global and spatial granularities across multiple encoder depths, reducing surrogate-specific overfitting.
  • Cross-Modal Latent Space Synchronization (CLSS): projects intermediate visual and textual features into a shared latent subspace for early cross-modal supervision before the final encoder projection.
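The SGAI step above (Top-K anchor selection with a weighted mixture target) can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' implementation: embeddings are random numpy stand-ins for real CLIP-style features, and the softmax temperature `temp` and `select_anchors` name are hypothetical choices.

```python
import numpy as np

def select_anchors(ref_embeds, text_embed, k=3, temp=0.1):
    """Pick the Top-K reference-pool embeddings most similar to the target
    text embedding, then combine them with softmax weights (SGAI sketch)."""
    # Cosine similarity between each reference image embedding and the target text
    refs = ref_embeds / np.linalg.norm(ref_embeds, axis=1, keepdims=True)
    txt = text_embed / np.linalg.norm(text_embed)
    sims = refs @ txt
    top = np.argsort(sims)[-k:]        # indices of the K most relevant anchors
    w = np.exp(sims[top] / temp)
    w /= w.sum()                       # softmax weights over the selected anchors
    # Weighted mixture of anchor embeddings serves as the optimization target
    return (w[:, None] * ref_embeds[top]).sum(axis=0)

rng = np.random.default_rng(0)
pool = rng.normal(size=(8, 16))        # hypothetical T2I reference-pool embeddings
target_txt = rng.normal(size=16)       # hypothetical target-prompt text embedding
anchor = select_anchors(pool, target_txt, k=3)
```

Averaging several high-similarity anchors, rather than chasing a single reference image, is what the paper credits with reducing single-reference overfitting.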

🛡️ Threat Analysis

Input Manipulation Attack

Proposes L∞-constrained, gradient-based adversarial perturbations optimized on surrogate VLMs that transfer targeted misclassification and caption hijacking to black-box VLMs — a classic input manipulation attack at inference time.
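The L∞-constrained optimization described here is typically realized as a PGD-style update loop. Below is a minimal single-step sketch, assuming a targeted attack that descends the alignment loss; the function name, step size `alpha`, and budget `eps` are illustrative, not the paper's settings.

```python
import numpy as np

def pgd_step(x_adv, grad, x_clean, eps=8/255, alpha=2/255):
    """One L-infinity PGD update: step along the gradient sign, then
    project back into the eps-ball around the clean image and the valid range."""
    x_adv = x_adv - alpha * np.sign(grad)                  # targeted: descend the loss
    x_adv = np.clip(x_adv, x_clean - eps, x_clean + eps)   # project into the eps-ball
    return np.clip(x_adv, 0.0, 1.0)                        # keep pixels in [0, 1]

rng = np.random.default_rng(1)
x_clean = rng.random((3, 4, 4))        # toy image tensor (C, H, W) in [0, 1]
grad = rng.normal(size=x_clean.shape)  # stand-in for the surrogate's loss gradient
x_adv = x_clean.copy()
for _ in range(5):                     # a few iterations; real attacks use many more
    x_adv = pgd_step(x_adv, grad, x_clean)
```

In the actual attack, `grad` would come from backpropagating SGHA-Attack's hierarchical alignment losses through the surrogate encoder at each iteration.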


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
white_box, black_box, inference_time, targeted, digital
Applications
vision-language models, image captioning, vlm safety guardrails