V-Attack: Targeting Disentangled Value Features for Controllable Adversarial Attacks on LVLMs
Sen Nie 1,2, Jie Zhang 1,2, Jianxin Yan 3, Shiguang Shan 1,2, Xilin Chen 1,2
Published on arXiv: 2511.20223
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
V-Attack improves the adversarial attack success rate by an average of 36% over state-of-the-art methods across diverse LVLMs by targeting disentangled value features instead of entangled patch-token representations.
V-Attack
Novel technique introduced
Adversarial attacks have evolved from simply disrupting predictions on conventional task-specific models to the more complex goal of manipulating image semantics on Large Vision-Language Models (LVLMs). However, existing methods struggle with controllability and fail to precisely manipulate the semantics of specific concepts in the image. We attribute this limitation to semantic entanglement in the patch-token representations on which adversarial attacks typically operate: global context aggregated by self-attention in the vision encoder dominates individual patch features, making them unreliable handles for precise local semantic manipulation. Our systematic investigation reveals a key insight: value features (V) computed within the transformer attention block serve as much more precise handles for manipulation. We show that V suppresses global-context channels, allowing it to retain high-entropy, disentangled local semantic information. Building on this discovery, we propose V-Attack, a novel method designed for precise local semantic attacks. V-Attack targets the value features and introduces two core components: (1) a Self-Value Enhancement module to refine V's intrinsic semantic richness, and (2) a Text-Guided Value Manipulation module that leverages text prompts to locate the source concept and optimize it toward a target concept. By bypassing the entangled patch features, V-Attack achieves highly effective semantic control. Extensive experiments across diverse LVLMs, including LLaVA, InternVL, DeepseekVL and GPT-4o, show that V-Attack improves the attack success rate by an average of 36% over state-of-the-art methods, exposing critical vulnerabilities in modern visual-language understanding. Our code and data are available at https://github.com/Summu77/V-Attack.
Key Contributions
- Identifies that value features (V) in transformer attention blocks retain high-entropy, disentangled local semantic information, making them more precise handles for adversarial manipulation than patch-token representations.
- Proposes V-Attack with a Self-Value Enhancement module to refine intrinsic semantic richness of V features, and a Text-Guided Value Manipulation module to locate and remap source concepts to target concepts via text prompts.
- Demonstrates a 36% average improvement in attack success rate over state-of-the-art methods across LLaVA, InternVL, DeepseekVL, and GPT-4o.
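The text-guided localization step described above can be sketched as a similarity search: score each patch's value feature against a text embedding of the source concept and keep the best-matching patches. This is a minimal illustration under assumed shapes and a cosine-similarity scoring rule; the function name, toy dimensions, and top-k selection are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def locate_concept_patches(value_feats, text_embed, k=3):
    """Return indices of the k patches whose value (V) features best match a
    source-concept text embedding (illustrative scoring rule: cosine similarity;
    the paper's exact localization mechanism may differ)."""
    # value_feats: (num_patches, dim), text_embed: (dim,)
    scores = F.cosine_similarity(value_feats, text_embed.unsqueeze(0), dim=-1)
    return scores.topk(k).indices

torch.manual_seed(0)
v = torch.randn(9, 16)   # toy (patches, dim) value features from a vision encoder
t = torch.randn(16)      # toy text embedding of the source concept
idx = locate_concept_patches(v, t)   # indices of the patches to manipulate
```

In the full method, the selected patches would then be the targets of the value-feature manipulation rather than the entangled patch-token outputs.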
🛡️ Threat Analysis
V-Attack crafts adversarial perturbations on visual inputs using gradient-based optimization, targeting internal value (V) features of the vision encoder's transformer attention blocks to cause targeted semantic misrepresentation at inference time — a direct adversarial input manipulation attack.
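The attack pattern described here can be sketched as a PGD-style loop that optimizes an L-infinity-bounded image perturbation so the value (V) features of the located patches move toward a target-concept embedding. Everything below is an assumption-laden toy: the encoder, layer layout, loss, and hyperparameters (`eps`, `alpha`, `steps`) are illustrative stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAttnEncoder(nn.Module):
    """Minimal stand-in for one vision-transformer attention block."""
    def __init__(self, dim=16):
        super().__init__()
        self.patch_embed = nn.Linear(3, dim)  # toy patchifier: 3 values per patch
        self.v_proj = nn.Linear(dim, dim)     # the value projection being attacked

    def value_features(self, image):
        # image: (num_patches, 3) toy "pixels"; real models patchify a 2-D image
        return self.v_proj(self.patch_embed(image))  # (num_patches, dim)

def v_attack(encoder, image, target_v, patch_mask, eps=8/255, steps=20, alpha=2/255):
    """PGD under an L-infinity budget: push masked patches' V toward target_v."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        v = encoder.value_features(image + delta)
        # negative cosine similarity, computed only on the located source patches
        loss = -F.cosine_similarity(v[patch_mask], target_v[patch_mask], dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend the loss (raise similarity)
            delta.clamp_(-eps, eps)             # project back into the L-inf ball
        delta.grad.zero_()
    return delta.detach()

torch.manual_seed(0)
enc = TinyAttnEncoder()
image = torch.rand(9, 3)
target_v = torch.randn(9, 16)                  # stand-in target-concept V features
mask = torch.zeros(9, dtype=torch.bool)
mask[2:5] = True                               # "located" source-concept patches
delta = v_attack(enc, image, target_v, mask)   # bounded adversarial perturbation
```

The key difference from standard pixel-space attacks is the loss target: gradients flow from an internal value-feature objective rather than from the model's final output, which is what gives the method its localized semantic control.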