VISOR: Visual Input-based Steering for Output Redirection in Vision-Language Models
Mansi Phute 1,2, Ravikumar Balakrishnan 2
Published on arXiv (arXiv:2508.08521)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
A single adversarial steering image achieves up to 25% behavioral shift on VLM alignment tasks, exceeding activation steering vectors for negative steering and dramatically outperforming system prompting (3-4%), while maintaining 99.9% performance on 14,000 unrelated MMLU tasks.
VISOR
Novel technique introduced
Vision Language Models (VLMs) are increasingly being used in a broad range of applications, bringing their security and behavioral control to the forefront. Existing approaches for behavioral control or output redirection fall short: system prompting in VLMs is easily detectable and often ineffective, while activation-based steering vectors require invasive runtime access to model internals, making them incompatible with API-based services and closed-source deployments. We introduce VISOR (Visual Input-based Steering for Output Redirection), a novel method that achieves sophisticated behavioral control through optimized visual inputs alone. By crafting universal steering images that induce target activation patterns, VISOR enables practical deployment across all VLM serving modalities while remaining imperceptible compared to explicit textual instructions. We validate VISOR on LLaVA-1.5-7B across three critical alignment tasks: refusal, sycophancy, and survival instinct. A single 150KB steering image matches steering vector performance within 1-2% for positive behavioral shifts while dramatically exceeding it for negative steering, achieving up to 25% shifts from baseline compared to steering vectors' modest changes. Unlike system prompting (3-4% shifts), VISOR provides robust bidirectional control while maintaining 99.9% performance on 14,000 unrelated MMLU tasks. Beyond eliminating runtime overhead and model access requirements, VISOR exposes a critical security vulnerability: adversaries can achieve sophisticated behavioral manipulation through visual channels alone, bypassing text-based defenses. Our work fundamentally reimagines multimodal model control and highlights the urgent need for defenses against visual steering attacks.
Key Contributions
- Proposes VISOR, a method to achieve VLM behavioral control through optimized steering images that induce target activation patterns without requiring runtime model internals access.
- Demonstrates that a single 150KB adversarial image achieves up to 25% behavioral shift (refusal, sycophancy, survival instinct) — far exceeding system prompting (3-4%), matching activation steering vectors on positive shifts, and exceeding them on negative steering.
- Exposes a critical security vulnerability: visual-channel-only attacks can achieve sophisticated VLM alignment manipulation while bypassing text-based defenses and preserving 99.9% unrelated-task performance.
🛡️ Threat Analysis
VISOR crafts optimized adversarial visual inputs (steering images) that induce specific target activation patterns in VLMs, constituting an input manipulation attack at inference time — the images are adversarially crafted against the model's internals to force behavioral shifts.
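The paper does not release reference code, but the core idea — gradient-optimizing a single image's pixels so that the model's internal activations match a target "behavior" pattern — can be sketched in a few lines. The snippet below is a minimal illustration, not VISOR itself: it substitutes a tiny random linear encoder for LLaVA-1.5-7B's vision-to-hidden pathway, and a fixed random vector for the target activation direction (which in the paper would be derived from contrastive behavioral prompts). All names and dimensions here are hypothetical.

```python
import torch

# Toy stand-in for a VLM's vision-to-activation pathway (hypothetical;
# the paper attacks LLaVA-1.5-7B's real internal activations).
torch.manual_seed(0)
encoder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 8 * 8, 16),  # hypothetical hidden size
)
for p in encoder.parameters():
    p.requires_grad_(False)  # model is frozen; only pixels are optimized

# Hypothetical target activation pattern (in the paper, a behavioral
# steering direction); here just a fixed random vector.
target = torch.randn(16)

# Optimize the pixels of a single "steering image" so the encoder's
# activations move toward the target pattern.
image = torch.zeros(1, 3, 8, 8, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    act = encoder(image).squeeze(0)
    loss = torch.nn.functional.mse_loss(act, target)
    loss.backward()
    opt.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixels in a valid image range

final_loss = torch.nn.functional.mse_loss(
    encoder(image).squeeze(0), target
).item()
print(f"activation-matching loss: {final_loss:.4f}")
```

The attacker needs gradient (white-box) access only once, offline, to craft the image; at inference time the resulting image is just an ordinary input, which is what makes the attack deployable against API-served models and invisible to text-based defenses.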