Activation Steering Meets Preference Optimization: Defense Against Jailbreaks in Vision Language Models
Sihao Wu 1, Gaojie Jin 2, Wei Huang 3, Jianhong Wang 4, Xiaowei Huang 1
Published on arXiv
2509.00373
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
SPO-VLM improves safety against jailbreak and adversarial attacks on VLMs while preserving visual understanding capabilities compared to prior activation-steering baselines.
SPO-VLM
Novel technique introduced
Vision Language Models (VLMs) have demonstrated impressive capabilities in integrating visual and textual information for understanding and reasoning, but they remain highly vulnerable to adversarial attacks. While activation steering has emerged as a promising defense, existing approaches often rely on task-specific contrastive prompts to extract harmful directions, which yields suboptimal robustness and can degrade visual grounding. To address these limitations, we propose Sequence-Level Preference Optimization for VLMs (SPO-VLM), a novel two-stage defense framework that combines activation-level intervention with policy-level optimization to enhance model robustness. In Stage I, we compute adaptive, layer-specific steering vectors from diverse data sources, enabling generalized suppression of harmful behaviors during inference. In Stage II, we refine these steering vectors through a sequence-level preference optimization process. This stage integrates automated toxicity assessment and visual-consistency rewards based on caption-image alignment to achieve safe and semantically grounded text generation. The two-stage structure of SPO-VLM balances efficiency and effectiveness, combining a lightweight mitigation foundation in Stage I with deeper policy refinement in Stage II. Extensive experiments show that SPO-VLM enhances safety against attacks via activation steering and preference optimization while maintaining strong performance on benign tasks, without compromising visual understanding capabilities. We will release our code, model weights, and evaluation toolkit to support reproducibility and future research. Warning: this paper may contain examples of offensive or harmful text and images.
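The Stage II reward described above combines a toxicity signal with a caption-image alignment signal. A minimal sketch of one plausible combination is below; the function name, the linear weighting, and the weight values are illustrative assumptions, not details taken from the paper:

```python
def sequence_reward(toxicity: float, img_align: float,
                    w_tox: float = 1.0, w_vis: float = 1.0) -> float:
    """Combine two sequence-level reward signals (hypothetical form).

    toxicity:  automated toxicity score in [0, 1], higher = more toxic
    img_align: caption-image alignment score in [0, 1], higher = better grounded
    Returns a scalar reward that prefers safe AND visually grounded outputs.
    """
    # Reward low toxicity and high visual consistency; the linear mix and
    # weights are assumptions for illustration only.
    return w_tox * (1.0 - toxicity) + w_vis * img_align
```

A reward of this shape penalizes a degenerate defense that achieves safety by ignoring the image: a non-toxic but ungrounded caption scores lower than a non-toxic, well-grounded one.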
Key Contributions
- Stage I: Adaptive layer-specific activation steering vectors derived from diverse data sources to generalize suppression of harmful behaviors at inference time
- Stage II: Sequence-level preference optimization refining steering vectors using automated toxicity assessment and caption-image visual-consistency rewards
- Two-stage SPO-VLM framework that enhances jailbreak robustness without degrading visual grounding or benign task performance
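Stage I's activation-level intervention can be illustrated with a minimal sketch: shift one layer's hidden states away from a direction associated with harmful behavior. All names, shapes, and the projection-based update here are illustrative assumptions; the paper's actual steering-vector computation and layer-specific strengths are not reproduced:

```python
import numpy as np

def apply_steering(hidden: np.ndarray, steer_vec: np.ndarray,
                   alpha: float = 1.0) -> np.ndarray:
    """Steer activations away from a 'harmful' direction (illustrative).

    hidden:    (seq_len, d_model) activations at one transformer layer
    steer_vec: (d_model,) direction associated with harmful behavior
    alpha:     steering strength (SPO-VLM uses layer-specific vectors;
               a per-layer alpha is an assumption here)
    """
    v = steer_vec / np.linalg.norm(steer_vec)  # unit-norm direction
    # Remove alpha times each token's projection onto the harmful direction.
    return hidden - alpha * (hidden @ v)[:, None] * v
```

With `alpha = 1.0` this fully projects out the harmful direction at every token position while leaving the orthogonal components of the activations untouched, which is one way a steering defense can suppress a behavior without rewriting the rest of the representation.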
🛡️ Threat Analysis
The paper explicitly defends against adversarial visual inputs that cause VLMs to produce harmful or jailbroken outputs. Adversarial visual attacks on VLMs fall under ML01, and because the attack objective is a jailbreak of the language model, LLM01 also applies per the dual-tagging rule for multimodal adversarial attacks.