Defense · 2026

Steering to Say No: Configurable Refusal via Activation Steering in Vision Language Models

Jiaxi Yang , Shicheng Liu , Yuchen Yang , Dongwon Lee

0 citations · 46 references · arXiv (Cornell University)


Published on arXiv: 2602.07013

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CR-VLM achieves effective, efficient, and configurable refusal across multiple VLMs and datasets, reducing both under-refusal and over-refusal compared to one-size-fits-all baselines.

CR-VLM

Novel technique introduced


With the rapid advancement of Vision Language Models (VLMs), refusal mechanisms have become a critical component for ensuring responsible and safe model behavior. However, existing refusal strategies are largely one-size-fits-all and fail to adapt to diverse user needs and contextual constraints, leading to either under-refusal or over-refusal. In this work, we first examine these challenges and then develop Configurable Refusal in VLMs (CR-VLM), a robust and efficient approach to configurable refusal based on activation steering. CR-VLM consists of three integrated components: (1) extracting a configurable refusal vector via a teacher-forced mechanism that amplifies the refusal signal; (2) a gating mechanism that mitigates over-refusal by preserving acceptance for in-scope queries; and (3) a counterfactual vision enhancement module that aligns visual representations with refusal requirements. Comprehensive experiments across multiple datasets and VLMs demonstrate that CR-VLM achieves effective, efficient, and robust configurable refusal, offering a scalable path toward user-adaptive safety alignment in VLMs.


Key Contributions

  • Configurable refusal vector extracted via teacher-forced mechanism to amplify and steer refusal signals in VLMs
  • Gating mechanism that preserves acceptance for in-scope queries, mitigating over-refusal
  • Counterfactual vision enhancement module that aligns visual representations with refusal requirements
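The first two components can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the mean-difference extraction, the gate threshold `tau`, and all function names here are illustrative assumptions, showing only the generic pattern of extracting a steering direction from contrastive activations and gating its application so in-scope queries are left untouched.

```python
import numpy as np

def refusal_vector(refuse_acts, accept_acts):
    """Illustrative steering-vector extraction: the unit-normalized
    mean difference between activations from refusal-eliciting and
    acceptance-eliciting prompts (a common contrastive recipe; the
    paper's teacher-forced extraction is more involved)."""
    v = refuse_acts.mean(axis=0) - accept_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(h, v, alpha, tau=0.0):
    """Gated steering: add the refusal direction to hidden state h,
    but only when h already projects onto that direction beyond a
    threshold tau, so in-scope (acceptance) queries pass through
    unmodified and over-refusal is avoided."""
    score = h @ v          # projection onto the refusal direction
    if score < tau:        # gate closed: query looks in-scope
        return h
    return h + alpha * v   # gate open: push further toward refusal

# Toy demo with synthetic activations (16-dim hidden states).
rng = np.random.default_rng(0)
refuse = rng.normal(1.0, 0.1, size=(8, 16))   # refusal-eliciting prompts
accept = rng.normal(-1.0, 0.1, size=(8, 16))  # benign, in-scope prompts
v = refusal_vector(refuse, accept)
h_refuse, h_accept = refuse[0], accept[0]
```

Under these toy statistics, `steer(h_accept, v, alpha=2.0)` returns the hidden state unchanged (the gate closes on in-scope inputs), while `steer(h_refuse, v, alpha=2.0)` shifts it further along the refusal direction.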

🛡️ Threat Analysis


Details

Domains
vision, nlp, multimodal
Model Types
vlm, transformer
Threat Tags
inference_time
Applications
vision-language models, content moderation, safety alignment