
Principled Steering via Null-space Projection for Jailbreak Defense in Vision-Language Models

Xingyu Zhu 1,2, Beier Zhu 1,2, Shuo Wang 1,2, Junfeng Fang , Kesen Zhao 3, Hanwang Zhang 3, Xiangnan He 1,2

Published on arXiv

2603.22094

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reduces average attack success rate by over 15% on MiniGPT-4 under various jailbreak attacks while maintaining comparable performance to the original model on general benchmarks

NullSteer

Novel technique introduced


As vision-language models (VLMs) are increasingly deployed in open-world scenarios, they can be easily induced by visual jailbreak attacks to generate harmful content, posing serious risks to model safety and trustworthy use. Recent activation steering methods inject directional vectors into model activations during inference to induce refusal behaviors and have demonstrated effectiveness. However, a steering vector may both enhance refusal ability and cause over-refusal, thereby degrading model performance on benign inputs. Moreover, lacking theoretical grounding, these methods still suffer from limited robustness and effectiveness. To better balance safety and utility, we propose NullSteer, a null-space projected activation defense framework. Our method constructs refusal directions within model activations through a linear transformation: it maintains zero perturbation within the benign subspace while dynamically inducing refusal along potentially harmful directions, thereby theoretically achieving safety enhancement without impairing the model's general capabilities. Extensive experiments show that NullSteer significantly reduces harmful outputs under various jailbreak attacks (average ASR reduction of over 15% on MiniGPT-4) while maintaining comparable performance to the original model on general benchmarks.


Key Contributions

  • Null-space projection framework that steers VLM activations to induce refusal on harmful inputs while maintaining zero perturbation on benign subspace
  • Theoretically-grounded approach balancing safety (jailbreak defense) and utility (preserving general capabilities)
  • Reduces average attack success rate by over 15% on MiniGPT-4 across various jailbreak attacks while maintaining comparable performance on general benchmarks
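The core mechanism described above can be sketched in a few lines: estimate the benign activation subspace, project the refusal steering vector onto its orthogonal complement (the null space of the benign subspace), and add the projected vector to the hidden state. This is an illustrative reconstruction, not the paper's implementation; the function name `null_space_steer` and the SVD-based subspace estimate are assumptions for the sketch.

```python
import numpy as np

def null_space_steer(h, benign_acts, refusal_dir, alpha=1.0, rank=8):
    """Steer activation h along refusal_dir, restricted to the
    orthogonal complement of the benign activation subspace.

    h           : (d,)   activation vector to steer
    benign_acts : (n, d) activations collected on benign inputs
    refusal_dir : (d,)   raw refusal steering direction
    alpha       : steering strength
    rank        : assumed dimensionality of the benign subspace
    """
    # Top-`rank` left singular vectors of the benign activations
    # span (an estimate of) the benign subspace.
    U, _, _ = np.linalg.svd(benign_acts.T, full_matrices=False)
    B = U[:, :rank]                          # (d, rank) benign basis
    # Projector onto the null space of the benign subspace:
    # any component of the steering vector inside the benign
    # subspace is removed, so benign directions see zero perturbation.
    P_null = np.eye(B.shape[0]) - B @ B.T
    v = P_null @ refusal_dir
    v = v / (np.linalg.norm(v) + 1e-8)       # unit-norm steering vector
    return h + alpha * v
```

By construction, the added perturbation `alpha * v` is orthogonal to every direction in the estimated benign subspace, which is how the method can steer toward refusal without (in theory) shifting activations of benign inputs.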

🛡️ Threat Analysis

Input Manipulation Attack

Defends against visual jailbreak attacks (adversarial visual inputs designed to manipulate VLM outputs at inference time).


Details

Domains
multimodal, vision, nlp
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, targeted
Applications
vision-language models, jailbreak defense, safe vlm deployment