defense · arXiv · Mar 16, 2026
Yewon Han, Yumin Seol, EunGyung Kong et al. · Seoul National University · Mobilint
Inference-time defense that projects LVLM cross-modal features to simultaneously improve jailbreak robustness and general task performance
Input Manipulation Attack · Prompt Injection · multimodal · vision · nlp
Existing jailbreak defence frameworks for Large Vision-Language Models often suffer from a safety-utility tradeoff, where strengthening safety inadvertently degrades performance on general visually grounded reasoning tasks. In this work, we investigate whether safety and utility are inherently antagonistic objectives. We focus on a modality-induced bias direction consistently observed across datasets, which arises from suboptimal coupling between the Large Language Model backbone and the visual encoder. We further demonstrate that this direction undermines performance on both objectives. Leveraging this insight, we propose Two Birds, One Projection, an efficient inference-time jailbreak defence that projects cross-modal features onto the null space of the identified bias direction, removing the corresponding components. Requiring only a single forward pass, our method effectively breaks the conventional tradeoff, simultaneously improving both safety and utility across diverse benchmarks.
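The core operation described in the abstract, projecting features onto the null space of a single bias direction, can be sketched as follows. This is a minimal illustration of rank-one null-space projection, not the authors' implementation; the function name, array shapes, and the use of a random toy direction are all assumptions for the example.

```python
import numpy as np

def nullspace_project(features, bias_dir):
    """Remove the component of each feature vector along a bias direction.

    features: (n, d) array standing in for cross-modal features (hypothetical shape).
    bias_dir: (d,) bias direction, e.g. one identified offline across datasets.
    Returns features projected onto the null space of bias_dir: x - (x . u) u,
    where u is the unit-normalized bias direction.
    """
    u = bias_dir / np.linalg.norm(bias_dir)
    return features - np.outer(features @ u, u)

# Toy check: after projection, features are orthogonal to the bias direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))      # 4 toy feature vectors of dimension 8
d = rng.normal(size=8)           # stand-in bias direction
Xp = nullspace_project(X, d)
print(np.allclose(Xp @ (d / np.linalg.norm(d)), 0.0))  # True
```

Because the projection is a single rank-one update applied to features at inference time, it adds negligible cost on top of the model's forward pass, which is consistent with the single-forward-pass claim above.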
vlm · llm · multimodal