defense 2026

Two Birds, One Projection: Harmonizing Safety and Utility in LVLMs via Inference-time Feature Projection

Yewon Han 1, Yumin Seol 1, EunGyung Kong 2, Minsoo Jo 1, Taesup Kim 1


Published on arXiv

2603.14825

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Simultaneously improves both safety against jailbreaks and utility on general visual-grounded reasoning tasks through single-pass feature projection

TBOP (Two Birds, One Projection)

Novel technique introduced


Existing jailbreak defense frameworks for Large Vision-Language Models (LVLMs) often suffer from a safety-utility tradeoff, where strengthening safety inadvertently degrades performance on general visual-grounded reasoning tasks. In this work, we investigate whether safety and utility are inherently antagonistic objectives. We focus on a modality-induced bias direction consistently observed across datasets, which arises from suboptimal coupling between the Large Language Model backbone and the visual encoders. We further demonstrate that this direction undermines performance on both tasks. Leveraging this insight, we propose Two Birds, One Projection (TBOP), an efficient inference-time jailbreak defense that projects cross-modal features onto the null space of the identified bias direction, removing the corresponding components. Requiring only a single forward pass, our method effectively breaks the conventional tradeoff, simultaneously improving both safety and utility across diverse benchmarks.
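The core operation described in the abstract — projecting cross-modal features onto the null space of a single bias direction — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name `project_out_direction` and the way the bias direction is obtained are assumptions; the paper estimates the direction from hidden states across datasets.

```python
import numpy as np

def project_out_direction(features: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Project feature vectors onto the null space of a bias direction.

    features: (n, d) array of cross-modal hidden states
    bias_dir: (d,) estimated modality-induced bias direction (hypothetical input;
              how it is estimated is described in the paper, not here)
    """
    # Normalize the bias direction to a unit vector v.
    v = bias_dir / np.linalg.norm(bias_dir)
    # Remove the component of each feature along v: h' = h - (h . v) v.
    # The result is orthogonal to v, i.e. it lies in the null space of v^T.
    return features - np.outer(features @ v, v)

# Sanity check: projected features have zero component along the bias direction.
h = np.random.randn(4, 8)
v = np.random.randn(8)
h_proj = project_out_direction(h, v)
assert np.allclose(h_proj @ (v / np.linalg.norm(v)), 0.0)
```

Because this is a fixed linear map applied to features during a single forward pass, it adds essentially no inference cost, which is consistent with the single-pass claim above.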


Key Contributions

  • Identifies a modality-induced bias direction in LVLMs arising from suboptimal cross-modal coupling that degrades both safety and utility
  • Proposes TBOP, an efficient single-forward-pass inference-time defense via null-space projection
  • Breaks the conventional safety-utility tradeoff, improving both jailbreak defense and general visual reasoning performance

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial multimodal inputs (jailbreak attacks on LVLMs that manipulate visual inputs to bypass safety guardrails) at inference time.


Details

Domains
multimodal, vision, nlp
Model Types
vlm, llm, multimodal
Threat Tags
inference_time, digital
Applications
vision-language models, multimodal ai safety