defense 2026

Diagnosing and Repairing Unsafe Channels in Vision-Language Models via Causal Discovery and Dual-Modal Safety Subspace Projection

Jinhu Fu 1,2, Yihang Lou 3, Qingyi Si 3, Shudong Zhang 3, Yan Bai 4, Sen Su 1,2

0 citations


Published on arXiv

2603.27240

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CARE significantly enhances safety robustness across multiple benchmarks without degrading general multimodal capabilities, outperforming activation-steering and alignment baselines, with good transferability to unseen attacks.

CARE

Novel technique introduced


Large Vision-Language Models (LVLMs) have achieved impressive performance across multimodal understanding and reasoning tasks, yet their internal safety mechanisms remain opaque and poorly controlled. In this work, we present a comprehensive framework for diagnosing and repairing unsafe channels within LVLMs (CARE). We first perform causal mediation analysis to identify neurons and layers that are causally responsible for unsafe behaviors. Based on these findings, we introduce a dual-modal safety subspace projection method that learns generalized safety subspaces for both visual and textual modalities through generalized eigen-decomposition between benign and malicious activations. During inference, activations are dynamically projected toward these safety subspaces via a hybrid fusion mechanism that adaptively balances visual and textual corrections, effectively suppressing unsafe features while preserving semantic fidelity. Extensive experiments on multiple safety benchmarks demonstrate that our causal-subspace repair framework significantly enhances safety robustness without degrading general multimodal capabilities, outperforming prior activation steering and alignment-based baselines. Additionally, our method exhibits good transferability, defending against unseen attacks.
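The subspace-learning step described in the abstract can be sketched as a generalized eigen-decomposition between benign and malicious activation covariances. This is an illustrative reconstruction, not the paper's implementation: the function name, the regularization constant `eps`, and the use of `scipy.linalg.eigh` for the generalized problem are all assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def learn_safety_subspace(benign_acts, malicious_acts, k=4, eps=1e-6):
    """Solve C_benign v = w * C_malicious v and keep the top-k
    generalized eigenvectors: directions where benign activation
    variance dominates malicious variance (hypothetical sketch)."""
    d = benign_acts.shape[1]
    c_b = np.cov(benign_acts, rowvar=False) + eps * np.eye(d)
    c_m = np.cov(malicious_acts, rowvar=False) + eps * np.eye(d)
    w, v = eigh(c_b, c_m)                    # eigenvalues in ascending order
    basis = v[:, np.argsort(w)[::-1][:k]]    # top-k generalized eigenvectors
    # Orthonormalize so the basis can be used for orthogonal projection.
    q, _ = np.linalg.qr(basis)
    return q

rng = np.random.default_rng(0)
benign = rng.normal(size=(500, 8))
malicious = rng.normal(size=(500, 8)) * 0.2
malicious[:, 0] *= 25.0   # dimension 0 carries the "unsafe" variance
Q = learn_safety_subspace(benign, malicious, k=3)
```

With this toy data, the learned basis concentrates on the dimensions where malicious variance is small, i.e. it largely avoids the "unsafe" dimension 0.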


Key Contributions

  • Causal mediation analysis framework to identify neurons and layers responsible for unsafe VLM behaviors
  • Dual-modal safety subspace projection method using generalized eigen-decomposition to learn safety subspaces for visual and textual modalities
  • Hybrid fusion mechanism that adaptively balances visual and textual safety corrections while preserving semantic fidelity
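The hybrid fusion mechanism in the last bullet might be sketched as an adaptive convex blend of visual and textual corrections. The gating rule below (weighting by each modality's projection residual) and the `strength` parameter are illustrative assumptions, not the paper's formula:

```python
import numpy as np

def hybrid_fusion(h, q_vis, q_txt, strength=0.5):
    """Blend visual and textual safety corrections for hidden state h.

    Each correction pulls h toward one modality's safety subspace
    (column-orthonormal basis q_vis / q_txt). The modality whose
    subspace h is farther from receives the larger weight (assumed gate).
    """
    delta_v = q_vis @ (q_vis.T @ h) - h   # correction toward visual subspace
    delta_t = q_txt @ (q_txt.T @ h) - h   # correction toward textual subspace
    r_v, r_t = np.linalg.norm(delta_v), np.linalg.norm(delta_t)
    alpha = r_v / (r_v + r_t + 1e-9)      # adaptive gate in [0, 1]
    return h + strength * (alpha * delta_v + (1 - alpha) * delta_t)

rng = np.random.default_rng(1)
d = 8
q_vis, _ = np.linalg.qr(rng.normal(size=(d, 3)))
q_txt, _ = np.linalg.qr(rng.normal(size=(d, 3)))
h = rng.normal(size=d)
h_safe = hybrid_fusion(h, q_vis, q_txt)
```

When both bases coincide and `strength=1.0`, the blend reduces to a plain orthogonal projection, which is a useful sanity check on the fusion rule.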

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial inputs (malicious visual and textual inputs) that trigger unsafe VLM behaviors by projecting activations toward safety subspaces at inference time.
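At inference time, the repair described above amounts to nudging each activation partway toward its projection onto the learned safety subspace. A minimal per-step sketch, assuming a column-orthonormal basis `Q` and a hypothetical blend coefficient `lam`:

```python
import numpy as np

def project_toward_subspace(h, Q, lam=0.7):
    """Move activation h a fraction lam of the way toward its
    orthogonal projection onto the subspace spanned by Q's columns."""
    proj = Q @ (Q.T @ h)                  # orthogonal projection onto subspace
    return (1.0 - lam) * h + lam * proj

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(8, 3)))
h = rng.normal(size=8)
h_fixed = project_toward_subspace(h, Q)
```

The blend shrinks the off-subspace component of `h` by a factor of `1 - lam`, so `lam=1.0` recovers the exact projection while smaller values trade safety correction against semantic fidelity.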


Details

Domains
multimodal, vision, nlp
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, multimodal
Applications
vision-language models, multimodal ai safety