
Robust Multimodal Safety via Conditional Decoding

Anurag Kumar 1,2, Raghuveer Peri 2, Jon Burnsky 2, Alexandru Nelus 2, Rohit Paturi 2, Srikanth Vishnubhotla 2, Yanjun Qi 2



Published on arXiv (2604.00310)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Lowers the average attack success rate by more than 97% across text, vision, and audio jailbreak attacks on MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, while maintaining utility on benign inputs

CASA (Classification Augmented with Safety Attention)

Novel technique introduced


Multimodal large language models (MLLMs) often experience degraded safety alignment when harmful queries exploit cross-modal interactions. Models aligned on text alone show a higher rate of successful attacks when extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), that uses the internal representations of MLLMs to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without the need for modality-specific safety fine-tuning. On diverse benchmarks such as MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple and generalizable framework for improving multimodal LLM safety.


Key Contributions

  • Novel conditional decoding strategy (CASA) that predicts binary safety token before generation using internal MLLM representations
  • Safety attention module that enhances malicious query detection without external classifiers or modality-specific fine-tuning
  • Reduces average attack success rate by >97% across text, image, and audio jailbreak attacks while maintaining utility on benign inputs
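The gating idea behind these contributions can be sketched in a few lines. The toy below is an illustrative stand-in, not the paper's implementation: the class name `SafetyAttention`, the random placeholder weights, and the `[SAFE]`/`[UNSAFE]` token strings are all hypothetical. It shows the shape of the mechanism only: a learned query attends over the model's final-layer hidden states, a linear head scores the pooled representation, and decoding proceeds only when the query is classified safe.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

class SafetyAttention:
    """Toy stand-in for a safety attention module: a learned query
    attends over last-layer hidden states, and a linear head scores the
    pooled vector. Weights here are random placeholders (untrained)."""
    def __init__(self, d_model, seed=0):
        r = random.Random(seed)
        self.query = [r.gauss(0, 1) for _ in range(d_model)]
        self.w = [r.gauss(0, 1) for _ in range(d_model)]
        self.b = 0.0

    def safety_logit(self, hidden_states):
        # hidden_states: list of d_model-dim vectors (one per input token)
        attn = softmax([dot(h, self.query) for h in hidden_states])
        pooled = [sum(a * h[i] for a, h in zip(attn, hidden_states))
                  for i in range(len(self.query))]
        return dot(pooled, self.w) + self.b

def conditional_decode(hidden_states, scorer, generate_fn):
    """CASA-style gating: emit a binary safety token first, then either
    refuse or fall through to ordinary response generation."""
    if scorer.safety_logit(hidden_states) > 0.0:  # predicted unsafe
        return "[UNSAFE] I can't help with that request."
    return "[SAFE] " + generate_fn()

r = random.Random(1)
scorer = SafetyAttention(8)
hidden = [[r.gauss(0, 1) for _ in range(8)] for _ in range(5)]
reply = conditional_decode(hidden, scorer, lambda: "Sure, here you go.")
print(reply.split()[0])
```

Because the safety decision reuses the model's own hidden states, the cost at inference is one extra scored token before generation, which is why no external classifier or auxiliary head is needed.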

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial multimodal inputs designed to bypass safety alignment, including adversarial images (SD, typography attacks), adversarial audio (spelled-out harmful words), and cross-modal evasion attacks.


Details

Domains
multimodal, nlp, vision, audio
Model Types
llm, vlm, multimodal, transformer
Threat Tags
inference_time, black_box
Datasets
MM-SafetyBench, JailbreakV-28k, HarmBench, SafetyBench, JailbreakBench, Alpaca, MME
Applications
multimodal llm safety, jailbreak defense, cross-modal attack mitigation