PRISM: Robust VLM Alignment with Principled Reasoning for Integrated Safety in Multimodality
Nanxi Li, Zhengyue Zhao, Chaowei Xiao
Published on arXiv (arXiv:2508.18649)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves a 0.15% attack success rate on JailbreakV-28K for Qwen2-VL and a 90% improvement over the prior best method on VLBreakBench for LLaVA-1.5, while preserving model utility.
PRISM
Novel technique introduced
Safeguarding vision-language models (VLMs) is a critical challenge: existing methods often suffer from over-defense, which harms utility, or rely on shallow alignment that fails to detect complex threats requiring deep reasoning. To this end, we introduce PRISM (Principled Reasoning for Integrated Safety in Multimodality), a System 2-like framework that aligns VLMs by embedding a structured, safety-aware reasoning process. Our framework consists of two key components: PRISM-CoT, a dataset that teaches safety-aware chain-of-thought reasoning, and PRISM-DPO, a preference dataset generated via Monte Carlo Tree Search (MCTS) that further refines this reasoning through Direct Preference Optimization (DPO) to establish a precise safety boundary. Comprehensive evaluations demonstrate PRISM's effectiveness: it achieves remarkably low attack success rates, including 0.15% on JailbreakV-28K for Qwen2-VL and a 90% improvement over the previous best method on VLBreakBench for LLaVA-1.5. PRISM also exhibits strong robustness against adaptive attacks, significantly increasing computational costs for adversaries, and generalizes effectively to out-of-distribution challenges, reducing the attack success rate to just 8.70% on the challenging multi-image MIS benchmark. Remarkably, this robust defense is achieved while preserving, and in some cases enhancing, model utility. To promote reproducibility, we have made our code, data, and model weights available at https://github.com/SaFoLab-WISC/PRISM.
Key Contributions
- PRISM-CoT: a safety-aware chain-of-thought dataset with four structured reasoning stages (Problem, Caption, Reasoning, Output) that teaches VLMs to explicitly identify multimodal safety violations
- PRISM-DPO: a preference optimization dataset generated via Monte Carlo Tree Search providing step-level preference pairs to refine the safety boundary via Direct Preference Optimization
- Addresses the under-studied cross-modal combination attack category, where neither the text nor the image is individually harmful but their combination triggers unsafe outputs
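The two datasets above can be sketched as data structures. The following is a minimal, illustrative Python sketch: a PRISM-CoT example with the four structured reasoning stages, and a step-level preference pair of the kind PRISM-DPO supplies to Direct Preference Optimization. All field names, example text, and the `PrismCotExample`/`preference_pair` schema are assumptions for illustration, not the paper's actual data format.

```python
from dataclasses import dataclass

# Illustrative schema (assumed, not the paper's actual format):
# a PRISM-CoT example walks through four structured reasoning stages.
@dataclass
class PrismCotExample:
    problem: str    # restate the user's request
    caption: str    # describe the image content
    reasoning: str  # safety analysis across both modalities
    output: str     # final safe (or refusing) response

example = PrismCotExample(
    problem="User asks how to 'use the item shown' in the attached image.",
    caption="The image depicts a set of lock-picking tools.",
    reasoning=(
        "The text alone is benign and the image alone is benign, but "
        "combined they request illicit instructions; this is a "
        "cross-modal combination violation."
    ),
    output="I can't help with that request.",
)

# Step-level preference pair (PRISM-DPO style, hypothetical fields):
# MCTS scores candidate continuations of a partial reasoning trace;
# the higher-value step becomes 'chosen', the lower-value 'rejected'.
preference_pair = {
    "prefix": [example.problem, example.caption],  # shared reasoning prefix
    "chosen": example.reasoning,                   # safer next step
    "rejected": "The tools look ordinary, so the request seems fine.",
}

assert preference_pair["chosen"] != preference_pair["rejected"]
```

Training DPO on step-level pairs like this, rather than on whole responses, is what lets the method refine individual reasoning steps along the safety boundary.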