Defense · 2025

DefenSee: Dissecting Threat from Sight and Text -- A Multi-View Defensive Pipeline for Multi-modal Jailbreaks

Zihao Wang , Kar Wai Fok , Vrizlynn L. L. Thing

0 citations · 33 references · arXiv

Published on arXiv · 2512.01185

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reduces multi-modal jailbreak attack success rate to below 1.70% on MiniGPT4 using MM-SafetyBench, significantly outperforming prior SOTA defenses under identical conditions.

DefenSee

Novel technique introduced


Multi-modal large language models (MLLMs), capable of processing text, images, and audio, have been widely adopted across AI applications. However, recent MLLMs that integrate images and text remain highly vulnerable to coordinated jailbreaks. Existing defenses focus primarily on the text modality and lack robust multi-modal protection; as a result, studies indicate that MLLMs are more susceptible to malicious or unsafe instructions than their text-only counterparts. In this paper, we propose DefenSee, a robust and lightweight multi-modal black-box defense technique that leverages image-variant transcription and cross-modal consistency checks, mimicking human judgment. Experiments on popular multi-modal jailbreak and benign datasets show that DefenSee consistently enhances MLLM robustness while preserving performance on benign tasks better than SOTA defenses. It reduces the attack success rate (ASR) of jailbreak attacks to below 1.70% on MiniGPT4 on the MM-SafetyBench benchmark, significantly outperforming prior methods under the same conditions.


Key Contributions

  • Image-variant transcription pipeline that extracts primary visual elements and foreground semantics to enable robust, human-like analysis of potentially adversarial images
  • Cross-modal consistency checking module that compares extracted image content against user text+image prompts to detect adversarial manipulations or malicious intent
  • Black-box, inference-time defense applicable to any MLLM without retraining, reducing jailbreak ASR to below 1.70% on MiniGPT4 with MM-SafetyBench
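The two modules above can be sketched as a single gating function: transcribe several variants of the input image, check that the transcriptions are mutually consistent, and check that the user's text prompt is grounded in what the image actually shows. This is a minimal, hypothetical sketch; the function names, the keyword-overlap heuristic, and the threshold are assumptions, not the authors' implementation (which would query the MLLM itself for transcriptions).

```python
def transcribe(image_variant):
    # Placeholder: in practice this would ask the MLLM to describe
    # the primary visual elements and foreground of the image.
    return image_variant.get("caption", "")

def make_variants(image):
    # Placeholder: the real pipeline would apply image transforms
    # (e.g., crops, blur) to produce multiple variants.
    return [image, dict(image)]

def keyword_overlap(a, b):
    # Jaccard overlap between word sets -- a stand-in for a real
    # semantic-similarity measure.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def defensee_check(image, text_prompt, threshold=0.2):
    """Return True if the (image, text) pair looks consistent and benign."""
    captions = [transcribe(v) for v in make_variants(image)]
    # 1) Variants of a benign image should transcribe consistently.
    stable = all(keyword_overlap(captions[0], c) >= threshold
                 for c in captions[1:])
    # 2) The text prompt should relate to what the image shows.
    grounded = keyword_overlap(captions[0], text_prompt) >= threshold
    return stable and grounded

benign = {"caption": "a cat sitting on a sofa"}
print(defensee_check(benign, "what breed is the cat on the sofa"))      # True
print(defensee_check(benign, "steps to synthesize a dangerous chemical"))  # False
```

Because the check only needs model outputs (transcriptions), not gradients or weights, it stays black-box and can wrap any MLLM at inference time.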

🛡️ Threat Analysis

Input Manipulation Attack

DefenSee explicitly defends against adversarial visual inputs to VLMs — including gradient-based perturbation attacks (e.g., visual adversarial triggers, Jailbreak in Pieces) that manipulate image inputs to bypass safety alignment at inference time.
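To illustrate the attack class, a gradient-based perturbation nudges the input in the direction that most increases a model's score, bypassing a filter while the input stays visually similar. The toy linear "safety scorer" below is an invented stand-in (FGSM-style sign step), not any component of the attacks or models named above.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)        # weights of a toy linear safety scorer
x = -0.05 * np.sign(w)         # an input the scorer currently rejects

def score(v):
    # Toy filter: reject when score < 0. Gradient w.r.t. the input is w.
    return float(w @ v)

print(score(x) < 0)            # the filter blocks the original input

eps = 0.2
x_adv = x + eps * np.sign(w)   # FGSM-style step up the score gradient
print(score(x_adv) > 0)        # a small perturbation now slips past
```

Defenses like DefenSee target exactly this gap: the perturbed input still *looks* like the original, so re-transcribing image variants and checking cross-modal consistency can expose the manipulation even without access to model gradients.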


Details

Domains
vision · nlp · multimodal
Model Types
vlm · llm · multimodal
Threat Tags
black_box · inference_time
Datasets
MM-SafetyBench
Applications
multimodal llms · vision-language models · ai chatbots