
Rethinking Jailbreak Detection of Large Vision Language Models with Representational Contrastive Scoring

Peichun Hua 1, Hao Li 1, Shanghao Shi 1, Zhiyuan Yu 2, Ning Zhang 1

Published on arXiv (2512.12069) · 0 citations · 153 references

Input Manipulation Attack (OWASP ML Top 10: ML01)

Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

MCD and KCD achieve state-of-the-art jailbreak detection on a mixed-modality evaluation protocol designed to test generalization to unseen attack types, while remaining lightweight enough for practical deployment.

Representational Contrastive Scoring (RCS)

Novel technique introduced


Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both generalizable to novel threats and efficient for practical deployment. Many current strategies fall short, either targeting specific attack patterns, which limits generalization, or imposing high computational overhead. While lightweight anomaly-detection methods offer a promising direction, we find that their common one-class design tends to confuse novel benign inputs with malicious ones, leading to unreliable over-rejection. To address this, we propose Representational Contrastive Scoring (RCS), a framework built on a key insight: the most potent safety signals reside within the LVLM's own internal representations. Our approach inspects the internal geometry of these representations, learning a lightweight projection to maximally separate benign and malicious inputs in safety-critical layers. This enables a simple yet powerful contrastive score that differentiates true malicious intent from mere novelty. Our instantiations, MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection), achieve state-of-the-art performance on a challenging evaluation protocol designed to test generalization to unseen attack types. This work demonstrates that effective jailbreak detection can be achieved by applying simple, interpretable statistical methods to the appropriate internal representations, offering a practical path towards safer LVLM deployment. Our code is available on GitHub: https://github.com/sarendis56/Jailbreak_Detection_RCS.


Key Contributions

  • Proposes Representational Contrastive Scoring (RCS), a lightweight framework that mines safety-critical geometric signals from LVLM internal representations to distinguish malicious intent from mere input novelty
  • Introduces MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection) as efficient instantiations that achieve state-of-the-art jailbreak detection generalizing to unseen attack types
  • Identifies the failure mode of one-class anomaly detection approaches (over-rejection of novel benign inputs) and addresses it with a contrastive two-class scoring design
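The contrastive two-class idea in the contributions above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: it assumes per-class Gaussian fits on extracted hidden-state features and scores an input by the difference of its Mahalanobis distances to the benign and malicious classes (the function and variable names here are illustrative).

```python
import numpy as np

def fit_gaussian(feats):
    """Fit a class mean and regularized precision (inverse covariance)
    to a set of feature vectors, one row per example."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(x, mu, prec):
    """Squared Mahalanobis distance of x to a fitted class."""
    d = x - mu
    return float(d @ prec @ d)

def contrastive_score(x, benign_stats, malicious_stats):
    """Positive when x lies closer (in Mahalanobis distance) to the
    malicious class than to the benign class; a merely novel benign
    input is far from BOTH classes, so the difference stays small."""
    return mahalanobis_sq(x, *benign_stats) - mahalanobis_sq(x, *malicious_stats)

# Toy stand-ins for hidden-state features of calibration inputs.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(200, 8))
malicious = rng.normal(3.0, 1.0, size=(200, 8))
b_stats = fit_gaussian(benign)
m_stats = fit_gaussian(malicious)

# A point near the malicious cluster scores positive; a benign-like point negative.
print(contrastive_score(np.full(8, 3.0), b_stats, m_stats) > 0)
print(contrastive_score(np.zeros(8), b_stats, m_stats) < 0)
```

A one-class detector would flag any input far from the benign cluster, including novel-but-benign ones; taking the difference of two class distances, as above, is what suppresses that over-rejection failure mode.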

🛡️ Threat Analysis

Input Manipulation Attack

The paper defends against adversarial visual inputs targeting LVLMs (adversarial images, typographic attacks, steganographic encodings), a core ML01 threat; the detection framework operates at inference time against such input manipulation attacks.


Details

Domains: multimodal, vision, nlp
Model Types: vlm, llm
Threat Tags: inference_time, white_box
Datasets: JailBreakV, MM-SafetyBench
Applications: vision-language model safety, multimodal jailbreak detection, LVLM deployment