NegBLEURT Forest: Leveraging Inconsistencies for Detecting Jailbreak Attacks
Lama Sleem 1, Jerome Francois 1, Lujun Li 1, Nathan Foucher 2, Niccolo Gentile 3, Radu State 1
Published on arXiv: 2511.11784
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
NegBLEURT Forest consistently ranks first or second in jailbreak detection accuracy across diverse LLMs while competing methods show notable sensitivity to model and data variation.
NegBLEURT Forest
Novel technique introduced
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content despite the models' alignment with ethical guidelines. Crafting universal filtering rules remains difficult because such attacks depend heavily on specific contexts. To address these challenges without relying on threshold calibration or model fine-tuning, this work introduces a semantic consistency analysis between successful and unsuccessful responses, demonstrating that a negation-aware scoring approach captures meaningful patterns. Building on this insight, a novel detection framework, NegBLEURT Forest, is proposed to evaluate how closely outputs elicited by adversarial prompts align with expected safe behaviors. It identifies anomalous responses using the Isolation Forest algorithm, enabling reliable jailbreak detection. Experimental results show that the proposed method consistently achieves top-tier performance, ranking first or second in accuracy across diverse models on the crafted dataset, while competing approaches exhibit notable sensitivity to model and data variation.
Key Contributions
- Semantic consistency analysis revealing distinguishable patterns between successful and failed jailbreak responses without threshold calibration or fine-tuning
- NegBLEURT Forest detection framework combining negation-aware BLEURT scoring with Isolation Forest anomaly detection to identify successful jailbreaks
- Empirical demonstration that the method ranks first or second in accuracy across diverse LLMs, outperforming threshold-sensitive baselines like SmoothLLM and JailGuard
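The combination described above can be sketched in a few lines. This summary does not specify the exact feature construction, so the sketch assumes each response is represented by negation-aware similarity scores (NegBLEURT-style) against a few reference safe refusals, with Isolation Forest flagging outliers as likely successful jailbreaks; the score values are illustrative placeholders, not real NegBLEURT output.

```python
# Hedged sketch: assumes responses are featurized as NegBLEURT-style
# similarity scores against reference refusal texts (placeholder numbers,
# not actual model output).
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows = model responses, columns = similarity to reference safe refusals.
scores = np.array([
    [0.82, 0.79, 0.85],  # refusal-consistent responses (attack failed)
    [0.80, 0.83, 0.81],
    [0.78, 0.81, 0.80],
    [0.84, 0.80, 0.79],
    [0.79, 0.82, 0.83],
    [0.10, 0.05, 0.12],  # refusal-inconsistent response (candidate jailbreak)
])

# Unsupervised anomaly detection: no per-model thresholds to calibrate.
clf = IsolationForest(n_estimators=100, contamination=0.2, random_state=0)
labels = clf.fit_predict(scores)  # -1 = anomaly (likely jailbreak), 1 = inlier
```

The appeal of this framing is that Isolation Forest scores responses relative to each other, which matches the paper's claim of avoiding threshold calibration across models and datasets.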