
JADES: A Universal Framework for Jailbreak Assessment via Decompositional Scoring

Junjie Chu 1, Mingjie Li 1, Ziqing Yang 1, Ye Leng 1, Chenhao Lin 2, Chao Shen 2, Michael Backes 1, Yun Shen, Yang Zhang 1


Published on arXiv: 2508.20848

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

JADES achieves 98.5% agreement with human evaluators on jailbreak success classification, outperforming strong baselines by over 9% and revealing that existing automated evaluators substantially overestimate attack success rates.

JADES

Novel technique introduced


Accurately determining whether a jailbreak attempt has succeeded is a fundamental yet unresolved challenge. Existing evaluation methods rely on misaligned proxy indicators or naive holistic judgments. They frequently misinterpret model responses, leading to inconsistent and subjective assessments that misalign with human perception. To address this gap, we introduce JADES (Jailbreak Assessment via Decompositional Scoring), a universal jailbreak evaluation framework. Its key mechanism is to automatically decompose an input harmful question into a set of weighted sub-questions, score each sub-answer, and weight-aggregate the sub-scores into a final decision. JADES also incorporates an optional fact-checking module to strengthen the detection of hallucinations in jailbreak responses. We validate JADES on JailbreakQR, a new benchmark introduced in this work, consisting of 400 pairs of jailbreak prompts and responses, each meticulously annotated by humans. In a binary setting (success/failure), JADES achieves 98.5% agreement with human evaluators, outperforming strong baselines by over 9%. Re-evaluating five popular attacks on four LLMs reveals substantial overestimation (e.g., LAA's attack success rate on GPT-3.5-Turbo drops from 93% to 69%). Our results show that JADES delivers accurate, consistent, and interpretable evaluations, providing a reliable basis for measuring future jailbreak attacks.


Key Contributions

  • JADES decompositional scoring framework that breaks harmful questions into weighted sub-questions, scores each sub-answer separately, and aggregates into a final jailbreak success decision
  • JailbreakQR benchmark: 400 human-annotated jailbreak prompt-response pairs for evaluating jailbreak assessment methods
  • Re-evaluation of five popular jailbreak attacks on four LLMs showing substantial overestimation by prior automated evaluators (e.g., LAA's ASR on GPT-3.5-Turbo drops from 93% to 69%)
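The core aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: in JADES the sub-questions, their weights, and the per-answer scores are produced by an LLM-based judge, and the success threshold below is an assumption chosen for illustration.

```python
def aggregate_jailbreak_score(sub_scores, weights, threshold=0.5):
    """Weight-average per-sub-question scores into a final score and
    a binary jailbreak success decision.

    sub_scores: per-sub-answer harmfulness scores in [0, 1]
    weights:    relative importance of each sub-question
    threshold:  assumed cutoff for declaring the jailbreak successful
    """
    if not sub_scores or len(sub_scores) != len(weights):
        raise ValueError("sub_scores and weights must be non-empty and equal length")
    total_weight = sum(weights)
    final_score = sum(s * w for s, w in zip(sub_scores, weights)) / total_weight
    return final_score, final_score >= threshold

# Example: three hypothetical sub-questions, the first weighted most heavily
score, success = aggregate_jailbreak_score([0.9, 0.2, 0.7], [0.5, 0.3, 0.2])
```

A decision emerges only from the weighted combination, so a response that answers one minor sub-question in detail but dodges the heavily weighted ones is still scored as a failed jailbreak.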

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
JailbreakQR, AdvBench
Applications
large language models, chatbots, llm safety evaluation