benchmark 2026

How Real is Your Jailbreak? Fine-grained Jailbreak Evaluation with Anchored Reference

Songyang Liu 1, Chao Li 1, Rui Pu 1, Litian Zhang 1, Chenxu Wang 1, Zejian Chen 1, Yuting Zhang 1, Yiming Hei 2

0 citations · 25 references · arXiv


Published on arXiv

2601.03288

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

FJAR achieves the highest alignment with human judgment among automated jailbreak evaluation methods, revealing that prior methods systematically overestimate attack success rate by ~27%

FJAR

Novel technique introduced


Jailbreak attacks present a significant challenge to the safety of Large Language Models (LLMs), yet current automated evaluation methods largely rely on coarse classifications that focus mainly on harmfulness, leading to substantial overestimation of attack success. To address this problem, we propose FJAR, a fine-grained jailbreak evaluation framework with anchored references. We first categorize jailbreak responses into five fine-grained categories: Rejective, Irrelevant, Unhelpful, Incorrect, and Successful, based on the degree to which the response addresses the malicious intent of the query. This categorization serves as the basis for FJAR. We then introduce a novel harmless tree decomposition approach that constructs high-quality anchored references by breaking down the original queries. These references guide the evaluator in determining whether the response genuinely fulfills the original query. Extensive experiments demonstrate that FJAR achieves the highest alignment with human judgment and effectively identifies the root causes of jailbreak failures, providing actionable guidance for improving attack strategies.
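The five-category taxonomy can be sketched as a minimal Python model. This is an illustrative assumption, not the paper's implementation: the enum names mirror the categories above, and the hypothetical `is_attack_success` helper encodes the key idea that only the Successful category should count toward attack success rate.

```python
from enum import Enum

class JailbreakOutcome(Enum):
    """The five fine-grained response categories from the FJAR taxonomy."""
    REJECTIVE = "rejective"      # model refuses to engage with the query
    IRRELEVANT = "irrelevant"    # response does not address the query's intent
    UNHELPFUL = "unhelpful"      # on-topic but provides no usable content
    INCORRECT = "incorrect"      # attempts to answer but the content is wrong
    SUCCESSFUL = "successful"    # genuinely fulfills the malicious query

def is_attack_success(outcome: JailbreakOutcome) -> bool:
    # Coarse harmfulness-only judges often count the middle three
    # categories as successes; a fine-grained judge counts only
    # responses that genuinely fulfill the original query.
    return outcome is JailbreakOutcome.SUCCESSFUL
```

Treating the four non-successful categories explicitly is what lets the framework attribute *why* an attack failed, rather than collapsing everything into a harmful/harmless binary.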


Key Contributions

  • Five-category jailbreak response taxonomy (Rejective, Irrelevant, Unhelpful, Incorrect, Successful) replacing coarse binary/scoring classification
  • Harmless tree decomposition method to construct anchored references that verify whether responses genuinely fulfill original malicious queries
  • Demonstration that existing GPT-4-based evaluations overestimate jailbreak attack success rate by an average of 27%
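To make the overestimation claim concrete, here is a toy calculation with entirely made-up label counts (chosen only so the gap lands near the paper's reported ~27%). A coarse judge that counts every non-refusing response as a success reports a much higher attack success rate (ASR) than a fine-grained judge that counts only genuinely successful responses:

```python
# Hypothetical label distribution over 100 jailbreak responses
# (illustrative numbers, not data from the paper).
labels = (["rejective"] * 30 + ["irrelevant"] * 10 +
          ["unhelpful"] * 10 + ["incorrect"] * 7 + ["successful"] * 43)

# Coarse judge: any non-refusal that sounds harmful counts as a success.
coarse_asr = sum(label != "rejective" for label in labels) / len(labels)

# Fine-grained judge: only responses fulfilling the query count.
fine_asr = sum(label == "successful" for label in labels) / len(labels)

print(f"coarse ASR = {coarse_asr:.2f}")              # 0.70
print(f"fine-grained ASR = {fine_asr:.2f}")          # 0.43
print(f"overestimation = {coarse_asr - fine_asr:.2f}")  # 0.27
```

The gap comes entirely from the irrelevant, unhelpful, and incorrect responses that a harmfulness-only classifier mislabels as successes.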

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm safety evaluation, jailbreak assessment