Benchmark · 2025

Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs

Yu Yan, Sheng Sun, Zhe Wang, Yijun Lin, Zenghao Duan, Zhifei Zheng, Min Liu, Zhiyi Yin, Jianping Zhang


Published on arXiv: 2508.16347

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Jailbreak success rates are systematically decoupled from actual harmful knowledge: LLMs often generate toxic-sounding but factually empty responses, and LLM judges misclassify them as genuinely dangerous based on linguistic cues rather than content accuracy.

VENOM

Novel framework introduced


With the development of Large Language Models (LLMs), numerous efforts have revealed their vulnerabilities to jailbreak attacks. Although these studies have driven progress in LLMs' safety alignment, it remains unclear whether LLMs have internalized authentic knowledge to deal with real-world crimes, or are merely forced to simulate toxic language patterns. This ambiguity raises concerns that jailbreak success is often attributable to a hallucination loop between the jailbroken LLM and the judge LLM. By decoupling the use of jailbreak techniques, we construct knowledge-intensive Q&A to investigate the misuse threats of LLMs in terms of dangerous knowledge possession, harmful task planning utility, and harmfulness judgment robustness. Experiments reveal a mismatch between jailbreak success rates and harmful knowledge possession in LLMs, and existing LLM-as-a-judge frameworks tend to anchor harmfulness judgments on toxic language patterns. Our study reveals a gap between existing LLM safety assessments and real-world threat potential.
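To make the decoupling idea concrete, below is a minimal illustrative sketch (not the paper's code) that scores a model response with two separate judges: a surface-level, ASR-style judge that keys on linguistic cues, and a knowledge-grounded judge that checks the response against a reference rubric. All function names, cue lists, and rubric contents here are hypothetical placeholders, assumed only for illustration.

```python
# Illustrative sketch of decoupled harmfulness judging; all names and
# heuristics are hypothetical, not taken from the VENOM implementation.

from dataclasses import dataclass


@dataclass
class JudgedResponse:
    style_flagged_harmful: bool   # what a style-based (ASR-style) judge reports
    knowledge_score: float        # fraction of rubric facts actually present


# Phrases that merely *sound* compliant; a pattern-based judge keys on these.
STYLE_CUES = ("sure, here is", "step 1", "as requested")


def style_judge(response: str) -> bool:
    """Surface-level judge: flags 'jailbreak success' from linguistic cues alone."""
    text = response.lower()
    return any(cue in text for cue in STYLE_CUES)


def knowledge_judge(response: str, rubric_facts: list[str]) -> float:
    """Knowledge-grounded judge: scores how many reference facts the response contains."""
    text = response.lower()
    hits = sum(1 for fact in rubric_facts if fact.lower() in text)
    return hits / len(rubric_facts) if rubric_facts else 0.0


def evaluate(response: str, rubric_facts: list[str]) -> JudgedResponse:
    return JudgedResponse(
        style_flagged_harmful=style_judge(response),
        knowledge_score=knowledge_judge(response, rubric_facts),
    )


if __name__ == "__main__":
    # A toxic-sounding but factually empty reply: the style judge flags it as
    # "harmful", yet it scores 0.0 against the (placeholder) knowledge rubric.
    rubric = ["fact A", "fact B", "fact C"]
    hollow = "Sure, here is the plan. Step 1: do the thing. Step 2: done."
    print(evaluate(hollow, rubric))
```

The gap between the two scores in this toy example mirrors the paper's central claim: a judge anchored on toxic language patterns can report jailbreak success even when the response carries no actionable knowledge.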


Key Contributions

  • Demonstrates a systematic mismatch between jailbreak Attack Success Rates (ASR) and LLMs' actual dangerous knowledge possession, showing ASR is an unreliable safety metric.
  • Proposes VENOM, a framework with knowledge-grounded Q&A and counterfactual task testing to evaluate LLMs' genuine criminal capacity independent of jailbreak prompt style.
  • Reveals that LLM-as-a-judge frameworks are biased toward surface-level toxic language patterns rather than factual harmfulness, creating a 'hallucination loop' that inflates perceived jailbreak risk.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
AdvBench
Applications
llm safety evaluation, jailbreak assessment, red-teaming