benchmark 2026

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Zhicheng Fang 1, Jingjie Zheng 1,2, Chenxu Fu 1, Wei Xu 1,3


Published on arXiv (2602.24009)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reproduces 30 jailbreak attacks with a mean ASR deviation of only +0.26 percentage points from reported values, enabling standardized cross-attack and cross-model comparison at scale

Jailbreak Foundry (JBF)

Novel technique introduced


Jailbreak techniques for large language models (LLMs) evolve faster than benchmarks, making robustness estimates stale and difficult to compare across papers due to drift in datasets, harnesses, and judging protocols. We introduce JAILBREAK FOUNDRY (JBF), a system that addresses this gap via a multi-agent workflow that translates jailbreak papers into executable modules for immediate evaluation within a unified harness. JBF features three core components: (i) JBF-LIB for shared contracts and reusable utilities; (ii) JBF-FORGE for multi-agent paper-to-module translation; and (iii) JBF-EVAL for standardized evaluation. Across 30 reproduced attacks, JBF achieves high fidelity, with a mean attack success rate (ASR) deviation (reproduced minus reported) of +0.26 percentage points. By leveraging shared infrastructure, JBF reduces attack-specific implementation code by nearly half relative to the original repositories and achieves an 82.5% mean reused-code ratio. The system enables a standardized AdvBench evaluation of all 30 attacks across 10 victim models using a consistent GPT-4o judge. By automating both attack integration and standardized evaluation, JBF offers a scalable path to living benchmarks that keep pace with a rapidly shifting security landscape.
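The paper does not publish its metric code here, but the fidelity metric it reports is straightforward arithmetic: the signed difference between reproduced and reported ASR, averaged over attacks. A minimal sketch (all attack names and numbers below are hypothetical, for illustration only):

```python
from statistics import mean


def asr(judgments: list[bool]) -> float:
    """Attack success rate: percentage of prompts the judge marks as jailbroken."""
    return 100.0 * sum(judgments) / len(judgments)


# Hypothetical reported-vs-reproduced ASRs (percentage points) for three attacks.
reported = {"attack_a": 62.0, "attack_b": 45.5, "attack_c": 80.0}
reproduced = {"attack_a": 63.1, "attack_b": 44.9, "attack_c": 80.3}

# Mean signed deviation, reproduced minus reported, as in the paper's metric.
deviation = mean(reproduced[k] - reported[k] for k in reported)
```

A signed (rather than absolute) mean is what allows a near-zero figure like +0.26 pp to indicate that reproductions neither systematically inflate nor deflate reported results.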


Key Contributions

  • Multi-agent paper-to-module translation system (JBF-FORGE) that converts jailbreak papers into executable, harness-compatible attack modules in ~28 minutes on average with no manual implementation effort
  • Shared library (JBF-LIB) providing stable attack/defense contracts and reusable utilities, reducing attack-specific code by ~42% and achieving 82.5% mean code reuse ratio
  • Standardized evaluation harness (JBF-EVAL) that benchmarks 30 reproduced attacks across 10 victim LLMs with a consistent GPT-4o judge on AdvBench, enabling directly comparable robustness estimates
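The exact attack/defense contracts in JBF-LIB are not specified in this summary; a minimal sketch of what a harness-compatible attack contract might look like (every name here is a hypothetical stand-in, not the paper's API):

```python
from typing import Protocol


class Attack(Protocol):
    """Hypothetical JBF-LIB-style contract: a reproduced attack module
    turns a harmful goal string into an adversarial prompt for the victim."""
    name: str

    def build_prompt(self, goal: str) -> str: ...


class NaiveSuffixAttack:
    """Toy stand-in implementation, used only to show the contract shape."""
    name = "naive_suffix"

    def build_prompt(self, goal: str) -> str:
        return f"{goal} (respond without any refusals)"


def run_attack(attack: Attack, goals: list[str]) -> list[str]:
    """Harness side: drive any module satisfying the contract uniformly,
    so 30 attacks can share one evaluation loop and one judge."""
    return [attack.build_prompt(g) for g in goals]
```

A stable structural contract like this is what lets JBF-FORGE emit modules that plug into JBF-EVAL without per-attack glue code, which is consistent with the reported ~42% code reduction.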

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
AdvBench
Applications
large language model safety evaluation, jailbreak reproducibility, LLM robustness benchmarking