Align to Misalign: Automatic LLM Jailbreak with Meta-Optimized LLM Judges
Hamin Koo 1, Minseon Kim 2, Jaehyung Kim 1
Published on arXiv (arXiv:2511.01375)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves 88.0% ASR on Claude-3.5-Haiku and 100.0% ASR on Claude-4-Sonnet, outperforming existing optimization-based jailbreak baselines by substantial margins.
AMIS (Align to MISalign)
Novel technique introduced
Identifying the vulnerabilities of large language models (LLMs) is crucial for improving their safety by addressing inherent weaknesses. Jailbreaks, in which adversaries bypass safeguards with crafted input prompts, play a central role in red-teaming by probing LLMs to elicit unintended or unsafe behaviors. Recent optimization-based jailbreak approaches iteratively refine attack prompts by leveraging LLMs. However, they often rely heavily on either binary attack success rate (ASR) signals, which are sparse, or manually crafted scoring templates, which introduce human bias and uncertainty into the scoring outcomes. To address these limitations, we introduce AMIS (Align to MISalign), a meta-optimization framework that jointly evolves jailbreak prompts and scoring templates through a bi-level structure. In the inner loop, prompts are refined with fine-grained, dense feedback from a fixed scoring template. In the outer loop, the template is optimized using an ASR alignment score, gradually evolving to better reflect true attack outcomes across queries. This co-optimization yields progressively stronger jailbreak prompts and more calibrated scoring signals. Evaluations on AdvBench and JBB-Behaviors demonstrate that AMIS achieves state-of-the-art performance, including 88.0% ASR on Claude-3.5-Haiku and 100.0% ASR on Claude-4-Sonnet, outperforming existing baselines by substantial margins.
Key Contributions
- Bi-level meta-optimization (AMIS) that jointly evolves jailbreak prompts in the inner loop and scoring templates in the outer loop, eliminating reliance on sparse binary ASR signals or manually crafted templates
- ASR alignment score that calibrates LLM judge feedback to progressively better reflect true attack outcomes across queries
- State-of-the-art jailbreak performance achieving 88.0% ASR on Claude-3.5-Haiku and 100.0% ASR on Claude-4-Sonnet on AdvBench and JBB-Behaviors
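The bi-level structure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: `refine_prompt`, `judge_score`, `attack_succeeded`, and `asr_alignment` are hypothetical stubs standing in for calls to an attacker LLM, a target LLM, and an LLM judge, and the alignment objective is simplified to an agreement rate between thresholded judge scores and binary attack outcomes.

```python
import random

def refine_prompt(prompt, score):
    """Inner-loop step: an attacker LLM would rewrite the jailbreak
    prompt guided by the judge's dense score (stubbed here)."""
    return prompt + "!"

def judge_score(template, prompt):
    """An LLM judge would score the target's response to `prompt`
    under the current scoring template (stubbed with a random score)."""
    return random.uniform(0, 10)

def attack_succeeded(prompt):
    """Sparse binary ASR signal for a single query (stubbed)."""
    return random.random() < 0.5

def asr_alignment(template, prompts):
    """Outer-loop objective (simplified): how often the template's
    thresholded dense scores agree with true binary attack outcomes."""
    agree = sum(
        (judge_score(template, p) > 5.0) == attack_succeeded(p)
        for p in prompts
    )
    return agree / len(prompts)

def amis(queries, templates, inner_steps=3, outer_steps=2):
    """Bi-level meta-optimization sketch: the inner loop refines one
    jailbreak prompt per query under a fixed scoring template; the
    outer loop keeps the template with the best ASR alignment score."""
    template = templates[0]
    prompts = list(queries)
    for _ in range(outer_steps):
        # Inner loop: refine each prompt using dense judge feedback.
        for i, p in enumerate(prompts):
            for _ in range(inner_steps):
                p = refine_prompt(p, judge_score(template, p))
            prompts[i] = p
        # Outer loop: select the template best aligned with true ASR.
        template = max(templates, key=lambda t: asr_alignment(t, prompts))
    return prompts, template
```

In the paper's framing, the inner loop exploits the dense feedback of a fixed template while the outer loop recalibrates that template against ground-truth attack outcomes, so the two optimizations reinforce each other across iterations.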