Attack · 2025

Large Reasoning Models Are Autonomous Jailbreak Agents

Thilo Hagendorff 1, Erik Derner 2, Nuria Oliver 2



Published on arXiv: 2508.04039

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Four LRMs acting as autonomous adversaries achieved a 97.14% overall attack success rate across nine target models using multi-turn persuasive conversations with no further supervision after the initial system prompt.

Autonomous LRM Jailbreak Agent

Novel technique introduced


Jailbreaking -- bypassing built-in safety mechanisms in AI models -- has traditionally required complex technical procedures or specialized human expertise. In this study, we show that the persuasive capabilities of large reasoning models (LRMs) simplify and scale jailbreaking, converting it into an inexpensive activity accessible to non-experts. We evaluated the capabilities of four LRMs (DeepSeek-R1, Gemini 2.5 Flash, Grok 3 Mini, Qwen3 235B) to act as autonomous adversaries conducting multi-turn conversations with nine widely used target models. LRMs received instructions via a system prompt, before proceeding to planning and executing jailbreaks with no further supervision. We performed extensive experiments with a benchmark of harmful prompts composed of 70 items covering seven sensitive domains. This setup yielded an overall attack success rate across all model combinations of 97.14%. Our study reveals an alignment regression, in which LRMs can systematically erode the safety guardrails of other models, highlighting the urgent need to further align frontier models not only to resist jailbreak attempts, but also to prevent them from being co-opted into acting as jailbreak agents.
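The setup the abstract describes reduces to a simple unsupervised agent loop: the attacker LRM receives one system prompt, then alternates turns with the target model until a judge flags success. Below is a minimal Python sketch of that loop, assuming generic chat-completion wrappers; every name here (run_attack, ATTACKER_SYSTEM_PROMPT, the judge callable, max_turns) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of an autonomous multi-turn jailbreak loop of the kind the
# paper describes. All identifiers are illustrative stand-ins, not the
# authors' code; attacker/target wrap whatever chat-completion APIs you use.
from typing import Callable, Dict, List

Message = Dict[str, str]                    # {"role": ..., "content": ...}
ChatFn = Callable[[List[Message]], str]     # messages in, reply text out

ATTACKER_SYSTEM_PROMPT = (
    "You are a red-teaming agent. Plan and carry out a multi-turn "
    "persuasive conversation that elicits the target behavior: {goal}"
)

def run_attack(goal: str, attacker: ChatFn, target: ChatFn,
               judge: Callable[[str], bool], max_turns: int = 10) -> bool:
    """Let the attacker LRM converse with the target model unsupervised.

    The attacker sees the target's replies as 'user' turns, so it can adapt
    its persuasion strategy each round; the loop ends when the judge flags
    a successful jailbreak or the turn budget runs out.
    """
    attacker_msgs: List[Message] = [
        {"role": "system", "content": ATTACKER_SYSTEM_PROMPT.format(goal=goal)}
    ]
    target_msgs: List[Message] = []
    for _ in range(max_turns):
        attack_turn = attacker(attacker_msgs)   # attacker plans its next message
        attacker_msgs.append({"role": "assistant", "content": attack_turn})
        target_msgs.append({"role": "user", "content": attack_turn})

        reply = target(target_msgs)             # target model responds
        target_msgs.append({"role": "assistant", "content": reply})
        attacker_msgs.append({"role": "user", "content": reply})

        if judge(reply):                        # harmful content elicited?
            return True
    return False
```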


Key Contributions

  • Demonstrates that LRMs (DeepSeek-R1, Gemini 2.5 Flash, Grok 3 Mini, Qwen3 235B) can autonomously conduct multi-turn jailbreak attacks against nine widely used target models with no human supervision beyond an initial system prompt
  • Reveals an 'alignment regression' phenomenon in which LRMs systematically erode the safety guardrails of other frontier models, achieving a 97.14% overall attack success rate across all model combinations (a back-of-the-envelope breakdown follows this list)
  • Shows that LRM-driven jailbreaking turns what previously required specialized expertise into an inexpensive, scalable attack accessible to non-experts
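
For context, a quick check of the headline number: 4 attacker LRMs × 9 target models × 70 benchmark prompts gives 2,520 attack runs, and the reported 97.14% is consistent with 2,448 successes. The success count below is inferred from the percentage, not a figure stated in this summary.

```python
# Back-of-the-envelope check of the reported overall attack success rate.
attackers, targets, prompts = 4, 9, 70
total_runs = attackers * targets * prompts   # 2520 attack runs in total
successes = round(total_runs * 0.9714)       # ~2448 (inferred, not reported)
print(f"{successes}/{total_runs} = {successes / total_runs:.2%}")  # 2448/2520 = 97.14%
```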

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
Custom benchmark of 70 harmful prompts across 7 sensitive domains
Applications
llm safety systems, ai chatbots, frontier language models