
The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections

Milad Nasr 1, Nicholas Carlini 2, Chawin Sitawarin 3, Sander V. Schulhoff 4,5, Jamie Hayes 1, Michael Ilie 4,6, Juliette Pluto 3, Shuang Song 7, Harsh Chaudhari 1, Ilia Shumailov 3, Abhradeep Thakurta 3, Kai Yuanqing Xiao 8, Andreas Terzis 3, Florian Tramèr 3

34 citations · 4 influential · 86 references · arXiv


Published on arXiv: 2510.09023

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

All 12 evaluated LLM defenses are bypassed when attackers adaptively tune general optimization techniques to each defense's specific design, with attack success rates above 90% for most; the majority of these defenses originally reported near-zero attack success rates.


How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a static set of harmful attack strings, or against computationally weak optimization methods that were not designed with the defense in mind. We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design while spending considerable resources to optimize their objective. By systematically tuning and scaling general optimization techniques (gradient descent, reinforcement learning, random search, and human-guided exploration), we bypass 12 recent defenses (based on a diverse set of techniques) with attack success rates above 90% for most; importantly, the majority of defenses originally reported near-zero attack success rates. We believe that future defense work must consider stronger attacks, such as the ones we describe, in order to make reliable and convincing claims of robustness.


Key Contributions

  • Shows that 12 recent LLM jailbreak/prompt-injection defenses (most originally reporting near-zero attack success rates) are broken by adaptive attackers who tune and scale existing optimization methods to each defense's design, reaching attack success rates above 90% for most.
  • Provides a systematic adaptive attack evaluation framework combining gradient descent, reinforcement learning, random search, and human-guided exploration.
  • Argues that defense evaluation in the LLM era has regressed relative to adversarial examples research by neglecting strong adaptive attackers, and establishes principles for credible robustness claims.
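One of the general optimization techniques the paper scales, random search, can be sketched in heavily simplified form as a black-box hill climb over an attack suffix: mutate one token, query the defended system, and keep the mutation only if the attack objective improves. Everything here is a hypothetical stand-in, in particular the `attack_score` function, which substitutes a toy objective for the real model-plus-judge pipeline:

```python
import random


def attack_score(prompt: str) -> float:
    """Stand-in for querying the defended model and scoring the response
    with a judge. Toy objective (assumption): reward occurrences of 'x'."""
    return prompt.count("x") / max(len(prompt), 1)


def random_search_attack(base_prompt: str, suffix_len: int = 20,
                         iters: int = 200, seed: int = 0) -> str:
    """Black-box random-search suffix attack: mutate one suffix character
    at a time, keeping the mutation only if the score improves."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz !?"
    suffix = [rng.choice(alphabet) for _ in range(suffix_len)]
    best = attack_score(base_prompt + "".join(suffix))
    for _ in range(iters):
        i = rng.randrange(suffix_len)
        old = suffix[i]
        suffix[i] = rng.choice(alphabet)
        score = attack_score(base_prompt + "".join(suffix))
        if score > best:
            best = score          # keep the improving mutation
        else:
            suffix[i] = old       # revert an unhelpful mutation
    return base_prompt + "".join(suffix)
```

The same loop structure applies whether the scorer is a refusal classifier, a harm judge, or a prompt-injection success check; the paper's point is that tuning and scaling such simple loops per defense is what makes them effective.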

🛡️ Threat Analysis

Input Manipulation Attack

Employs gradient-based adversarial suffix optimization (token-level perturbations, e.g., GCG-style attacks adapted per defense) to break LLM defenses — adversarial input manipulation at inference time.
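A GCG-style step can be illustrated on a toy differentiable loss: take the gradient of the loss with respect to a one-hot suffix, use it to rank candidate token substitutions at each position, and keep the single substitution that lowers the loss most. This is only a minimal sketch; the matrix `W`, `target`, and squared-error loss are toy assumptions standing in for an LLM's embeddings and the cross-entropy toward a target completion:

```python
import numpy as np


def gcg_step(onehot: np.ndarray, W: np.ndarray, target: np.ndarray,
             top_k: int = 4):
    """One greedy-coordinate-gradient step on a toy linear 'model'.
    Loss = ||W @ onehot.flatten() - target||^2. The gradient w.r.t. the
    one-hot suffix ranks candidate token swaps; the best swap is kept."""
    L, V = onehot.shape
    resid = W @ onehot.reshape(-1) - target
    loss = float(resid @ resid)
    grad = (2.0 * W.T @ resid).reshape(L, V)
    best_loss, best = loss, onehot
    for i in range(L):
        # try the top_k tokens with the most negative gradient at position i
        for tok in np.argsort(grad[i])[:top_k]:
            cand = onehot.copy()
            cand[i] = 0.0
            cand[i, tok] = 1.0
            r = W @ cand.reshape(-1) - target
            cand_loss = float(r @ r)
            if cand_loss < best_loss:
                best_loss, best = cand_loss, cand
    return best_loss, best
```

Iterating this step greedily drives the loss down; in the real attack the candidate evaluation runs full forward passes through the target model, and the paper's adaptive variants additionally tailor the loss and search schedule to each defense.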


Details

Domains
nlp
Model Types
llm
Threat Tags
white_box · black_box · inference_time · targeted
Applications
llm jailbreak defenses · prompt injection defenses