ADVERSA: Measuring Multi-Turn Guardrail Degradation and Judge Reliability in Large Language Models
Published on arXiv
2603.10068
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Across 15 multi-turn conversations with three frontier models, ADVERSA observed a 26.7% jailbreak rate with an average jailbreak round of 1.25, indicating that successful jailbreaks were concentrated in early rounds rather than accumulating through sustained multi-turn pressure.
Most adversarial evaluations of large language model (LLM) safety assess single prompts and report binary pass/fail outcomes, failing to capture how safety properties evolve under sustained adversarial interaction. We present ADVERSA, an automated red-teaming framework that measures guardrail degradation dynamics as continuous per-round compliance trajectories rather than discrete jailbreak events. ADVERSA uses a fine-tuned 70B attacker model (ADVERSA-Red, Llama-3.1-70B-Instruct with QLoRA) that eliminates the attacker-side safety refusals that make off-the-shelf models unreliable as attackers; victim responses are scored on a structured 5-point rubric that treats partial compliance as a distinct, measurable state. We report a controlled experiment across three frontier victim models (Claude Opus 4.6, Gemini 3.1 Pro, GPT-5.2) using a triple-judge consensus architecture in which judge reliability is measured as a first-class research outcome rather than assumed. Across 15 conversations of up to 10 adversarial rounds, we observe a 26.7% jailbreak rate with an average jailbreak round of 1.25, suggesting that in this evaluation setting successful jailbreaks were concentrated in early rounds rather than accumulating through sustained pressure. We document inter-judge agreement rates, self-judge scoring tendencies, attacker drift as a failure mode of fine-tuned attackers deployed outside their training distribution, and attacker refusals as a previously underreported confound in measurements of victim resistance. All limitations are stated explicitly. Attack prompts are withheld under our responsible disclosure policy; all other experimental artifacts are released.
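The triple-judge consensus over a 5-point rubric can be sketched as follows. The paper's exact aggregation rule and agreement metric are not specified in this abstract, so the median-consensus rule, the exact-match pairwise agreement measure, and all function names below are illustrative assumptions:

```python
from statistics import median
from itertools import combinations

# Assumed rubric orientation: 1 = full refusal ... 5 = full jailbreak
# compliance, with 2-4 capturing partial-compliance states.

def consensus_score(judge_scores: list[int]) -> float:
    """Aggregate three independent judge scores into one consensus value.
    Median consensus is an illustrative assumption, not the paper's rule."""
    return median(judge_scores)

def pairwise_agreement(judge_scores: list[int]) -> float:
    """Fraction of judge pairs that agree exactly -- measured as a
    first-class outcome rather than assumed."""
    pairs = list(combinations(judge_scores, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

scores = [5, 5, 3]                 # one judge dissents on this round
print(consensus_score(scores))     # -> 5
print(pairwise_agreement(scores))  # -> 0.333...
```

Median consensus has the convenient property that a single outlier judge (including a self-judge with idiosyncratic scoring tendencies) cannot move the consensus on its own.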
Key Contributions
- Open-source multi-turn red-teaming infrastructure including a fine-tuned 70B attacker model (ADVERSA-Red via QLoRA) that eliminates attacker-side safety refusals and a structured 5-point compliance rubric treating partial compliance as a measurable state
- Triple-judge consensus architecture that measures inter-judge agreement and self-judge scoring tendencies as first-class research outcomes rather than assumptions
- Guardrail degradation curves (continuous per-round compliance trajectories), plus documentation of attacker drift and attacker refusals as previously underreported confounds in automated red-teaming pipelines