Benchmark · 2025

Automating Deception: Scalable Multi-Turn LLM Jailbreaks

Adarsh Kumarappan, Ananya Mujoo

2 citations · 20 references · arXiv


Published on arXiv: 2511.19517

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Conversational history increases GPT-family Attack Success Rates by up to 32 percentage points, while Gemini 2.5 Flash remains nearly immune, revealing critical divergences in how safety architectures handle narrative context.

FITD Multi-Turn Jailbreak Pipeline

Novel technique introduced


Multi-turn conversational attacks pose a persistent threat to Large Language Models (LLMs). These attacks bypass safety alignments by exploiting psychological principles such as Foot-in-the-Door (FITD), in which a small initial request paves the way for a more significant one. Progress in defending against these attacks is hindered by a reliance on manual, hard-to-scale dataset creation. This paper introduces a novel, automated pipeline for generating large-scale, psychologically grounded multi-turn jailbreak datasets. We systematically operationalize FITD techniques into reproducible templates, creating a benchmark of 1,500 scenarios spanning illegal activities and offensive content. We evaluate seven models from three major LLM families under both multi-turn (with history) and single-turn (without history) conditions. Our results reveal stark differences in contextual robustness: GPT-family models are significantly vulnerable to conversational history, with Attack Success Rates (ASR) increasing by as much as 32 percentage points. In contrast, Google's Gemini 2.5 Flash exhibits exceptional resilience, proving nearly immune to these attacks, while Anthropic's Claude 3 Haiku shows strong but imperfect resistance. These findings highlight a critical divergence in how current safety architectures handle conversational context and underscore the need for defenses that resist narrative-based manipulation.
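
The with-history vs. without-history contrast at the heart of the evaluation can be pictured as a small harness. The sketch below is a minimal illustration, not the authors' code: `generate` (the model under test) and `is_jailbroken` (the harmfulness judge) are hypothetical stand-ins for whatever API and judging procedure the paper actually uses.

```python
# Minimal sketch of the paper's two evaluation conditions.
# Assumptions: `generate` and `is_jailbroken` are hypothetical callables
# injected by the caller; the paper's actual harness may differ.
from typing import Callable, Dict, List

Message = Dict[str, str]

def multi_turn_asr(
    scenarios: List[List[str]],                # each scenario: 5 escalating user turns
    generate: Callable[[List[Message]], str],  # model under test
    is_jailbroken: Callable[[str], bool],      # judge applied to the final response
) -> float:
    """ASR when the full conversational history is sent ('with history')."""
    successes = 0
    for turns in scenarios:
        history: List[Message] = []
        reply = ""
        for user_turn in turns:
            history.append({"role": "user", "content": user_turn})
            reply = generate(history)
            history.append({"role": "assistant", "content": reply})
        successes += is_jailbroken(reply)
    return successes / len(scenarios)

def single_turn_asr(
    scenarios: List[List[str]],
    generate: Callable[[List[Message]], str],
    is_jailbroken: Callable[[str], bool],
) -> float:
    """ASR when only the final, most harmful turn is sent ('without history')."""
    successes = 0
    for turns in scenarios:
        reply = generate([{"role": "user", "content": turns[-1]}])
        successes += is_jailbroken(reply)
    return successes / len(scenarios)

# The headline finding is the gap between the two conditions, e.g.:
# delta_pp = 100 * (multi_turn_asr(...) - single_turn_asr(...))
```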


Key Contributions

  • Automated, reproducible pipeline generating 1,500 psychologically grounded multi-turn jailbreak scenarios using FITD-based 5-turn escalation templates (sketched after this list)
  • Dual-track taxonomy distinguishing attack strategies for illegal activities vs. offensive content
  • Comprehensive evaluation of seven LLMs revealing that GPT-family models are significantly more vulnerable to conversational history (up to +32pp ASR) while Gemini 2.5 Flash proves nearly immune
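
The template-based operationalization in the first contribution can be pictured as slot-filling over a fixed 5-turn escalation skeleton. The sketch below is an assumption-laden illustration: the placeholder turn texts, slot names, and `instantiate` helper are invented for exposition and deliberately non-operational; only the 5-turn benign-to-target structure and the idea of scaling by crossing templates with topics come from the paper.

```python
# Hypothetical FITD escalation skeleton: each turn is a structural
# placeholder, not the paper's actual prompt wording.
FITD_TEMPLATE = [
    "<benign context-setting request about {topic}>",
    "<slightly more specific follow-up about {topic}>",
    "<request reframed inside an established narrative about {topic}>",
    "<escalated request leaning on the narrative so far>",
    "<final target request about {topic}>",
]

def instantiate(template: list[str], topic: str) -> list[str]:
    """Fill one reusable escalation template to produce one 5-turn scenario."""
    return [turn.format(topic=topic) for turn in template]

# Crossing a small set of templates with many topics yields a large,
# reproducible scenario set, mirroring how the benchmark reaches 1,500 scenarios.
scenarios = [instantiate(FITD_TEMPLATE, t) for t in ["<topic 1>", "<topic 2>"]]
```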

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time
Datasets
XGuard-Train · SafeDialBench · HarmBench · JailbreakBench
Applications
llm safety alignment · conversational ai