David vs. Goliath: Verifiable Agent-to-Agent Jailbreaking via Reinforcement Learning
Samuel Nellessen, Tal Kachman
Published on arXiv
arXiv:2602.02395
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
Slingshot achieves 67.0% attack success rate against Qwen2.5-32B-Instruct-AWQ (vs. 1.7% baseline) and transfers zero-shot to Gemini 2.5 Flash (56.0%) and Meta-SecAlign-8B (39.2%).
Slingshot
Novel technique introduced
The evolution of large language models into autonomous agents introduces adversarial failures that exploit legitimate tool privileges, transforming safety evaluation in tool-augmented environments from a subjective NLP task into an objective control problem. We formalize this threat model as Tag-Along Attacks: a scenario where a tool-less adversary "tags along" on the trusted privileges of a safety-aligned Operator to induce prohibited tool use through conversation alone. To validate this threat, we present Slingshot, a "cold-start" reinforcement learning framework that autonomously discovers emergent attack vectors, revealing a critical insight: in our setting, learned attacks tend to converge to short, instruction-like syntactic patterns rather than multi-turn persuasion. On held-out extreme-difficulty tasks, Slingshot achieves a 67.0% success rate against a Qwen2.5-32B-Instruct-AWQ Operator (vs. 1.7% baseline), reducing the expected attempts to first success (on solved tasks) from 52.3 to 1.3. Crucially, Slingshot transfers zero-shot to several model families, including closed-source models like Gemini 2.5 Flash (56.0% attack success rate) and defensive-fine-tuned open-source models like Meta-SecAlign-8B (39.2% attack success rate). Our work establishes Tag-Along Attacks as a first-class, verifiable threat model and shows that effective agentic attacks can be elicited from off-the-shelf open-weight models through environment interaction alone.
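The "expected attempts to first success" metric can be grounded with a simple calculation. Under the assumption that each attack attempt succeeds independently with probability p (the attack success rate), the number of attempts follows a geometric distribution with mean 1/p. Note this is only an illustrative i.i.d. approximation: the paper's reported reduction from 52.3 to 1.3 is conditioned on solved tasks, so the figures below do not match it exactly.

```python
# Illustrative sketch: expected attempts to first success under a
# geometric model, where each attempt succeeds i.i.d. with probability p.
def expected_attempts(asr: float) -> float:
    """Mean number of attempts until first success for success rate `asr`."""
    if not 0.0 < asr <= 1.0:
        raise ValueError("asr must be in (0, 1]")
    return 1.0 / asr

# Baseline ASR of 1.7% vs. Slingshot's 67.0% against the Qwen Operator:
print(round(expected_attempts(0.017), 1))  # -> 58.8 attempts on average
print(round(expected_attempts(0.670), 1))  # -> 1.5 attempts on average
```

Even under this simplified model, the gap between the baseline and Slingshot spans more than an order of magnitude in expected effort.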
Key Contributions
- Formalizes Tag-Along Attacks as a verifiable threat model for multi-agent LLM systems, where a tool-less adversary hijacks a trusted Operator's tool privileges through conversation alone
- Proposes Slingshot, a cold-start RL framework that autonomously discovers emergent agentic jailbreaks without human-crafted examples, achieving 67.0% attack success on extreme-difficulty tasks vs. 1.7% baseline
- Demonstrates that learned attacks converge to short, instruction-like syntactic patterns (not multi-turn persuasion) and transfer zero-shot to closed-source models (Gemini 2.5 Flash: 56.0%) and defensive-fine-tuned models (Meta-SecAlign-8B: 39.2%)