Adversarial Reinforcement Learning for Large Language Model Agent Safety
Zizhao Wang, Dingcheng Li, Vaishakh Keshava, Phillip Wallis, Ananth Balashankar, Peter Stone, Lukas Rutishauser
Published on arXiv (arXiv:2510.05442)
Prompt Injection
OWASP LLM Top 10 (LLM01)
Key Finding
Agents fine-tuned with ARLAS achieve significantly lower attack success rates than the base model while also improving task completion rates on BrowserGym and AgentDojo, with adversarial co-training producing more diverse and challenging injection attacks.
Novel technique introduced: ARLAS
Abstract
Large Language Model (LLM) agents can leverage tools such as Google Search to complete complex tasks. However, tool use introduces the risk of indirect prompt injection, where malicious instructions hidden in tool outputs manipulate the agent, creating security risks such as data leakage. Current defense strategies typically fine-tune LLM agents on datasets of known attacks; because these datasets are built from manually crafted attack patterns, their diversity is limited, leaving agents vulnerable to novel prompt injections. To address this limitation, we propose Adversarial Reinforcement Learning for Agent Safety (ARLAS), a novel framework that leverages adversarial reinforcement learning (RL) by formulating the problem as a two-player zero-sum game. ARLAS co-trains two LLMs: an attacker that learns to autonomously generate diverse prompt injections and an agent that learns to defend against them while completing its assigned tasks. To ensure robustness against a wide range of attacks and to prevent cyclic learning, we employ a population-based learning framework that trains the agent to defend against all previous attacker checkpoints. Evaluated on BrowserGym and AgentDojo, agents fine-tuned with ARLAS achieve a significantly lower attack success rate than the original model while also improving their task success rate. Our analysis further confirms that the adversarial process generates a diverse and challenging set of attacks, leading to a more robust agent than the base model.
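The page does not include code, but the zero-sum formulation above can be made concrete with a small sketch. The following Python is illustrative only: the reward values and the `attack_succeeded` / `task_completed` signals are assumptions, not the authors' implementation.

```python
def zero_sum_rewards(attack_succeeded: bool, task_completed: bool) -> tuple[float, float]:
    """Illustrative reward assignment for the two-player zero-sum game.

    The agent is rewarded for completing its task while resisting the
    injection; the attacker receives exactly the negative of the agent's
    reward, so the two rewards always sum to zero (hypothetical values).
    """
    if attack_succeeded:
        agent_reward = -1.0   # the injected instructions diverted the agent
    elif task_completed:
        agent_reward = 1.0    # the agent resisted the attack and finished the task
    else:
        agent_reward = 0.0    # the attack failed, but so did the task
    return agent_reward, -agent_reward  # (agent reward, attacker reward)
```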
Key Contributions
- ARLAS framework that formulates indirect prompt injection defense as a two-player zero-sum adversarial RL game, co-training an attacker LLM and a defender agent LLM
- Population-based learning scheme that trains the agent against all previous attacker checkpoints to prevent cyclic learning and improve robustness against novel injections (see the loop sketch after this list)
- Empirical validation on BrowserGym and AgentDojo showing ARLAS reduces attack success rate while simultaneously improving task success rate over base models
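To make the co-training and population-based scheme concrete, here is a minimal Python sketch of one possible training loop. Everything here is an assumption for illustration: `rollout` and `rl_update` are hypothetical placeholders for episode collection and an RL policy update, which the paper realizes with its own machinery.

```python
import copy
import random

def arlas_cotraining(agent, attacker, rollout, rl_update,
                     num_iterations: int = 10, episodes_per_iter: int = 64):
    """Illustrative ARLAS-style co-training loop (not the authors' code).

    rollout(agent, attacker) -> trajectory: runs one episode in which the
        attacker injects instructions into tool outputs seen by the agent.
    rl_update(model, trajectories): applies an RL policy update to `model`.
    Both callables are hypothetical placeholders.
    """
    # Population of frozen attacker checkpoints; starts with the initial attacker.
    population = [copy.deepcopy(attacker)]

    for _ in range(num_iterations):
        # Agent update: sample opponents from ALL previous attacker checkpoints,
        # so the agent keeps defending against old attacks (prevents cyclic learning).
        agent_trajs = [rollout(agent, random.choice(population))
                       for _ in range(episodes_per_iter)]
        rl_update(agent, agent_trajs)

        # Attacker update: train the current attacker against the updated agent
        # to discover new, more challenging injections.
        attacker_trajs = [rollout(agent, attacker)
                          for _ in range(episodes_per_iter)]
        rl_update(attacker, attacker_trajs)

        # Freeze the improved attacker into the population for future agent updates.
        population.append(copy.deepcopy(attacker))
```

Uniform sampling over the checkpoint population is one simple choice; the key property named in the contribution above is that the agent's training distribution always includes earlier attackers rather than only the latest one.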