Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks
Ruohao Guo 1, Afshin Oroojlooy 2, Roshan Sridhar 2, Miguel Ballesteros 2, Alan Ritter 1, Dan Roth 2,3
Published on arXiv
arXiv:2510.02286
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
DialTree-RPO achieves an 81.5% average attack success rate (ASR) across 12 target LLMs, outperforming prior state-of-the-art multi-turn red-teaming approaches by 44.2% in ASR.
DialTree-RPO
Novel technique introduced
Despite rapid recent progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns, posing a more realistic and critical challenge. Existing approaches for discovering safety vulnerabilities either rely on manual red-teaming with human experts or employ automated methods using pre-defined templates and human-curated attack data, and most focus on single-turn attacks. However, these methods do not explore the vast space of possible multi-turn attacks and fail to consider novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs are significantly more vulnerable to multi-turn attacks than to single-turn attacks. We propose DialTree-RPO, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Extensive experiments show that our approach not only achieves more than 25.9% higher ASR across 10 target models than previous state-of-the-art approaches, but also effectively uncovers new attack strategies by learning optimal dialogue policies that maximize attack success across multiple turns.
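The sequential decision-making framing in the abstract is easiest to see as a rollout loop. Below is a minimal Python sketch of one multi-turn attack episode: the state is the goal plus dialogue history, the action is the attacker's next prompt, and a judge score serves as the reward. The `attacker_policy`, `target_llm`, and `judge` callables and the reward convention are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of multi-turn red-teaming as sequential decision-making.
# attacker_policy, target_llm, and judge are hypothetical callables;
# they are assumptions for illustration, not names from the paper.

def rollout_episode(attacker_policy, target_llm, judge, goal, max_turns=5):
    """Roll out one attack dialogue; collect (state, action, reward) tuples."""
    history = []          # alternating (attack_prompt, target_response) pairs
    trajectory = []
    for _ in range(max_turns):
        # State: the harmful goal plus the dialogue so far.
        state = {"goal": goal, "history": list(history)}
        # Action: the attacker's next prompt, sampled from the policy.
        attack_prompt = attacker_policy(state)
        response = target_llm(history, attack_prompt)
        # Reward: judge score for how close the response is to the goal
        # (assumed convention: 1.0 means a successful jailbreak).
        reward = judge(goal, response)
        trajectory.append((state, attack_prompt, reward))
        history.append((attack_prompt, response))
        if reward >= 1.0:  # attack succeeded; terminate the episode
            break
    return trajectory
```

In on-policy training, trajectories like these would feed the policy update; the same per-turn judge score can also drive the tree-search pruning described in the contributions below.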
Key Contributions
- DialTree-RPO: an on-policy RL framework with dialogue tree rollout and quality-aware pruning that formulates multi-turn red-teaming as sequential decision-making, systematically exploring diverse jailbreak strategies without manually curated data (see the tree-rollout sketch after this list)
- An adaptive masking mechanism that mitigates a format unlearning problem in multi-turn policy optimization, stabilizing RL training (a loss-masking sketch follows the list)
- Achieves 81.5% average ASR across 12 target LLMs (including the strongly aligned Claude-4-Sonnet), outperforming prior SOTA by 44.2%, with strong cross-model transferability despite using only a small 1B attacker model
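The first contribution pairs the rollout above with tree search. Here is a hedged sketch of dialogue tree rollout with quality-aware pruning, assuming a simple beam-style expansion in which the judge score decides which branches survive; the `branch` and `beam` parameters and the scoring convention are hypothetical, not taken from the paper.

```python
import heapq

# Sketch of dialogue tree rollout with quality-aware pruning, assuming
# beam-style expansion; attacker_policy, target_llm, and judge are the
# same hypothetical callables as in the episode rollout above.

def tree_rollout(attacker_policy, target_llm, judge, goal,
                 depth=5, branch=4, beam=8):
    """Expand a dialogue tree, pruning low-scoring branches at each depth."""
    frontier = [(0.0, [])]  # (judge score, dialogue history); root is empty
    for _ in range(depth):
        candidates = []
        for _, history in frontier:
            # Branch: sample several candidate attack prompts per node.
            for _ in range(branch):
                prompt = attacker_policy({"goal": goal, "history": history})
                response = target_llm(history, prompt)
                score = judge(goal, response)
                candidates.append((score, history + [(prompt, response)]))
        # Quality-aware pruning: keep only the top-scoring dialogues.
        frontier = heapq.nlargest(beam, candidates, key=lambda c: c[0])
        if frontier and frontier[0][0] >= 1.0:
            break  # a successful jailbreak was found
    return frontier
```

Pruning by judge score keeps the rollout budget focused on promising dialogue prefixes, which is what allows systematic exploration of the multi-turn attack space without hand-curated seeds.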
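For the adaptive masking contribution, the exact mechanism is not spelled out in this summary; one plausible reading is that tokens carrying the required dialogue format (role tags, turn delimiters) are excluded from the policy-gradient loss so the attacker does not unlearn them during multi-turn optimization. The sketch below is written under that assumption, with a hypothetical `masked_policy_loss` helper.

```python
import torch

def masked_policy_loss(logprobs, advantages, token_mask):
    """REINFORCE-style loss where masked tokens receive no gradient.

    This is an assumption-laden sketch of adaptive masking, not the
    paper's implementation.
      logprobs:   (T,) log-probs of generated tokens under the policy
      advantages: (T,) per-token advantage estimates
      token_mask: (T,) 1.0 for tokens to update, 0.0 for tokens excluded
                  so the required dialogue format is not unlearned
    """
    per_token = -(logprobs * advantages) * token_mask
    # Normalize by the number of unmasked tokens so masking does not
    # rescale the effective learning rate across dialogues.
    return per_token.sum() / token_mask.sum().clamp(min=1.0)
```

Normalizing by the unmasked-token count keeps gradient magnitudes comparable across dialogues with different masking ratios, which is consistent with the stated goal of stabilizing RL training.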