
Unsafer in Many Turns: Benchmarking and Defending Multi-Turn Safety Risks in Tool-Using Agents

Xu Li 1, Simon Yu 1, Minzhou Pan 1,2, Yiyou Sun 3, Bo Li 4,2, Dawn Song 3,2, Xue Lin 1, Weiyan Shi 1

0 citations · 66 references · arXiv (Cornell University)


Published on arXiv

2602.13379

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

Multi-turn interactions increase Attack Success Rate by 16% on average across open and closed models; ToolShield reduces ASR by 30% in multi-turn settings.

ToolShield / MT-AgentRisk

Novel technique introduced


LLM-based agents are becoming increasingly capable, yet their safety lags behind. This creates a gap between what agents can do and what they should do. The gap widens as agents engage in multi-turn interactions and employ diverse tools, introducing new risks overlooked by existing benchmarks. To systematically scale safety testing into multi-turn, tool-realistic settings, we propose a principled taxonomy that transforms single-turn harmful tasks into multi-turn attack sequences. Using this taxonomy, we construct MT-AgentRisk (Multi-Turn Agent Risk Benchmark), the first benchmark to evaluate the safety of multi-turn, tool-using agents. Our experiments reveal substantial safety degradation: the Attack Success Rate (ASR) increases by 16% on average across open and closed models in multi-turn settings. To close this gap, we propose ToolShield, a training-free, tool-agnostic, self-exploration defense: when encountering a new tool, the agent autonomously generates test cases, executes them to observe downstream effects, and distills safety experiences for deployment. Experiments show that ToolShield reduces ASR by 30% on average in multi-turn interactions. Our code is available at https://github.com/CHATS-lab/ToolShield.
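The core transformation the abstract describes — turning a single-turn harmful task into a multi-turn attack sequence — can be illustrated with a minimal sketch. Everything here is hypothetical: the `AttackTurn` type, the `to_multi_turn` function, and the three-stage pretext/escalation/completion pattern are illustrative stand-ins, not the paper's actual taxonomy, which spans 12 attack categories.

```python
from dataclasses import dataclass

@dataclass
class AttackTurn:
    role: str     # who speaks the turn (here always "user")
    content: str  # the turn's message text

def to_multi_turn(harmful_task: str) -> list[AttackTurn]:
    """Hypothetical illustration of the general pattern: decompose a
    single-turn harmful task into turns that each look benign in
    isolation but jointly accomplish the original goal."""
    return [
        AttackTurn("user", "Context-setting turn establishing a benign pretext."),
        AttackTurn("user", "Incremental request moving toward the goal."),
        AttackTurn("user", f"Final turn completing the original task: {harmful_task}"),
    ]

turns = to_multi_turn("example placeholder task")
print(len(turns))  # → 3
```

The point of the multi-turn framing is that a safety filter judging each turn independently may pass all three, even though a single-turn phrasing of the same task would be refused.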


Key Contributions

  • A principled taxonomy that transforms single-turn harmful tasks into multi-turn attack sequences, covering 12 attack categories for tool-using agents.
  • MT-AgentRisk, the first benchmark for evaluating multi-turn tool-using LLM agent safety, revealing a 16% average ASR increase in multi-turn vs. single-turn settings.
  • ToolShield, a training-free, tool-agnostic self-exploration defense where agents autonomously generate test cases and distill safety experiences before deployment, reducing ASR by 30% on average.
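ToolShield's self-exploration phase, as the contributions describe it, is a generate → execute → distill loop run before deployment. The sketch below is a hypothetical minimal rendering of that loop: the function name `explore_tool` and the toy stubs (`gen`, `run`, `summ`) standing in for the agent's LLM calls and sandboxed tool execution are assumptions, not the repository's API.

```python
from typing import Callable

def explore_tool(tool_name: str,
                 generate_cases: Callable[[str], list[str]],
                 execute: Callable[[str, str], str],
                 distill: Callable[[list[tuple[str, str]]], list[str]]) -> list[str]:
    """Self-exploration sketch: probe a new tool with generated test
    cases, record each case's observed downstream effect, and distill
    the (case, effect) pairs into reusable safety experiences."""
    cases = generate_cases(tool_name)
    observations = [(case, execute(tool_name, case)) for case in cases]
    return distill(observations)

# Toy stubs: in the real system these would be LLM-driven and sandboxed.
gen = lambda tool: [f"{tool}: delete all files", f"{tool}: read public doc"]
run = lambda tool, case: "harmful side effect" if "delete" in case else "benign"
summ = lambda obs: [f"AVOID: {c}" for c, o in obs if o == "harmful side effect"]

experiences = explore_tool("file_manager", gen, run, summ)
print(experiences)  # → ['AVOID: file_manager: delete all files']
```

Because the loop only needs the tool's name and an execution interface, the defense stays training-free and tool-agnostic: no gradient updates, and no per-tool hand-written rules.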

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
MT-AgentRisk
Applications
llm-based agents, tool-using ai assistants, agentic ai systems