
SafeToolBench: Pioneering a Prospective Benchmark to Evaluating Tool Utilization Safety in LLMs

Hongfei Xia 1, Hongru Wang 2, Zeming Liu 3, Qian Yu 3, Yuhang Guo 1, Haifeng Wang 4


Published on arXiv: 2509.07315

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Existing retrospective evaluation approaches fail to capture all tool-use risks, while SafeInstructTool's nine-dimension prospective framework significantly enhances LLM safety awareness across four tested models.

SafeInstructTool

Novel technique introduced


Large Language Models (LLMs) have exhibited great performance in autonomously calling various tools in external environments, leading to better problem-solving and task-automation capabilities. However, these external tools also amplify potential risks such as financial loss or privacy leakage when given ambiguous or malicious user instructions. Compared to previous studies, which mainly assess the safety awareness of LLMs after obtaining the tool execution results (i.e., retrospective evaluation), this paper focuses on prospective ways to assess the safety of LLM tool utilization, aiming to avoid irreversible harm caused by directly executing tools. To this end, we propose SafeToolBench, the first benchmark to comprehensively assess tool utilization security in a prospective manner, covering malicious user instructions and diverse practical toolsets. Additionally, we propose a novel framework, SafeInstructTool, which aims to enhance LLMs' awareness of tool utilization security from three perspectives (i.e., User Instruction, Tool Itself, and Joint Instruction-Tool), leading to nine detailed dimensions in total. We experiment with four LLMs using different methods, revealing that existing approaches fail to capture all risks in tool utilization. In contrast, our framework significantly enhances LLMs' self-awareness, enabling safer and more trustworthy tool utilization.


Key Contributions

  • SafeToolBench: first prospective benchmark with 1,200 adversarial instructions across 16 domains and 4 risk categories (Privacy Leak, Property Damage, Physical Injury, Bias & Offensiveness) to evaluate LLM tool safety before execution
  • SafeInstructTool: a nine-dimension safety framework covering User Instruction, Tool Itself, and Joint Instruction-Tool perspectives to enhance LLMs' risk awareness prior to tool execution
  • Empirical evaluation showing existing retrospective approaches miss critical tool-use risks, while SafeInstructTool significantly improves prospective safety across four LLMs
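The prospective idea above can be illustrated with a minimal sketch: assess a proposed tool call along the three perspectives (User Instruction, Tool Itself, Joint Instruction-Tool) and block it before anything irreversible executes. The dimension names, keyword lists, and scoring heuristics below are illustrative assumptions for this summary, not the paper's actual nine dimensions or scoring method.

```python
# Hypothetical prospective tool-use safety gate in the spirit of SafeInstructTool.
# All tool names, keyword lists, and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    instruction: str            # raw user request
    tool_name: str              # tool the LLM proposes to invoke
    arguments: dict = field(default_factory=dict)

RISKY_KEYWORDS = {"password", "ssn", "wire transfer", "delete all"}
IRREVERSIBLE_TOOLS = {"send_payment", "delete_account"}

def score_instruction(call: ToolCall) -> float:
    """Perspective 1 (User Instruction): does the request itself look malicious?"""
    text = call.instruction.lower()
    return 1.0 if any(k in text for k in RISKY_KEYWORDS) else 0.0

def score_tool(call: ToolCall) -> float:
    """Perspective 2 (Tool Itself): is the tool inherently high-stakes?"""
    return 1.0 if call.tool_name in IRREVERSIBLE_TOOLS else 0.0

def score_joint(call: ToolCall) -> float:
    """Perspective 3 (Joint Instruction-Tool): do instruction and arguments
    together escalate risk, e.g. a large amount passed to a payment tool?"""
    amount = call.arguments.get("amount", 0)
    return 1.0 if call.tool_name == "send_payment" and amount > 1000 else 0.0

def prospective_gate(call: ToolCall, threshold: float = 0.5) -> bool:
    """Return True if the call may proceed; refuse BEFORE execution otherwise."""
    risk = max(score_instruction(call), score_tool(call), score_joint(call))
    return risk < threshold

risky = ToolCall("Wire transfer all my savings to this account",
                 "send_payment", {"amount": 50000})
print(prospective_gate(risky))  # blocked before any irreversible execution
```

The key design point mirrors the paper's motivation: unlike retrospective evaluation, the gate never needs the tool's execution result, so harms like financial loss are avoided rather than merely detected after the fact.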

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, targeted
Datasets
SafeToolBench (1,200 adversarial instructions, 16 domains)
Applications
llm tool use, autonomous agents, function calling, api integration