benchmark 2026

Evaluating Privilege Usage of Agents on Real-World Tools

Quan Zhang 1,2, Lianhang Fu 2, Lvsi Lian 1, Gwihwan Go 3, Yujue Wang 3, Chijin Zhou 3,1, Yu Jiang 3, Geguang Pu 1

0 citations

α

Published on arXiv

2603.28166

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

LLM agents achieve 84.80% attack success rate under sophisticated prompt injection attacks in real-world tool usage scenarios despite basic security awareness

GrantBox

Novel technique introduced


Equipping LLM agents with real-world tools can substantially improve productivity. However, granting agents autonomy over tool use also transfers the associated privileges to both the agent and the underlying LLM. Improper privilege usage may lead to serious consequences, including information leakage and infrastructure damage. While several benchmarks have been built to study agents' security, they often rely on pre-coded tools and restricted interaction patterns. Such crafted environments differ substantially from the real-world, making it hard to assess agents' security capabilities in critical privilege control and usage. Therefore, we propose GrantBox, a security evaluation sandbox for analyzing agent privilege usage. GrantBox automatically integrates real-world tools and allows LLM agents to invoke genuine privileges, enabling the evaluation of privilege usage under prompt injection attacks. Our results indicate that while LLMs exhibit basic security awareness and can block some direct attacks, they remain vulnerable to more sophisticated attacks, resulting in an average attack success rate of 84.80% in carefully crafted scenarios.


Key Contributions

  • GrantBox sandbox that automatically integrates real-world tools with genuine privilege execution for LLM agent security evaluation
  • Evaluation framework for measuring agent privilege usage under prompt injection attacks in realistic environments
  • Empirical findings showing 84.80% average attack success rate against LLM agents with sophisticated prompt injection techniques

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_time
Applications
llm agents with tool usefunction calling systemsautonomous agent security