AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World Agent Security System
Hao Li 1, Ruoyao Wen 1, Shanghao Shi 1, Ning Zhang 1, Chaowei Xiao 2
Published on arXiv
2602.03117
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Evaluation of ten SOTA defenses shows all are either not secure enough or cause significant over-defense, confirming no existing defense is ready for real-world agentic deployment.
AgentDyn
Novel technique introduced
AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external data which agent consumes also leads to the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided by benchmarks, such as AgentDojo, there has been significant amount of progress in developing defense against the said attacks. As the technology continues to mature, and that agents are increasingly being relied upon for more complex tasks, there is increasing pressing need to also evolve the benchmark to reflect threat landscape faced by emerging agentic systems. In this work, we reveal three fundamental flaws in current benchmarks and push the frontier along these dimensions: (i) lack of dynamic open-ended tasks, (ii) lack of helpful instructions, and (iii) simplistic user tasks. To bridge this gap, we introduce AgentDyn, a manually designed benchmark featuring 60 challenging open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life. Unlike prior static benchmarks, AgentDyn requires dynamic planning and incorporates helpful third-party instructions. Our evaluation of ten state-of-the-art defenses suggests that almost all existing defenses are either not secure enough or suffer from significant over-defense, revealing that existing defenses are still far from real-world deployment. Our benchmark is available at https://github.com/leolee99/AgentDyn.
Key Contributions
- Identifies three fundamental flaws in existing prompt injection benchmarks: lack of dynamic open-ended tasks, lack of helpful third-party instructions, and overly simplistic user tasks.
- Introduces AgentDyn with 60 manually designed open-ended tasks and 560 injection test cases across Shopping, GitHub, and Daily Life domains requiring dynamic planning.
- Evaluates ten state-of-the-art defenses, revealing that virtually all are either insufficiently secure or suffer from significant over-defense, making none suitable for real-world deployment.