defense 2026

AgentWatcher: A Rule-based Prompt Injection Monitor

Yanting Wang , Wei Zou , Runpeng Geng , Jinyuan Jia

0 citations

α

Published on arXiv

2604.01194

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Effectively detects prompt injection attacks in tool-use agent benchmarks while maintaining utility without attacks

AgentWatcher

Novel technique introduced


Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt injection detection methods have the following limitations: (1) their effectiveness degrades significantly as context length increases, and (2) they lack explicit rules that define what constitutes prompt injection, causing detection decisions to be implicit, opaque, and difficult to reason about. In this work, we propose AgentWatcher to address the above two limitations. To address the first limitation, AgentWatcher attributes the LLM's output (e.g., the action of an agent) to a small set of causally influential context segments. By focusing detection on a relatively short text, AgentWatcher can be scalable to long contexts. To address the second limitation, we define a set of rules specifying what does and does not constitute a prompt injection, and use a monitor LLM to reason over these rules based on the attributed text, making the detection decisions more explainable. We conduct a comprehensive evaluation on tool-use agent benchmarks and long-context understanding datasets. The experimental results demonstrate that AgentWatcher can effectively detect prompt injection and maintain utility without attacks. The code is available at https://github.com/wang-yanting/AgentWatcher.


Key Contributions

  • Causal attribution mechanism to identify influential context segments in long-context LLM agents
  • Rule-based monitoring framework with explicit, explainable detection criteria for prompt injection
  • Scalable detection approach that maintains effectiveness across varying context lengths

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timeblack_box
Applications
tool-use agentsllm agentslong-context understanding