
Agent-Sentry: Bounding LLM Agents via Execution Provenance

Rohan Sequeira 1, Stavros Damianakis 1, Umar Iqbal 2, Konstantinos Psounis 1


Published on arXiv (2603.22868)

Threat Categories

  • Prompt Injection (OWASP LLM Top 10: LLM01)
  • Excessive Agency (OWASP LLM Top 10: LLM08)

Key Finding

Prevents over 90% of attacks that attempt to trigger out-of-bounds executions, while preserving up to 98% of system utility

Agent-Sentry

Novel technique introduced


Agentic computing systems, which autonomously spawn new functionalities based on natural language instructions, are becoming increasingly prevalent. While immensely capable, these systems raise serious security, privacy, and safety concerns. Fundamentally, the full set of functionalities offered by these systems, combined with their probabilistic execution flows, is not known beforehand. Given this lack of characterization, it is non-trivial to validate whether a system has successfully carried out the user's intended task or instead executed irrelevant actions, potentially as a consequence of compromise. In this paper, we propose Agent-Sentry, a framework that attempts to bound agentic systems to address this problem. Our key insight is that agentic systems are designed for specific use cases and therefore need not expose unbounded or unspecified functionalities. Once bounded, these systems become easier to scrutinize. Agent-Sentry operationalizes this insight by uncovering frequent functionalities offered by an agentic system, along with their execution traces, to construct behavioral bounds. It then learns a policy from these traces and blocks tool calls that deviate from learned behaviors or that misalign with user intent. Our evaluation shows that Agent-Sentry helps prevent over 90% of attacks that attempt to trigger out-of-bounds executions, while preserving up to 98% of system utility.


Key Contributions

  • Framework that learns behavioral bounds from execution traces to constrain LLM agent functionalities
  • Policy-based enforcement mechanism that blocks tool calls deviating from learned behaviors or user intent
  • Evaluation showing a 90%+ attack prevention rate with up to 98% utility preservation
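To make the enforcement idea above concrete, here is a minimal, hypothetical sketch of trace-based behavioral bounding. This is not the paper's implementation: the names (`PolicyGuard`, `learn`, `check`) and the choice of policy (an allow-list of tool-name transitions observed in benign execution traces) are illustrative assumptions; the actual framework also checks alignment with user intent, which this sketch omits.

```python
from collections import defaultdict


class PolicyGuard:
    """Hypothetical sketch: learn behavioral bounds from benign traces,
    then block tool calls whose transitions were never observed."""

    START = "<start>"

    def __init__(self):
        # Learned bounds: previous tool name -> set of permitted next tools.
        self.allowed = defaultdict(set)

    def learn(self, traces):
        """Record every observed tool-to-tool transition in the benign traces.

        Each trace is an ordered list of tool names from one execution.
        """
        for trace in traces:
            prev = self.START
            for tool in trace:
                self.allowed[prev].add(tool)
                prev = tool

    def check(self, prev_tool, next_tool):
        """Return True if the proposed call stays within learned bounds."""
        return next_tool in self.allowed[prev_tool or self.START]


# Example: bounds learned from two benign traces of a booking agent
# (tool names here are made up for illustration).
guard = PolicyGuard()
guard.learn([
    ["search_flights", "book_flight", "send_confirmation"],
    ["search_flights", "send_confirmation"],
])

print(guard.check("search_flights", "book_flight"))   # in-bounds -> True
print(guard.check("search_flights", "delete_files"))  # out-of-bounds -> False
```

An injected instruction that steers the agent toward an unobserved tool call (e.g. `delete_files` right after `search_flights`) falls outside the learned bounds and is blocked, while the frequent benign workflows pass, which is the trade-off the reported 90%/98% numbers quantify.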

Details

Domains: nlp
Model Types: llm
Threat Tags: inference_time
Applications: agentic ai systems, autonomous agents, llm tool use