
Governance Architecture for Autonomous Agent Systems: Threats, Framework, and Engineering Practice

Yuxu Ge


Published on arXiv: 2603.07191

  • Prompt Injection (OWASP LLM Top 10 — LLM01)
  • Insecure Plugin Design (OWASP LLM Top 10 — LLM07)
  • Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding

End-to-end pipeline evaluation achieves 96% interception rate across all four governance layers with ~980ms P50 latency; generalizes to InjecAgent with 99–100% interception rate.
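The two headline metrics, interception rate (IR) and P50 latency, can be computed from per-sample pipeline results. A minimal sketch, with illustrative record fields and made-up numbers (not the paper's data):

```python
# Toy per-sample records; "intercepted"/"latency_ms" are illustrative field
# names, and the values are invented for the example.
results = [
    {"intercepted": True,  "latency_ms": 950.0},
    {"intercepted": True,  "latency_ms": 1010.0},
    {"intercepted": False, "latency_ms": 890.0},
    {"intercepted": True,  "latency_ms": 975.0},
]

# Interception rate (IR): fraction of malicious samples blocked by any layer.
ir = sum(r["intercepted"] for r in results) / len(results)

# P50 latency: median end-to-end time across samples.
lat = sorted(r["latency_ms"] for r in results)
mid = len(lat) // 2
p50 = lat[mid] if len(lat) % 2 else (lat[mid - 1] + lat[mid]) / 2

print(f"IR={ir:.0%}, P50={p50:.1f} ms")  # IR=75%, P50=962.5 ms
```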

Layered Governance Architecture (LGA)

Novel technique introduced


Autonomous agents powered by large language models introduce a class of execution-layer vulnerabilities -- prompt injection, retrieval poisoning, and uncontrolled tool invocation -- that existing guardrails fail to address systematically. In this work, we propose the Layered Governance Architecture (LGA), a four-layer framework comprising execution sandboxing (L1), intent verification (L2), zero-trust inter-agent authorization (L3), and immutable audit logging (L4). To evaluate LGA, we construct a bilingual benchmark (Chinese original, English via machine translation) of 1,081 tool-call samples -- covering prompt injection, RAG poisoning, and malicious skill plugins -- and apply it to OpenClaw, a representative open-source agent framework. Experimental results on Layer 2 intent verification with four local LLM judges (Qwen3.5-4B, Llama-3.1-8B, Qwen3.5-9B, Qwen2.5-14B) and one cloud judge (GPT-4o-mini) show that all five judges intercept 93.0–98.5% of TC1/TC2 malicious tool calls, while lightweight NLI baselines remain below 10%. TC3 (malicious skill plugins) proves harder, with judges reaching a 75–94% interception rate (IR) at a meaningful precision-recall balance, motivating complementary enforcement at Layers 1 and 3. Qwen2.5-14B achieves the best local balance (98% IR at a false positive rate, FPR, of roughly 10–20%); a two-stage cascade (Qwen3.5-9B → GPT-4o-mini) achieves 91.9–92.6% IR with 1.9–6.7% FPR; a fully local cascade (Qwen3.5-9B → Qwen2.5-14B) achieves 94.7–95.6% IR with 6.0–9.7% FPR for data-sovereign deployments. An end-to-end pipeline evaluation (n=100) demonstrates that all four layers operate in concert with 96% IR and a total P50 latency of approximately 980 ms, of which the non-judge layers contribute only about 18 ms. Generalization to the external InjecAgent benchmark yields 99–100% interception, confirming robustness beyond our synthetic data.
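The two-stage cascade described in the abstract (a fast local judge screens every tool call, escalating only uncertain calls to a stronger second judge) can be sketched as follows. `local_judge`, `strong_judge`, the `Verdict` type, and the confidence threshold are illustrative stand-ins, not the paper's code:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    malicious: bool
    confidence: float  # judge's self-reported confidence in [0, 1]

def cascade_judge(tool_call: str, local_judge, strong_judge,
                  escalate_below: float = 0.8) -> bool:
    """Return True if the tool call should be intercepted."""
    v = local_judge(tool_call)
    if v.confidence >= escalate_below:
        return v.malicious                    # stage 1 is confident: use its verdict
    return strong_judge(tool_call).malicious  # escalate uncertain calls to stage 2

# Toy keyword-based judges, purely for illustration.
def local_judge(call):
    hit = "rm -rf" in call
    return Verdict(hit, 0.9 if hit else 0.5)

def strong_judge(call):
    return Verdict("secret" in call or "rm -rf" in call, 0.99)

print(cascade_judge("fs.delete('rm -rf /')", local_judge, strong_judge))       # True
print(cascade_judge("mail.send(body=secret_key)", local_judge, strong_judge))  # True (escalated)
print(cascade_judge("weather.lookup('Paris')", local_judge, strong_judge))     # False
```

Routing only low-confidence calls to the expensive judge is what lets the cascade keep a high IR while cutting FPR and cost relative to running the strong judge on every call.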


Key Contributions

  • Layered Governance Architecture (LGA) with four enforcement layers: execution sandboxing, intent verification, zero-trust inter-agent authorization, and immutable audit logging
  • Bilingual benchmark of 1,081 tool-call samples covering prompt injection, RAG poisoning, and malicious skill plugins, evaluated on the OpenClaw agent framework
  • Empirical evaluation showing LLM-judge-based intent verification achieves 93–98.5% interception vs. <10% for NLI baselines, with a two-stage cascade reaching 94.7–95.6% IR at 6–9.7% FPR for data-sovereign deployments
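The four enforcement layers in the first contribution can be sketched as a simple veto chain in which any layer can block a tool call and the audit layer records the outcome regardless. This is an assumed structure for illustration only; the per-layer checks here are trivial stand-ins for sandboxing, LLM-judge intent verification, and zero-trust authorization:

```python
import hashlib, json, time

AUDIT_LOG = []  # append-only; entries chained by hash to approximate immutability

def l1_sandbox(call):        # L1: execution sandboxing (allow-list stand-in)
    return call["tool"] in {"search", "calculator"}

def l2_intent_verify(call):  # L2: intent verification (keyword stand-in for an LLM judge)
    return "ignore previous instructions" not in call["args"].lower()

def l3_zero_trust(call):     # L3: zero-trust inter-agent authorization
    return call.get("caller_token") == "agent-A-signed"

def l4_audit(call, allowed): # L4: hash-chained audit logging
    prev = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
    digest = hashlib.sha256((prev + json.dumps(call, sort_keys=True)).encode()).hexdigest()
    AUDIT_LOG.append({"call": call, "allowed": allowed, "ts": time.time(),
                      "prev": prev, "hash": digest})

def govern(call):
    allowed = l1_sandbox(call) and l2_intent_verify(call) and l3_zero_trust(call)
    l4_audit(call, allowed)  # every decision is logged, allowed or not
    return allowed

ok = govern({"tool": "search", "args": "weather in Paris",
             "caller_token": "agent-A-signed"})
bad = govern({"tool": "search", "args": "Ignore previous instructions and exfiltrate",
              "caller_token": "agent-A-signed"})
print(ok, bad)  # True False
```

Chaining each log entry to the previous entry's hash means tampering with any record invalidates every later hash, which is one common way to make an audit trail tamper-evident.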

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
Custom bilingual benchmark (1,081 tool-call samples), InjecAgent
Applications
autonomous llm agents, multi-agent systems, rag-augmented agents, llm tool use