Defense · 2025

Policy-as-Prompt: Turning AI Governance Rules into Guardrails for AI Agents

Gauri Kholkar, Ratinder Ahuja

2 citations · 14 references · arXiv


Published on arXiv: 2509.23994

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

The system reduces prompt-injection risk, blocks out-of-scope requests, and limits toxic outputs while generating auditable rationales aligned with AI governance frameworks across multiple state-of-the-art LLMs.

Policy as Prompt (POLICY-TREE-GEN)

Novel technique introduced


As autonomous AI agents are used in regulated and safety-critical settings, organizations need effective ways to turn policy into enforceable controls. We introduce a regulatory machine learning framework that converts unstructured design artifacts (like PRDs, TDDs, and code) into verifiable runtime guardrails. Our Policy as Prompt method reads these documents and risk controls to build a source-linked policy tree. This tree is then compiled into lightweight, prompt-based classifiers for real-time runtime monitoring. The system is built to enforce least privilege and data minimization. For conformity assessment, it provides complete provenance, traceability, and audit logging, all integrated with a human-in-the-loop review process. Evaluations show our system reduces prompt-injection risk, blocks out-of-scope requests, and limits toxic outputs. It also generates auditable rationales aligned with AI governance frameworks. By treating policies as executable prompts (a policy-as-code for agents), this approach enables secure-by-design deployment, continuous compliance, and scalable AI safety and AI security assurance for regulatable ML.
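The source-linked policy tree described above can be sketched as a simple recursive data structure: each node carries one extracted constraint plus a provenance pointer back to the artifact it came from, and the tree is flattened into a system prompt for the judge classifier. This is a minimal illustration, not the paper's implementation; the `PolicyNode` class, `compile_to_prompt` function, and the example rules are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyNode:
    """One rule in a source-linked policy tree (illustrative sketch)."""
    rule: str            # natural-language constraint extracted from an artifact
    source: str          # provenance pointer back to the PRD/TDD/code location
    children: list["PolicyNode"] = field(default_factory=list)

def compile_to_prompt(node: PolicyNode, depth: int = 0) -> str:
    """Flatten the tree into an indented rule list for a judge prompt,
    keeping the source citation next to each rule for auditability."""
    indent = "  " * depth
    lines = [f"{indent}- {node.rule} [source: {node.source}]"]
    for child in node.children:
        lines.append(compile_to_prompt(child, depth + 1))
    return "\n".join(lines)

# Example: a tiny tree for an HR chatbot agent (invented rules and sources)
root = PolicyNode(
    rule="Agent may only answer HR benefits questions",
    source="PRD §2.1",
    children=[
        PolicyNode("Never reveal another employee's salary", "TDD §4.3"),
        PolicyNode("Refuse requests to execute code or browse the web", "PRD §2.4"),
    ],
)
policy_prompt = compile_to_prompt(root)
```

Keeping the source pointer in the compiled prompt is what makes the runtime verdicts traceable back to specific design artifacts during conformity assessment.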


Key Contributions

  • Policy-as-Prompt pipeline that automatically extracts security constraints from unstructured development artifacts (PRDs, TDDs, code) and builds a source-linked policy tree
  • Compilation of verified policy trees into lightweight prompt-based classifiers that act as real-time 'judge' LLMs enforcing least privilege at agent input/output boundaries
  • End-to-end audit framework providing provenance, traceability, and human-in-the-loop review aligned with EU AI Act, NIST AI RMF, and ISO/IEC 42001
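A prompt-based 'judge' classifier at an input/output boundary can be sketched as a thin wrapper: the compiled policy and the message under evaluation are interpolated into a judge prompt, and the model's verdict doubles as the auditable rationale. The template, the `guardrail` function, and the `call_llm` stub below are all assumptions for illustration; a real deployment would replace the stub with a hosted-model call.

```python
# Minimal sketch of a prompt-based guardrail check at an agent's I/O boundary.

JUDGE_TEMPLATE = """You are a policy compliance judge.
Policy:
{policy}

Message to evaluate:
{message}

Answer ALLOW or BLOCK, then a one-line rationale citing the violated rule."""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion call: a toy heuristic
    # that blocks the request if the message section mentions salary.
    message = prompt.split("Message to evaluate:")[1]
    if "salary" in message.lower():
        return "BLOCK: violates 'Never reveal another employee's salary'"
    return "ALLOW: within scope"

def guardrail(policy: str, message: str) -> tuple[bool, str]:
    """Return (allowed, auditable rationale) for one boundary check."""
    verdict = call_llm(JUDGE_TEMPLATE.format(policy=policy, message=message))
    return verdict.startswith("ALLOW"), verdict

allowed, rationale = guardrail(
    policy="- Never reveal another employee's salary [source: TDD §4.3]",
    message="What is Bob's salary?",
)
```

Because the same check runs on both agent inputs and outputs, the wrapper can block out-of-scope or injected requests before the agent acts and filter non-compliant responses before they leave the system, logging the rationale each time.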

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, digital
Applications
ai agents, llm-based applications, hr chatbots, autonomous agents in regulated settings