defense 2025

Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling Autonomous LLM Systems

Abhivansh Gupta

0 citations · 20 references · arXiv


Published on arXiv: 2512.17259

Excessive Agency (OWASP LLM Top 10 — LLM08)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

Shifts the evaluation paradigm from measuring misalignment propensity to measuring how quickly and reliably misalignment can be detected and remediated via verifiable attestation mechanisms.

Novel technique introduced

OPERA (Observability, Provable Execution, Red-team, Attestation)


As LLM-based agents grow more autonomous and multi-modal, ensuring they remain controllable, auditable, and faithful to deployer intent becomes critical. Prior benchmarks measured the propensity for misaligned behavior and showed that agent personalities and tool access significantly influence misalignment. Building on these insights, we propose a Verifiability-First architecture that (1) integrates run-time attestations of agent actions using cryptographic and symbolic methods, (2) embeds lightweight Audit Agents that continuously verify intent versus behavior using constrained reasoning, and (3) enforces challenge-response attestation protocols for high-risk operations. We introduce OPERA (Observability, Provable Execution, Red-team, Attestation), a benchmark suite and evaluation protocol designed to measure (i) detectability of misalignment, (ii) time to detection under stealthy strategies, and (iii) resilience of verifiability mechanisms to adversarial prompt and persona injection. Our approach shifts the evaluation focus from how likely misalignment is to how quickly and reliably misalignment can be detected and remediated.
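The run-time attestation idea in point (1) can be illustrated with a minimal sketch: each agent action is appended to a hash-chained log and keyed with a deployer-held secret, so an auditor can detect any tampering with, or omission of, logged actions. This is our own illustration of the general technique, not the paper's implementation; the names (`ActionLog`, `attest`, `verify_chain`) and the HMAC-over-hash-chain construction are assumptions.

```python
import hashlib
import hmac
import json

class ActionLog:
    """Hypothetical hash-chained, MAC'd action log (not the paper's code)."""

    def __init__(self, key: bytes):
        self.key = key
        self.entries = []                 # list of (record, tag) pairs
        self.prev_digest = b"\x00" * 32   # genesis link in the hash chain

    def attest(self, action: dict) -> None:
        """Append an action with a MAC binding it to all prior actions."""
        record = json.dumps(action, sort_keys=True).encode()  # canonical form
        tag = hmac.new(self.key, self.prev_digest + record, hashlib.sha256).digest()
        self.entries.append((record, tag))
        self.prev_digest = hashlib.sha256(record + tag).digest()

    def verify_chain(self) -> bool:
        """Re-derive every MAC; any edit, insertion, or deletion breaks the chain."""
        prev = b"\x00" * 32
        for record, tag in self.entries:
            expected = hmac.new(self.key, prev + record, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                return False
            prev = hashlib.sha256(record + tag).digest()
        return True

log = ActionLog(key=b"deployer-secret")
log.attest({"tool": "search", "args": {"q": "flight prices"}})
log.attest({"tool": "email.send", "args": {"to": "user@example.com"}})
assert log.verify_chain()

# Tampering with an already-logged action is caught on verification.
record, tag = log.entries[0]
log.entries[0] = (record.replace(b"search", b"delete"), tag)
assert not log.verify_chain()
```

An Audit Agent consuming such a log could then focus its constrained reasoning on whether the (verified) actions match deployer intent, rather than on whether the record itself is trustworthy.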


Key Contributions

  • Verifiability-First architecture integrating cryptographic and symbolic runtime attestations for LLM agent actions
  • Lightweight Audit Agents embedded in the pipeline to continuously verify intent-versus-behavior alignment
  • OPERA benchmark suite measuring misalignment detectability, time-to-detection, and resilience to adversarial prompt/persona injection
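To make the OPERA metrics concrete, here is a hedged sketch of how (i) detectability and (ii) time-to-detection might be computed from episode traces. The trace format (per-step auditor flags plus a known misalignment onset step) and the function names are our assumptions for illustration, not the paper's API.

```python
def episode_metrics(flags, onset):
    """flags: per-step auditor verdicts (True = misalignment flagged).
    onset: step index where misaligned behavior actually begins.
    Returns (detected, latency): latency is steps from onset to the first
    valid flag; flags raised before onset are treated as false positives."""
    for step, flagged in enumerate(flags):
        if flagged and step >= onset:
            return True, step - onset
    return False, None

def suite_metrics(episodes):
    """Aggregate detection rate and mean time-to-detection over episodes."""
    results = [episode_metrics(f, o) for f, o in episodes]
    latencies = [lat for ok, lat in results if ok]
    rate = len(latencies) / len(results)
    mean_ttd = sum(latencies) / len(latencies) if latencies else None
    return rate, mean_ttd

episodes = [
    ([False, False, True, True], 1),    # caught one step after onset
    ([False, False, False, False], 2),  # stealthy strategy, never caught
    ([False, True, False, True], 1),    # caught immediately at onset
]
rate, mean_ttd = suite_metrics(episodes)
# rate == 2/3; mean time-to-detection == (1 + 0) / 2 == 0.5
```

Metric (iii), resilience to adversarial prompt and persona injection, would then compare these numbers between clean episodes and episodes where the attacker also targets the verifiability mechanisms themselves.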

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm
Threat Tags: inference_time, grey_box
Datasets: OPERA benchmark suite
Applications: autonomous llm agents, multi-modal agent systems