Can AI Keep a Secret? Contextual Integrity Verification: A Provable Security Architecture for LLMs
Published on arXiv
2508.09288
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
CIV achieves a 0% attack success rate against prompt injection attacks on frozen Llama-3-8B and Mistral-7B while preserving 93.1% token-level output similarity, with no degradation in model perplexity.
Contextual Integrity Verification (CIV)
Novel technique introduced
Large language models (LLMs) remain acutely vulnerable to prompt injection and related jailbreak attacks; heuristic guardrails (rules, filters, LLM judges) are routinely bypassed. We present Contextual Integrity Verification (CIV), an inference-time security architecture that attaches cryptographically signed provenance labels to every token and enforces a source-trust lattice inside the transformer via a pre-softmax hard attention mask (with optional FFN/residual gating). CIV provides deterministic, per-token non-interference guarantees on frozen models: lower-trust tokens cannot influence higher-trust representations. On benchmarks derived from recent taxonomies of prompt-injection vectors (Elite-Attack + SoK-246), CIV attains 0% attack success rate under the stated threat model while preserving 93.1% token-level similarity and showing no degradation in model perplexity on benign tasks; we note a latency overhead attributable to a non-optimized data path. Because CIV is a lightweight patch -- no fine-tuning required -- we demonstrate drop-in protection for Llama-3-8B and Mistral-7B. We release a reference implementation, an automated certification harness, and the Elite-Attack corpus to support reproducible research.
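The signed provenance labels described above can be illustrated with a short sketch. This is not the paper's reference implementation; the function names, the message encoding, and the four-level trust lattice are illustrative assumptions. The core idea it demonstrates is binding a token (by id and position) to a trust level with HMAC-SHA-256, so a label cannot be forged or moved to a different token without the secret key.

```python
import hmac
import hashlib

# Hypothetical trust lattice (higher = more trusted); names are illustrative,
# not taken from the paper.
SYSTEM, USER, TOOL, WEB = 3, 2, 1, 0

def sign_token(secret: bytes, token_id: int, position: int, trust: int) -> bytes:
    """Bind a token at a given position to a trust level via HMAC-SHA-256."""
    msg = f"{position}:{token_id}:{trust}".encode()
    return hmac.new(secret, msg, hashlib.sha256).digest()

def verify_token(secret: bytes, token_id: int, position: int,
                 trust: int, tag: bytes) -> bool:
    """Constant-time check that a provenance label is authentic and unmodified."""
    expected = sign_token(secret, token_id, position, trust)
    return hmac.compare_digest(expected, tag)
```

Under this scheme, an attacker who injects text cannot relabel it as higher-trust: changing the claimed trust level (or splicing a tag onto a different token) fails verification.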
Key Contributions
- CIV architecture that binds each token to a trust level via HMAC-SHA-256 and enforces a source-trust lattice through a pre-softmax hard attention mask, requiring no fine-tuning on frozen models.
- Formal non-interference proof establishing that lower-trust tokens are algebraically incapable of influencing higher-trust representations, providing deterministic rather than probabilistic security guarantees.
- Reference implementation for Llama-3-8B and Mistral-7B achieving 0% attack success rate on the Elite-Attack + SoK-246 benchmark with 93.1% output fidelity preserved, plus release of the Elite-Attack corpus.
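The pre-softmax hard attention mask behind the non-interference guarantee can be sketched as follows. This is a minimal illustration, not the released code: it assumes a causal decoder and encodes the trust-lattice rule that a query position may only attend to keys of equal or higher trust, so lower-trust tokens receive zero attention weight after the softmax and contribute nothing to higher-trust representations.

```python
NEG_INF = float("-inf")

def trust_mask(trust_levels: list[int]) -> list[list[float]]:
    """Additive pre-softmax mask for one sequence.

    Entry [i][j] is 0.0 if query i may attend to key j, and -inf otherwise.
    Attention is blocked when j is in the future (causal decoding) or when
    key j carries strictly lower trust than query i.
    """
    n = len(trust_levels)
    mask = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j > i or trust_levels[j] < trust_levels[i]:
                mask[i][j] = NEG_INF
    return mask
```

Adding this mask to the raw attention scores before the softmax forces the blocked entries to exactly zero weight, which is why the guarantee is deterministic (algebraic) rather than probabilistic: no scaling of the injected tokens' scores can overcome a hard -inf.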