defense 2025

Catching Contamination Before Generation: Spectral Kill Switches for Agents

Valentin Noël

0 citations · 52 references · arXiv


Published on arXiv

arXiv:2511.05804

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

HFER exhibits robust bimodality (0.52 vs 0.05) across three model families in layers 2–5, enabling near-perfect context-contamination detection with sub-millisecond latency and no additional training.

HFER Spectral Kill Switch

Novel technique introduced


Agentic language models compose multi-step reasoning chains, yet intermediate steps can be corrupted by inconsistent context, retrieval errors, or adversarial inputs, making post-hoc evaluation too late: errors propagate before they are detected. We introduce a diagnostic that requires no additional training and uses only the forward pass to emit a binary accept-or-reject signal during agent execution. The method analyzes token graphs induced by attention and computes two spectral statistics in early layers: the high-frequency energy ratio (HFER) and spectral entropy. We formalize these signals, establish their invariances, and provide finite-sample estimators with uncertainty quantification. Under a two-regime mixture assumption with a monotone likelihood ratio property, we show that a single threshold on the high-frequency energy ratio is Bayes-optimal for detecting context inconsistency. Empirically, HFER exhibits robust bimodality during context verification across multiple model families, enabling gating decisions with overhead below one millisecond on our hardware and configurations. We demonstrate integration into retrieval-augmented agent pipelines and discuss deployment as an inline safety monitor. The approach detects contamination while the model is still processing the text, before errors commit to the reasoning chain.
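The spectral statistics described above can be sketched with standard graph-signal-processing steps: treat an attention matrix as a weighted token graph, take the graph Laplacian's eigendecomposition as a graph Fourier basis, and measure how much signal energy lies in the high-frequency modes. A minimal sketch follows; the paper's exact graph construction, choice of signal, and frequency cutoff are assumptions here, not the authors' implementation.

```python
import numpy as np

def spectral_stats(attn, cutoff_frac=0.5):
    """Compute HFER and spectral entropy for one attention map.

    attn: (T, T) row-stochastic attention matrix, treated as a
    weighted token graph. cutoff_frac is the fraction of the
    spectrum counted as "high frequency" (an assumption).
    """
    # Symmetrize attention to get an undirected adjacency matrix.
    A = 0.5 * (attn + attn.T)
    deg = A.sum(axis=1)
    # Unnormalized graph Laplacian L = D - A; its eigenvalues act as
    # graph frequencies and its eigenvectors as a graph Fourier basis.
    L = np.diag(deg) - A
    eigvals, eigvecs = np.linalg.eigh(L)
    # Graph Fourier transform of a token signal x: x_hat = U^T x.
    # The token degree is used as a stand-in signal here (assumption;
    # the paper may project hidden states onto the graph instead).
    x = deg
    x_hat = eigvecs.T @ x
    energy = x_hat ** 2
    total = energy.sum()
    # High-frequency energy ratio: share of energy above the cutoff.
    hi = eigvals > np.quantile(eigvals, 1.0 - cutoff_frac)
    hfer = energy[hi].sum() / total
    # Spectral entropy of the normalized energy distribution.
    p = energy / total
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return hfer, entropy
```

Because this reuses the attention weights already computed in the forward pass, the extra cost is one small eigendecomposition per monitored layer, consistent with the sub-millisecond overhead reported.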


Key Contributions

  • Discovers a bimodal HFER (High Frequency Energy Ratio) regime (0.52 vs 0.05) in early transformer layers that distinguishes context-supported from context-contradicted statements with AUC ≈ 1.0
  • Proposes a training-free spectral kill-switch using attention-induced token graph analysis that adds sub-millisecond overhead and integrates inline into RAG agent pipelines
  • Provides theoretical guarantees: Bayes-optimal thresholding under a two-regime mixture model with monotone likelihood ratio property, with calibration requiring only 20 labeled examples
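Since the two HFER modes are widely separated (around 0.52 vs 0.05), the gating logic itself is a single comparison. A minimal sketch of the calibration and kill-switch step, assuming a simple midpoint rule between class means as the threshold estimator (the paper's Bayes-optimal threshold under the mixture model may be derived differently):

```python
import numpy as np

def calibrate_threshold(hfer_clean, hfer_contaminated):
    """Pick a single HFER threshold from a few labeled examples.

    Under the paper's two-regime mixture with a monotone likelihood
    ratio, one threshold on HFER is Bayes-optimal; the midpoint of
    the two class means is a simple stand-in estimator (assumption),
    in the spirit of the ~20-example calibration reported.
    """
    return 0.5 * (np.mean(hfer_clean) + np.mean(hfer_contaminated))

def kill_switch(hfer, threshold):
    """Binary accept/reject gate evaluated inline, mid-forward-pass.

    Contaminated contexts show high HFER, clean ones low, so the
    gate rejects when HFER exceeds the calibrated threshold.
    """
    return "reject" if hfer > threshold else "accept"

# Example calibration from a handful of labeled HFER values.
threshold = calibrate_threshold([0.04, 0.06], [0.50, 0.54])  # -> 0.285
```

In a RAG agent pipeline, `kill_switch` would run on the early-layer HFER of each retrieved passage before the model commits it to the reasoning chain.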

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, digital, grey_box
Applications
agentic ai, retrieval-augmented generation, multi-step reasoning pipelines