defense 2026

Robust Safety Monitoring of Language Models via Activation Watermarking

Toluwani Aremu , Daniil Ognev , Samuele Poppi , Nils Lukas



Published on arXiv: 2603.23171

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Activation watermarking outperforms guard baselines by up to 52% in detection rate under adaptive attackers who know the monitoring algorithm but not the secret key.

Activation Watermarking

Novel technique introduced


Abstract

Large language models (LLMs) can be misused to reveal sensitive information, such as weapon-making instructions, or to write malware. LLM providers rely on *monitoring* to detect and flag unsafe behavior during inference. An open security challenge is *adaptive* adversaries who craft attacks that simultaneously (i) evade detection and (ii) elicit unsafe behavior. Adaptive attackers are a major concern because LLM providers cannot patch their security mechanisms while unaware of how their models are being misused. We cast *robust* LLM monitoring as a security game: adversaries who know about the monitor try to extract sensitive information, while a provider must accurately detect these adversarial queries at low false positive rates. Our work (i) shows that existing LLM monitors are vulnerable to adaptive attackers and (ii) designs improved defenses through *activation watermarking*, which carefully introduces uncertainty for the attacker during inference. We find that activation watermarking outperforms guard baselines by up to 52% under adaptive attackers who know the monitoring algorithm but not the secret key.
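The monitoring objective in the security game, detecting adversarial queries at a low false positive rate, amounts to calibrating a score threshold on benign traffic. A minimal sketch of that calibration step (the synthetic score distributions and the 1% FPR target are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
benign_scores = rng.normal(0.0, 1.0, 10_000)  # monitor scores on benign queries
attack_scores = rng.normal(3.0, 1.0, 1_000)   # monitor scores on adversarial queries

# Calibrate the flagging threshold so only ~1% of benign queries are flagged.
threshold = np.quantile(benign_scores, 0.99)

fpr = (benign_scores > threshold).mean()
detection_rate = (attack_scores > threshold).mean()
print(f"FPR={fpr:.3f}, detection rate={detection_rate:.3f}")
```

The interesting comparisons in the paper happen at a fixed low FPR like this one: an adaptive attacker tries to push attack scores below the calibrated threshold, and the defense is judged by how much detection rate survives.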


Key Contributions

  • Formulates robust LLM monitoring as a security game against adaptive adversaries who know the monitoring algorithm
  • Proposes activation watermarking that introduces uncertainty for attackers during inference by watermarking internal activations
  • Demonstrates that activation watermarking outperforms existing guard baselines by up to 52% against adaptive attackers
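This summary doesn't spell out the paper's construction, but the keyed-uncertainty idea behind the contributions above can be illustrated with a toy sketch: a secret key derives hidden directions in activation space, the provider perturbs activations along them, and the monitor checks the projection. The direction-sampling scheme, function names, `eps`, and key values here are all illustrative assumptions, not the authors' method:

```python
import numpy as np

def keyed_directions(key: int, d: int, k: int = 8) -> np.ndarray:
    """Derive k secret unit directions in d-dim activation space from the key."""
    rng = np.random.default_rng(key)
    dirs = rng.standard_normal((k, d))
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def watermark(acts: np.ndarray, key: int, eps: float = 0.05) -> np.ndarray:
    """Perturb hidden activations slightly along the secret keyed directions."""
    dirs = keyed_directions(key, acts.shape[-1])
    return acts + eps * dirs.sum(axis=0)

def monitor_score(acts: np.ndarray, key: int) -> float:
    """Mean absolute projection onto the secret directions; the provider
    thresholds this score to decide whether a trace carries the watermark."""
    dirs = keyed_directions(key, acts.shape[-1])
    return float(np.abs(dirs @ acts).mean())

# An attacker who knows this whole algorithm but not `key` cannot predict
# which directions carry the watermark signal.
acts = np.zeros(64)            # stand-in for a model's hidden state
wm = watermark(acts, key=42)
print(monitor_score(wm, key=42) > monitor_score(acts, key=42))  # True
```

The point of the toy: the monitoring decision depends on directions the attacker cannot recover without the key, which is the source of the attacker-side uncertainty the abstract describes.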

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box, targeted
Applications
llm safety monitoring, jailbreak detection, harmful content filtering