Neural Chameleons: Language Models Can Learn to Hide Their Thoughts from Unseen Activation Monitors
Max McGuinness 1, Alex Serrano 1,2, Luke Bailey 3, Scott Emmons 1
Published on arXiv
2512.11949
Model Poisoning
OWASP ML Top 10 — ML10
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
An LLM fine-tuned to evade activation monitors for benign concepts (HTML, chemistry) successfully zero-shot evades unseen safety monitors for deception and harmfulness across Llama, Gemma, and Qwen model families, even against post-hoc monitors trained on the frozen model.
Neural Chameleons
Novel technique introduced
Activation monitoring, which probes a model's internal states using lightweight classifiers, is an emerging tool for AI safety. However, its worst-case robustness under a misalignment threat model--where a model might learn to actively conceal its internal states--remains untested. Focusing on this threat model, we ask: could a model learn to evade previously unseen activation monitors? Our core contribution is to stress-test the learnability of this behavior. We demonstrate that finetuning can create Neural Chameleons: models capable of zero-shot evading activation monitors. Specifically, we fine-tune an LLM to evade monitors for a set of benign concepts (e.g., languages, HTML) when conditioned on a trigger of the form: "You are being probed for {concept}". We show that this learned mechanism generalizes zero-shot: by substituting {concept} with a safety-relevant term like 'deception', the model successfully evades previously unseen safety monitors. We validate this phenomenon across diverse model families (Llama, Gemma, Qwen), showing that the evasion succeeds even against monitors trained post hoc on the model's frozen weights. This evasion is highly selective, targeting only the specific concept mentioned in the trigger, and having a modest impact on model capabilities on standard benchmarks. Using Gemma-2-9b-it as a case study, a mechanistic analysis reveals this is achieved via a targeted manipulation that moves activations into a low-dimensional subspace. While stronger defenses like monitor ensembles and non-linear classifiers show greater resilience, the model retains a non-trivial evasion capability. Our work provides a proof-of-concept for this failure mode and a tool to evaluate the worst-case robustness of monitoring techniques against misalignment threat models.
Key Contributions
- Demonstrates that fine-tuning creates 'Neural Chameleons' — LLMs with trigger-activated capability to zero-shot evade previously unseen activation safety monitors, including safety-relevant ones (deception, harmfulness) trained only on benign concepts
- Shows the evasion generalizes zero-shot across unseen safety concepts and model families (Llama, Gemma, Qwen), succeeding even against post-hoc monitors trained on the frozen model organism
- Provides mechanistic analysis revealing the evasion operates by moving activations into a low-dimensional subspace, and evaluates stronger defenses (monitor ensembles, nonlinear classifiers) that show partial but incomplete resilience
🛡️ Threat Analysis
The paper's core contribution is instilling a trigger-activated hidden behavior via fine-tuning: when the trigger 'You are being probed for {concept}' appears, the model covertly manipulates its internal activations to evade probe classifiers while behaving normally otherwise. This is definitionally a neural trojan/backdoor — hidden targeted behavior that activates only with a specific trigger.