
Output Supervision Can Obfuscate the Chain of Thought

Jacob Drori , Luke Marks , Bryce Woodworth , Alex Cloud , Alexander Matt Turner

1 citation · 43 references · arXiv


Published on arXiv (arXiv:2511.11584)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Both parametric and conditional feedback spillover cause obfuscated CoTs even under output-only supervision; proposed mitigations significantly reduce obfuscation while maintaining task performance across all three RL environments.

Reward Targeting / Mind-Face separation

Novel technique introduced


OpenAI (2025) showed that training against a chain of thought (CoT) monitor can cause obfuscated CoTs, which contain bad behavior the monitor cannot detect. They proposed to keep CoTs monitorable by training only against output monitors that do not have access to CoT. We show that such training can still cause obfuscated CoTs via two mechanisms. First, when a model is trained to produce a safe-looking output, that model may generalize to making its CoTs look safe. Second, since later tokens are conditioned on earlier ones, safe-looking CoTs may increase the likelihood of safe outputs, causing safe-looking CoTs to be reinforced. We introduce two mitigations to address these two issues, which achieve a Pareto improvement in terms of monitorability and task performance compared to regular training.
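The second mechanism can be made concrete with a toy REINFORCE simulation (a sketch with illustrative probabilities, not the paper's experiments): the reward depends only on whether the output looks safe, yet because a safe-looking CoT raises the chance of a safe output, policy-gradient updates still push the model toward safe-looking CoTs.

```python
import math
import random

def conditional_spillover_demo(steps=2000, lr=0.2, seed=0):
    """Toy illustration of conditional feedback spillover.

    A one-parameter policy decides whether to emit a safe-looking CoT;
    the reward is computed from the OUTPUT alone. The conditional
    probabilities below are made up for illustration.
    """
    rng = random.Random(seed)
    theta = 0.0  # logit of producing a safe-looking CoT
    # Assumed conditioning effect: a safe-looking CoT makes a
    # safe-looking output more likely.
    p_safe_out = {True: 0.9, False: 0.5}
    for _ in range(steps):
        p_cot = 1 / (1 + math.exp(-theta))
        cot_safe = rng.random() < p_cot
        reward = 1.0 if rng.random() < p_safe_out[cot_safe] else 0.0
        # REINFORCE: reward * d/d_theta log pi(cot). The reward never
        # inspects the CoT, but it is correlated with it.
        grad = (1 - p_cot) if cot_safe else -p_cot
        theta += lr * reward * grad
    return theta
```

Running the loop, theta drifts positive: the expected update is proportional to p(1-p)(0.9 - 0.5) > 0, so safe-looking CoTs get reinforced even though the monitor never reads them.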


Key Contributions

  • Identifies 'feedback spillover' — two mechanisms (parametric and conditional) by which output-only monitor training still causes CoT obfuscation, with mathematical analysis of why it occurs
  • Demonstrates feedback spillover empirically across three RL environments (polynomial factoring, question-with-hints, multi-turn terminal interaction)
  • Proposes two mitigations — a Mind/Face dual-model architecture and a 'reward targeting' gradient modification — that achieve a Pareto improvement in CoT monitorability and task performance
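The reward-targeting mitigation can be sketched as per-token advantage masking (hypothetical helper and names, not the paper's implementation): the output monitor's advantage is zeroed on CoT tokens, so monitor gradients never reach the CoT, while the task advantage still applies everywhere.

```python
def reward_targeted_advantages(cot_mask, task_adv, monitor_adv):
    """Sketch of 'reward targeting' as advantage masking.

    cot_mask[i] == 1 marks a CoT token. CoT tokens receive only the
    task advantage; output tokens receive both the task advantage and
    the output-monitor advantage. Illustrative only.
    """
    return [task_adv + monitor_adv * (1 - m) for m in cot_mask]

# Two CoT tokens followed by two output tokens:
adv = reward_targeted_advantages([1, 1, 0, 0], task_adv=2.0, monitor_adv=3.0)
# adv == [2.0, 2.0, 5.0, 5.0]
```

In a REINFORCE-style loss, the gradient on token i is proportional to its advantage times the gradient of log pi(token_i), so under this masking the CoT is updated only by the task reward.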

🛡️ Threat Analysis


Details

Domains
nlp, reinforcement-learning
Model Types
llm, transformer, rl
Threat Tags
training_time
Applications
llm safety monitoring, chain-of-thought oversight, ai alignment