benchmark 2025

Can Reasoning Models Obfuscate Reasoning? Stress-Testing Chain-of-Thought Monitorability

Artur Zolkowski 1,2, Wen Xing 3, David Lindner 1,2, Robert Mullins 1, Erik Jenner 3

6 citations · 47 references · arXiv

α

Published on arXiv

2510.19851

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CoT monitoring detects ~96% of malicious intent without obfuscation but only ~10% under high obfuscation pressure; internal CoT is harder to obfuscate than external CoT, suggesting model-specific vulnerabilities in safety oversight

CoT Obfuscation Pressure Scale

Novel technique introduced


Recent findings suggest that misaligned models may exhibit deceptive behavior, raising concerns about output trustworthiness. Chain-of-thought (CoT) is a promising tool for alignment monitoring: when models articulate their reasoning faithfully, monitors can detect and mitigate harmful behaviors before undesirable outcomes occur. However, a key uncertainty is: Can models obfuscate their CoT in order to pursue hidden adversarial objectives while evading detection? To answer this question and thus stress-test CoT monitorability, we develop a composable and quantifiable taxonomy of prompts to elicit CoT obfuscation. We evaluate both internal CoT (reasoning traces) and external CoT (prompted reasoning in outputs) using toy tasks and more realistic environments in SHADE-Arena. We show that: (i) CoT monitoring performs accurately and efficiently without obfuscation pressure. (ii) Under strong obfuscation pressure, some models successfully complete adversarial tasks while evading detection. (iii) Models do not obfuscate their internal CoT as much as their external CoT (under prompt pressure). These results suggest that while CoT provides valuable oversight in benign settings, robust deployment requires model-specific stress-testing of monitorability.


Key Contributions

  • Composable taxonomy and quantitative obfuscation pressure scale for eliciting and measuring CoT obfuscation via prompt-level cues
  • First systematic comparison of internal CoT (reasoning traces) vs external CoT (prompted output reasoning) robustness under adversarial obfuscation pressure
  • Empirical demonstration that high-pressure obfuscation reduces CoT monitor detection from ~96% to ~10%, with model-specific variation revealing differential alignment monitoring vulnerabilities

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
SHADE-Arena
Applications
ai alignment monitoringllm safety oversightreasoning model evaluation