defense 2026

How does information access affect LLM monitors' ability to detect sabotage?

Rauno Arike 1,2,3, Raja Mehta Moreno 4,5, Rohan Subramani 1,2,3,4,6, Shubhorup Biswas 1, Francis Rhys Ward

0 citations · arXiv

α

Published on arXiv

2601.21112

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

EaE monitoring improves sabotage detection by 16.8 percentage points over the next-best approach in BigCodeBench-Sabotage using a GPT-4.1-mini monitor.

Extract-and-Evaluate (EaE) monitoring

Novel technique introduced


Frontier language model agents can exhibit misaligned behaviors, including deception, exploiting reward hacks, and pursuing hidden objectives. To control potentially misaligned agents, we can use LLMs themselves to monitor for misbehavior. In this paper, we study how information access affects LLM monitor performance. While one might expect that monitors perform better when they have access to more of the monitored agents' reasoning and actions, we find that contemporary systems often perform better with less information, a phenomenon we call the less-is-more effect for automated oversight. We demonstrate this phenomenon, and analyze the conditions under which it occurs, in three evaluation environments where agents must conduct sabotage while evading monitors. Leveraging this effect, we introduce extract-and-evaluate (EaE) monitoring -- a new hierarchical approach where one monitor isolates relevant excerpts from the monitored agent's trajectory and another monitor scores these excerpts. In BigCodeBench-Sabotage with a GPT-4.1-mini monitor, EaE improves sabotage detection rates by 16.8 percentage points over the next-best approach. In other settings, EaE either outperforms or is competitive with baselines. In addition, we find that agents unaware of being monitored can be caught much more easily and that monitors scoring longer reasoning traces gain more from information filtering. Lastly, we conduct a cost-performance analysis and find that Gemini and Claude models with monitoring techniques that involve information filtering occupy much of the Pareto frontier.


Key Contributions

  • Discovers and systematically analyzes the 'less-is-more effect' — LLM monitors frequently perform better with reduced information access due to post-hoc justifications and context distraction
  • Introduces extract-and-evaluate (EaE) monitoring, a hierarchical approach where one monitor isolates relevant trajectory excerpts and a second scores them, improving sabotage detection by 16.8 pp over baselines
  • Cost-performance analysis showing Gemini and Claude models with information filtering dominate the Pareto frontier of monitoring approaches

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timegrey_box
Datasets
BigCodeBench-Sabotage
Applications
llm agent monitoringai control systemsautomated oversight of agentic pipelines