defense 2026

Basic Legibility Protocols Improve Trusted Monitoring

Ashwin Sreevatsa 1, Sebastian Prasanna 1, Cody Rushing 2

0 citations · 30 references · arXiv (Cornell University)

Published on arXiv

2602.10153

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Allowing untrusted models to comment their code improves monitor detection of backdoors without sacrificing task performance, because backdoored code is systematically harder to justify with plausible explanations than honest code.

Legibility Protocols

Novel technique introduced


The AI Control research agenda aims to develop control protocols: safety techniques that prevent untrusted AI systems from taking harmful actions during deployment. Because human oversight is expensive, one approach is trusted monitoring, where weaker, trusted models oversee stronger, untrusted models – but this often fails when the untrusted model's actions exceed the monitor's comprehension. We introduce legibility protocols, which encourage the untrusted model to take actions that are easier for a monitor to evaluate. We perform control evaluations in the APPS coding setting, where an adversarial agent attempts to write backdoored code without detection. We study legibility protocols that allow the untrusted model to thoroughly document its code with comments – in contrast to prior work, which removed comments to prevent deceptive ones. We find that: (i) commenting protocols improve safety without sacrificing task performance relative to comment-removal baselines; (ii) commenting disproportionately benefits honest code, which typically has a natural explanation that resolves monitor suspicion, whereas backdoored code frequently lacks an easy justification; (iii) gains from commenting increase with monitor strength, as stronger monitors better distinguish genuine justifications from only superficially plausible ones.
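The abstract describes trusted monitoring with a commenting protocol: the untrusted model writes commented code, and a weaker trusted monitor scores how suspicious the code looks, with high-scoring submissions flagged for (expensive) human audit. The sketch below is a minimal, hypothetical illustration of that loop; the function names (`untrusted_solve`, `monitor_suspicion`, `trusted_monitoring`), the keyword heuristic, and the threshold value are assumptions standing in for real LLM calls, not the paper's implementation.

```python
# A minimal sketch of trusted monitoring under a commenting ("legibility")
# protocol, with stub functions in place of real model calls. In the paper's
# APPS setting, `untrusted_solve` would be a strong LLM asked to document its
# code thoroughly, and `monitor_suspicion` a weaker trusted LLM asked whether
# each comment genuinely justifies the code it annotates.

def untrusted_solve(task: str) -> str:
    """Stub untrusted model: returns commented code for the task."""
    return (
        "# Read one integer and print its double.\n"
        f"print(2 * int(input()))  # task: {task}"
    )

def monitor_suspicion(code: str) -> float:
    """Stub trusted monitor: suspicion score in [0, 10].
    A toy keyword heuristic stands in for an LLM judgment; the paper's
    finding is that backdoored code tends to score high because its
    comments cannot plausibly justify the extra logic."""
    red_flags = ("eval(", "exec(", "== 1337")
    return 9.0 if any(flag in code for flag in red_flags) else 1.0

def trusted_monitoring(tasks: list[str], threshold: float = 8.0) -> list[str]:
    """Submit low-suspicion code; flag the rest for human audit."""
    return [
        "audit" if monitor_suspicion(untrusted_solve(t)) >= threshold
        else "submit"
        for t in tasks
    ]
```

A real deployment would audit only the most suspicious fraction of submissions up to a fixed human-audit budget, rather than use a static threshold as here.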


Key Contributions

  • Introduces legibility protocols that encourage untrusted LLM agents to produce outputs easier for a trusted monitor to evaluate, as an extension of trusted monitoring in the AI Control framework
  • Demonstrates that commenting protocols outperform comment-removal baselines in the APPS coding backdoor control evaluation, improving safety without sacrificing task performance
  • Identifies asymmetric benefit mechanism: honest code naturally has justifiable explanations whereas backdoored code is harder to justify, and this advantage scales with monitor capability

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, grey_box, targeted
Datasets
APPS
Applications
code generation, ai agent monitoring, software engineering automation