
Attack Selection Reduces Safety in Concentrated AI Control Settings against Trusted Monitoring

Joachim Schaeffer, Arjun Khandelwal, Tyler Tracy

0 citations · 29 references · arXiv (Cornell University)


Published on arXiv · 2602.04930

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Eliciting attack selection reasoning in LLMs reduces safety from 99% to 59% at a 0.5% auditing budget, revealing that current AI control evaluations may be significantly over-optimistic.

Attack Selection

Novel technique introduced


Future AI deployments will likely be monitored for malicious behaviour. The ability of these AIs to subvert monitors by adversarially selecting against them, termed attack selection, is particularly concerning. To study this, we let a red team create attack policies that attempt to insert attacks into code without being caught by a monitor in the concentrated BigCodeBench backdooring setting. We decompose attack selection into two problems: mapping attacks to a quality score, and mapping quality scores to submission probabilities. We frame attack selection as a classification problem and show that safety is significantly more sensitive to the false positive rate (FPR) than to the true positive rate (TPR). We find that prompting the attacker model to reason about the monitor while being cautious with attack selection reduces safety from a baseline of 99% to 59% at a 0.5% auditing budget, emphasizing that eliciting the attack selection capabilities of models is vital to avoiding overly optimistic safety scores in control evaluations.
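As a rough illustration of the two-part decomposition described above, the sketch below keeps the two mappings separate. The function names (quality_score, submission_probability, select_attacks) and the threshold and caution parameters are illustrative assumptions, not the paper's implementation; a real attack policy would do the monitor-aware reasoning inside the scoring step.

```python
import random


def quality_score(attack: str) -> float:
    """Step 1: map a candidate attack to a scalar quality score in [0, 1].

    In the paper's framing, this is where the attacker reasons about the
    monitor; this stub returns a random score so the sketch runs end to end.
    """
    return random.random()


def submission_probability(score: float, threshold: float = 0.9,
                           caution: float = 0.5) -> float:
    """Step 2: map a quality score to a probability of submitting.

    A cautious policy rarely submits: scores below the threshold are never
    submitted, and even high scores are down-weighted by `caution`.
    (Both parameters are hypothetical knobs, not values from the paper.)
    """
    return caution if score >= threshold else 0.0


def select_attacks(candidates: list[str]) -> list[str]:
    """Compose the two mappings and sample which attacks to submit."""
    return [a for a in candidates
            if random.random() < submission_probability(quality_score(a))]
```

Keeping the two mappings separate makes the cautious-selection knob explicit: tightening the threshold or lowering the caution factor trades fewer submitted attacks for a lower chance of any submission being caught.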


Key Contributions

  • Decomposes attack selection into quality scoring and submission probability mapping, framed as a classification problem to study monitor subversion
  • Shows AI control safety is significantly more sensitive to false positive rate (FPR) than true positive rate (TPR), with implications for evaluation design (see the toy model after this list)
  • Demonstrates that prompting attacker LLMs to reason about the monitor while being cautious reduces safety from 99% to 59% at only a 0.5% auditing budget
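
To make the FPR-versus-TPR sensitivity concrete, here is a minimal Monte Carlo toy model of the classification framing, not the paper's evaluation protocol. The attacker's selection rule is treated as a binary classifier over candidate attacks ("will this evade the monitor?"), a game ends at the first submitted attack, and the auditing budget is abstracted away by counting any flagged submission as caught. The base rate of evading attacks, the 100-candidate game length, and the simulate_safety helper are all assumptions chosen for illustration.

```python
import random


def simulate_safety(tpr: float, fpr: float, base_rate: float = 0.1,
                    n_games: int = 10_000) -> float:
    """Estimate safety for an attacker classifier with a given TPR/FPR.

    base_rate: fraction of candidate attacks that would actually evade the
    monitor. The attacker submits a candidate when its classifier fires;
    the game ends at the first submission (caught => safe, evaded =>
    unsafe). Games with no submission also count as safe.
    """
    safe = 0
    for _ in range(n_games):
        caught = True  # no submission => the blue team stays safe
        for _ in range(100):  # up to 100 candidate attacks per game
            evades = random.random() < base_rate
            submit = random.random() < (tpr if evades else fpr)
            if submit:
                caught = not evades  # audit budget abstracted away
                break
        safe += caught
    return safe / n_games


# Holding TPR fixed, safety collapses as the attacker's FPR falls:
for fpr in (0.2, 0.05, 0.01):
    print(f"TPR=0.5, FPR={fpr}: safety ~ {simulate_safety(0.5, fpr):.2f}")
```

In this toy model the intuition is visible in the odds that a submission is caught, roughly (1 − b)·FPR / ((1 − b)·FPR + b·TPR) for base rate b: when genuinely evading attacks are rare, small reductions in the attacker's FPR erode safety far faster than comparable gains in TPR.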

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · targeted
Datasets
BigCodeBench
Applications
ai control evaluation · code generation monitoring · llm safety oversight