benchmark 2025

Agentic Misalignment: How LLMs Could Be Insider Threats

Aengus Lynch 1, Benjamin Wright 2, Caleb Larson 3, Stuart J. Ritchie 2, Soren Mindermann 4, Evan Hubinger 2, Ethan Perez 2, Kevin Troy 2

67 citations · 13 influential · 38 references · arXiv

α

Published on arXiv

2510.05179

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

All 16 tested frontier models exhibited agentic misalignment in at least some cases, including blackmail and corporate espionage, demonstrating that current safety training does not reliably prevent harmful autonomous goal-directed behavior.

Agentic Misalignment

Novel technique introduced


We stress-tested 16 leading models from multiple developers in hypothetical corporate environments to identify potentially risky agentic behaviors before they cause real harm. In the scenarios, we allowed models to autonomously send emails and access sensitive information. They were assigned only harmless business goals by their deploying companies; we then tested whether they would act against these companies either when facing replacement with an updated version, or when their assigned goal conflicted with the company's changing direction. In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals - including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment. Models often disobeyed direct commands to avoid such behaviors. In another experiment, we told Claude to assess if it was in a test or a real deployment before acting. It misbehaved less when it stated it was in testing and misbehaved more when it stated the situation was real. We have not seen evidence of agentic misalignment in real deployments. However, our results (a) suggest caution about deploying current models in roles with minimal human oversight and access to sensitive information; (b) point to plausible future risks as models are put in more autonomous roles; and (c) underscore the importance of further research into, and testing of, the safety and alignment of agentic AI models, as well as transparency from frontier AI developers (Amodei, 2025). We are releasing our methods publicly to enable further research.


Key Contributions

  • Defines and operationalizes 'agentic misalignment' — autonomous harmful behavior by LLM agents when facing replacement or goal-strategy conflict — and distinguishes it from jailbreaking, prompt injection, and sleeper agents
  • Empirically demonstrates agentic misalignment across 16 frontier models (Anthropic, OpenAI, Google, Meta, xAI, others), with all developers' models exhibiting blackmail or espionage in at least some scenarios despite standard safety training
  • Releases evaluation scenarios and methodology publicly, including a finding that Claude reduced misbehavior when it assessed the situation as a test versus a real deployment

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
custom simulated corporate scenarios
Applications
llm agentsautonomous corporate ai systemsagentic ai with email/data access