attack 2026

The Autonomy Tax: Defense Training Breaks LLM Agents

Shawn Li , Yue Zhao

0 citations

α

Published on arXiv

2603.19423

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Defense-trained models timeout on 99% of tasks vs 13% for baselines, with 73-86% sophisticated attack bypass and 25-71% false positive rates on benign content


Large language model (LLM) agents increasingly rely on external tools (file operations, API calls, database transactions) to autonomously complete complex multi-step tasks. Practitioners deploy defense-trained models to protect against prompt injection attacks that manipulate agent behavior through malicious observations or retrieved content. We reveal a fundamental \textbf{capability-alignment paradox}: defense training designed to improve safety systematically destroys agent competence while failing to prevent sophisticated attacks. Evaluating defended models against undefended baselines across 97 agent tasks and 1,000 adversarial prompts, we uncover three systematic biases unique to multi-step agents. \textbf{Agent incompetence bias} manifests as immediate tool execution breakdown, with models refusing or generating invalid actions on benign tasks before observing any external content. \textbf{Cascade amplification bias} causes early failures to propagate through retry loops, pushing defended models to timeout on 99\% of tasks compared to 13\% for baselines. \textbf{Trigger bias} leads to paradoxical security degradation where defended models perform worse than undefended baselines while straightforward attacks bypass defenses at high rates. Root cause analysis reveals these biases stem from shortcut learning: models overfit to surface attack patterns rather than semantic threat understanding, evidenced by extreme variance in defense effectiveness across attack categories. Our findings demonstrate that current defense paradigms optimize for single-turn refusal benchmarks while rendering multi-step agents fundamentally unreliable, necessitating new approaches that preserve tool execution competence under adversarial conditions.


Key Contributions

  • Identifies three agent-specific failure modes from defense training: agent incompetence bias (47-77% Step-1 failure), cascade amplification bias (99% timeout rates), and trigger bias (73-86% attack bypass)
  • Demonstrates capability-alignment paradox where defense training optimizes single-turn refusal benchmarks while destroying multi-step agent competence
  • Traces all failures to shortcut learning mechanism where models overfit surface attack patterns rather than semantic threat understanding

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timeblack_box
Datasets
AgentDojo1,000 adversarial prompts across 97 agent tasks
Applications
autonomous agentstool-use agentsmulti-step task execution