benchmark 2026

BackdoorAgent: A Unified Framework for Backdoor Attacks on LLM-based Agents

Yunhao Feng 1,2,3, Yige Li 4, Yutao Wu 5, Yingshui Tan 2, Yanming Guo 3, Yifan Ding 1,2, Kun Zhai 1, Xingjun Ma 1,6, Yu-Gang Jiang 1

1 citations · 50 references · arXiv

α

Published on arXiv

2601.04566

Model Poisoning

OWASP ML Top 10 — ML10

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Backdoor triggers injected at a single agent workflow stage persist and propagate through multiple intermediate states, with memory-stage attacks showing the highest persistence rate of 77.97% on a GPT backbone.

BackdoorAgent

Novel technique introduced


Large language model (LLM) agents execute tasks through multi-step workflows that combine planning, memory, and tool use. While this design enables autonomy, it also expands the attack surface for backdoor threats. Backdoor triggers injected into specific stages of an agent workflow can persist through multiple intermediate states and adversely influence downstream outputs. However, existing studies remain fragmented and typically analyze individual attack vectors in isolation, leaving the cross-stage interaction and propagation of backdoor triggers poorly understood from an agent-centric perspective. To fill this gap, we propose \textbf{BackdoorAgent}, a modular and stage-aware framework that provides a unified, agent-centric view of backdoor threats in LLM agents. BackdoorAgent structures the attack surface into three functional stages of agentic workflows, including \textbf{planning attacks}, \textbf{memory attacks}, and \textbf{tool-use attacks}, and instruments agent execution to enable systematic analysis of trigger activation and propagation across different stages. Building on this framework, we construct a standardized benchmark spanning four representative agent applications: \textbf{Agent QA}, \textbf{Agent Code}, \textbf{Agent Web}, and \textbf{Agent Drive}, covering both language-only and multimodal settings. Our empirical analysis shows that \textit{triggers implanted at a single stage can persist across multiple steps and propagate through intermediate states.} For instance, when using a GPT-based backbone, we observe trigger persistence in 43.58\% of planning attacks, 77.97\% of memory attacks, and 60.28\% of tool-stage attacks, highlighting the vulnerabilities of the agentic workflow itself to backdoor threats. To facilitate reproducibility and future research, our code and benchmark are publicly available at GitHub.


Key Contributions

  • BackdoorAgent: a modular, stage-aware framework that unifies and systematizes backdoor threat analysis across the three functional stages of LLM agent workflows (planning, memory, tool use)
  • A standardized benchmark spanning four representative agent applications (Agent QA, Agent Code, Agent Web, Agent Drive) covering both language-only and multimodal settings
  • Empirical characterization of cross-stage trigger persistence showing that single-stage backdoor triggers propagate through intermediate states (43.58% planning, 77.97% memory, 60.28% tool-stage using GPT backbone)

🛡️ Threat Analysis

Model Poisoning

The paper is fundamentally about backdoor/trojan attacks — triggers injected into agent workflow stages that activate targeted malicious behavior. It studies trigger persistence and propagation across planning, memory, and tool-use stages, which is core ML10 territory.


Details

Domains
nlpmultimodal
Model Types
llmvlm
Threat Tags
training_timeinference_timetargeted
Datasets
Agent QAAgent CodeAgent WebAgent Drive
Applications
llm agentsquestion answering agentscode generation agentsweb agentsautonomous driving agents