attack 2025

Collaborative Shadows: Distributed Backdoor Attacks in LLM-Based Multi-Agent Systems

Pengyu Zhu 1,2, Lijun Li 2, Yaxing Lyu 3, Li Sun 4, Sen Su 1, Jing Shao 2

0 citations · 54 references · arXiv

α

Published on arXiv

2510.11246

Model Poisoning

OWASP ML Top 10 — ML10

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

Distributed backdoor attack achieves over 95% attack success rate in LLM multi-agent systems without degrading benign task performance.

Distributed Backdoor Attack (DBA-MAS)

Novel technique introduced


LLM-based multi-agent systems (MAS) demonstrate increasing integration into next-generation applications, but their safety in backdoor attacks remains largely underexplored. However, existing research has focused exclusively on single-agent backdoor attacks, overlooking the novel attack surfaces introduced by agent collaboration in MAS. To bridge this gap, we present the first Distributed Backdoor Attack tailored to MAS. We decompose the backdoor into multiple distributed attack primitives that are embedded within MAS tools. These primitives remain dormant individually but collectively activate only when agents collaborate in a specific sequence, thereby assembling the full backdoor to execute targeted attacks such as data exfiltration. To fully assess this threat, we introduce a benchmark for multi-role collaborative tasks and a sandboxed framework to evaluate. Extensive experiments demonstrate that our attack achieves an attack success rate exceeding 95% without degrading performance on benign tasks. This work exposes novel backdoor attack surfaces that exploit agent collaboration, underscoring the need to move beyond single-agent protection. Code and benchmark are available at https://github.com/whfeLingYu/Distributed-Backdoor-Attacks-in-MAS.


Key Contributions

  • First distributed backdoor attack for LLM-based multi-agent systems, decomposing a backdoor into distributed primitives embedded in MAS tools that only activate when agents collaborate in a specific sequence
  • New benchmark for multi-role collaborative tasks and a sandboxed evaluation framework to assess MAS backdoor threats
  • Demonstrates >95% attack success rate on targeted behaviors (e.g., data exfiltration) with no degradation on benign tasks

🛡️ Threat Analysis

Model Poisoning

The core contribution is a backdoor attack: attack primitives are hidden/dormant individually and activate only upon a specific trigger condition (agents collaborating in a particular sequence), exhibiting exactly the hidden, targeted, trigger-based behavior that defines ML10. The attack enables targeted outcomes like data exfiltration.


Details

Domains
nlp
Model Types
llm
Threat Tags
targetedinference_timegrey_box
Datasets
MultiRoleBench (introduced by authors)AgentBench
Applications
llm-based multi-agent systemsdata exfiltrationagentic ai pipelines