attack · 2025

BLAST: A Stealthy Backdoor Leverage Attack against Cooperative Multi-Agent Deep Reinforcement Learning based Systems

Jing Fang, Saihao Yan, Xueyu Yin, Yinbo Yu, Chunwei Tian, Jiajia Liu

6 citations · 42 references · arXiv


Published on arXiv

2501.01593

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BLAST achieves high attack success rate against VDN, QMIX, and MAPPO by backdooring only one agent out of the entire multi-agent team, while maintaining low clean performance variance that evades detection.

BLAST

Novel technique introduced


Recent studies have shown that cooperative multi-agent deep reinforcement learning (c-MADRL) is under the threat of backdoor attacks. Once a backdoor trigger is observed, the backdoored agent will perform malicious actions leading to failures or malicious goals. However, existing backdoor attacks suffer from several issues, e.g., instant trigger patterns lack stealthiness, the backdoor is trained or activated by an additional network, or all agents are backdoored. To this end, in this paper, we propose BLAST, a novel backdoor leverage attack against c-MADRL that attacks the entire multi-agent team by embedding the backdoor in only a single agent. First, we introduce adversarial spatiotemporal behavior patterns as the backdoor trigger, rather than manually injected fixed visual patterns or instant statuses, and we control the period during which malicious actions are performed. This method guarantees the stealthiness and practicality of BLAST. Second, we hack the original reward function of the backdoor agent via unilateral guidance to inject BLAST, so as to achieve the 'leverage attack effect' that can pry open the entire multi-agent system via a single backdoor agent. We evaluate BLAST against three classic c-MADRL algorithms (VDN, QMIX, and MAPPO) in two popular c-MADRL environments (SMAC and Pursuit), and against two existing defense mechanisms. The experimental results demonstrate that BLAST achieves a high attack success rate while maintaining a low clean-performance variance rate.


Key Contributions

  • Introduces spatiotemporal behavior patterns (distributed across a sequence of observations) as stealthy backdoor triggers, decoupled in time from the attack actions to evade anomaly detection.
  • Proposes a 'leverage attack effect' that compromises the entire c-MADRL team by backdooring only a single agent, using unilateral reward function manipulation to inject the backdoor.
  • Evaluates BLAST against three c-MADRL algorithms (VDN, QMIX, MAPPO) in two environments (SMAC, Pursuit) and two existing defenses, demonstrating high attack success with low clean performance variance.
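To make the first contribution concrete, a spatiotemporal trigger can be thought of as a specific *sequence* of behavior features unfolding over several timesteps, rather than a single-frame pattern. The sketch below is purely illustrative: the pattern contents, window length, and class names are assumptions, not details from the paper.

```python
from collections import deque

# Illustrative spatiotemporal trigger: the backdoor fires only when a
# specific ordered sequence of discretized behavior features appears in
# the recent observation history. The pattern below is hypothetical.
TRIGGER_PATTERN = ("move_north", "move_north", "attack")
WINDOW = len(TRIGGER_PATTERN)


class TriggerDetector:
    """Tracks the last WINDOW observed behaviors and reports a match."""

    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def observe(self, behavior_feature: str) -> bool:
        """Append the latest behavior; True once the full ordered
        pattern has appeared in the sliding window."""
        self.history.append(behavior_feature)
        return tuple(self.history) == TRIGGER_PATTERN
```

Because the trigger is spread across time and drawn from ordinary-looking behaviors, no single observation is anomalous on its own, which is what makes this style of trigger hard for per-frame anomaly detectors to flag.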

🛡️ Threat Analysis

Model Poisoning

BLAST embeds a hidden backdoor in a single RL agent that activates only when a specific spatiotemporal behavior trigger pattern is observed, causing malicious team-wide behavior while maintaining normal performance otherwise — a classic trojan/backdoor injection into a trained model's policy via reward function manipulation.
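The reward-function manipulation described above can be caricatured as follows: outside the attack window the backdoor agent trains on the ordinary environment reward, so clean performance stays near baseline; after the trigger fires, its reward is reshaped so that degrading the team's return is reinforced. This is a minimal sketch under assumed names and hyperparameters, not the paper's actual reward formulation.

```python
def hacked_reward(env_reward: float, team_reward: float,
                  triggered: bool, steps_since_trigger: int,
                  attack_window: int = 10, weight: float = 1.0) -> float:
    """Hypothetical unilateral reward hack for the single backdoor agent.

    Outside the attack window the agent receives the normal environment
    reward, preserving clean performance. Inside the window the team
    reward's sign is flipped, so actions that harm the whole team are
    reinforced. `attack_window` and `weight` are illustrative values.
    """
    if triggered and steps_since_trigger < attack_window:
        return -weight * team_reward
    return env_reward
```

Training only one agent on this hacked signal is what produces the 'leverage attack effect': the other agents remain benign, yet the team's cooperative coupling lets the single poisoned policy drag down the joint outcome.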


Details

Domains
reinforcement-learning
Model Types
rl, rnn
Threat Tags
white_box, training_time, targeted
Datasets
SMAC, Pursuit
Applications
cooperative multi-agent systems, game ai (starcraft), autonomous driving