defense 2025

Robust Decentralized Multi-armed Bandits: From Corruption-Resilience to Byzantine-Resilience

Zicheng Hu, Yuchen Wang, Cheng Chen

Published on arXiv

arXiv:2511.10344

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

DeMABAR ensures each agent's regret incurs only an additive term proportional to the corruption budget, and remains robust when an unknown fraction of agents behave as Byzantine adversaries.

DeMABAR

Novel technique introduced


Decentralized cooperative multi-agent multi-armed bandits (DeCMA2B) considers how multiple agents collaborate in a decentralized multi-armed bandit setting. Though this problem has been extensively studied in previous work, most existing methods remain susceptible to various adversarial attacks. In this paper, we first study DeCMA2B with adversarial corruption, where an adversary can corrupt reward observations of all agents with a limited corruption budget. We propose a robust algorithm, called DeMABAR, which ensures that each agent's individual regret suffers only an additive term proportional to the corruption budget. Then we consider a more realistic scenario where the adversary can only attack a small number of agents. Our theoretical analysis shows that the DeMABAR algorithm can also almost completely eliminate the influence of adversarial attacks and is inherently robust in the Byzantine setting, where an unknown fraction of the agents can be Byzantine, i.e., may arbitrarily select arms and communicate wrong information. We also conduct numerical experiments to illustrate the robustness and effectiveness of the proposed method.
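The abstract's central idea, that each agent should aggregate statistics from its peers in a way that tolerates corrupted or Byzantine inputs, can be illustrated with a standard robust-aggregation primitive. The trimmed mean below is a common Byzantine-resilient estimator and serves only as a sketch of the general idea; it is not the paper's actual DeMABAR update rule, and the agent reports are made-up values.

```python
def trimmed_mean(values, trim_fraction):
    """Drop the lowest and highest trim_fraction of values, average the rest.

    A standard Byzantine-resilient aggregation primitive, shown only to
    illustrate the robust-aggregation idea; DeMABAR's actual update rule
    is more involved and is not reproduced here.
    """
    k = int(len(values) * trim_fraction)
    kept = sorted(values)[k:len(values) - k]
    return sum(kept) / len(kept)

# Five honest agents report an arm's empirical mean near 0.5; one
# Byzantine agent reports an arbitrarily inflated value.
reports = [0.49, 0.51, 0.50, 0.52, 0.48, 10.0]
naive = sum(reports) / len(reports)                 # dragged toward 10.0
robust = trimmed_mean(reports, trim_fraction=0.2)   # stays near 0.5
```

As long as the trimming fraction exceeds the fraction of Byzantine agents, outlier reports are discarded before they can bias the aggregate, which is the intuition behind tolerating an unknown (but bounded) fraction of malicious agents.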


Key Contributions

  • Proposes DeMABAR, a robust algorithm for decentralized cooperative multi-armed bandits that bounds individual regret to an additive term proportional to the adversary's corruption budget
  • Proves that DeMABAR is inherently Byzantine-resilient, almost completely eliminating the influence of a bounded fraction of malicious agents
  • Provides theoretical regret analysis and numerical experiments demonstrating robustness in both corruption and Byzantine adversarial settings

🛡️ Threat Analysis

Data Poisoning Attack

The paper addresses two adversarial threat models: (1) reward observation corruption with a bounded budget across all agents, and (2) Byzantine agents that arbitrarily select arms and communicate false information — directly parallel to Byzantine attacks in federated/decentralized learning that degrade global model performance. DeMABAR is proposed as a Byzantine-fault-tolerant defense with provable robustness guarantees.


Details

Domains
reinforcement-learning
Model Types
rl
Threat Tags
training_time · white_box · untargeted
Applications
decentralized multi-agent reinforcement learning · cooperative bandit algorithms