
Power to the Clients: Federated Learning in a Dictatorship Setting

Mohammadsajad Alipour, Mohammad Mohammadi Amiri

0 citations · 31 references · arXiv


Published on arXiv: 2510.22149

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Dictator clients can entirely eliminate the influence of all benign participants on the global FL model using only minimal communication capabilities, with no knowledge of other clients' data or model internals.

Dictator Client Attack

Novel technique introduced


Federated learning (FL) has emerged as a promising paradigm for decentralized model training, enabling multiple clients to collaboratively learn a shared model without exchanging their local data. However, the decentralized nature of FL also introduces vulnerabilities, as malicious clients can compromise or manipulate the training process. In this work, we introduce dictator clients, a novel, well-defined, and analytically tractable class of malicious participants capable of entirely erasing the contributions of all other clients from the server model while preserving their own. We propose concrete attack strategies that empower such clients and systematically analyze their effects on the learning process. Furthermore, we explore complex scenarios involving multiple dictator clients, including cases where they collaborate, act independently, or form an alliance only to ultimately betray one another. For each of these settings, we provide a theoretical analysis of their impact on the global model's convergence. Our attack algorithms and our findings on these multi-dictator scenarios are further supported by empirical evaluations on both computer vision and natural language processing benchmarks.


Key Contributions

  • Introduces 'dictator clients' — a formally defined, analytically tractable class of Byzantine FL participants that fully erase benign client contributions while preserving their own, using only minimal inter-client communication.
  • Proposes concrete attack algorithms enabling dictator behavior under realistic capability constraints (no access to other clients' data or the global model's internals).
  • Analyzes multi-agent adversarial dynamics among multiple dictator clients, including collaboration, independent action, and alliance-then-betrayal scenarios, with theoretical convergence analysis and empirical validation.

🛡️ Threat Analysis

Data Poisoning Attack

Dictator clients are a formally defined class of Byzantine FL participants that send strategically manipulated updates to the server, erasing all benign clients' contributions from the global model. This is a canonical instance of Byzantine attacks in federated learning, the archetypal ML02 (Data Poisoning Attack) threat.
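To make the aggregation-override idea concrete, here is a minimal sketch (not the paper's algorithm) of a model-replacement-style manipulation against plain FedAvg: if the attacker knows the previous global model, the number of clients, and that aggregation weights are equal, it can send an update scaled so that the benign contributions approximately cancel out of the average. The `fedavg`, `target`, and toy model vectors below are all illustrative assumptions.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted FedAvg aggregation of client model vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Previous-round global model, known to every client.
global_model = np.zeros(4)

# Two benign clients send honest local models (small drift from global).
benign = [global_model + np.array([0.1, 0.0, 0.1, 0.0]),
          global_model + np.array([0.0, 0.1, 0.0, 0.1])]

# The malicious client wants the aggregate to equal `target`.
target = np.array([5.0, -5.0, 5.0, -5.0])

# With n clients at equal weight 1/n, sending
#   n * target - (n - 1) * global_model
# makes the average equal `target` plus the benign clients' drift / n.
# This assumes benign deviations from the previous global model are
# small (they shrink toward zero near convergence), so the override
# is approximate, not exact.
n = 3
malicious = n * target - (n - 1) * global_model

aggregated = fedavg(benign + [malicious], [1, 1, 1])
print(aggregated)  # close to `target`, off only by the benign drift / n
```

The same arithmetic explains why norm clipping and robust aggregators (median, trimmed mean) blunt this attack: the malicious update's magnitude grows linearly in `n`, making it an outlier.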


Details

Domains
vision, nlp, federated-learning
Model Types
federated
Threat Tags
training_time, grey_box, targeted
Datasets
CIFAR-10, CIFAR-100, MNIST, AG News
Applications
federated learning, image classification, text classification