
XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers

Israt Jahan Mouri¹, Muhammad Ridowan², Muhammad Abdullah Adnan¹



Published on arXiv

arXiv:2604.09489

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

XFED bypasses eight state-of-the-art Byzantine-robust defenses while requiring no coordination between attackers

XFED

Novel technique introduced


Abstract

Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices — an approach that is costly to maintain and highly vulnerable to detection. This context raises a fundamental question: can model poisoning attacks remain effective without any communication between attackers? To address this challenge, we introduce and formalize the non-collusive attack model, in which all compromised clients share a common adversarial objective but operate independently. Under this model, each attacker generates its malicious update without communicating with other adversaries, accessing other clients' updates, or relying on any knowledge of server-side defenses. To demonstrate the feasibility of this threat model, we propose XFED, the first aggregation-agnostic, non-collusive model poisoning attack. Our empirical evaluation across six benchmark datasets shows that XFED bypasses eight state-of-the-art defenses and outperforms six existing model poisoning attacks. These findings indicate that FL systems are substantially less secure than previously believed and underscore the urgent need for more robust and practical defense mechanisms.
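The non-collusive model described above can be sketched in a few lines. Below is a minimal illustration (not the XFED construction, which the excerpt does not specify): each attacker crafts its update from its own local training and private randomness only, with no shared seed, no access to other clients' updates, and no knowledge of the server's aggregator. The naive sign-flip rule used here is filtered out by a coordinate-wise median, which is precisely why a stronger non-collusive construction such as XFED is claimed to be necessary.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median, a common Byzantine-robust aggregator."""
    return np.median(np.asarray(updates, dtype=float), axis=0)

def naive_independent_attacker(own_update, rng):
    # Stand-in for a non-collusive attacker: it uses ONLY its own local
    # update plus private randomness -- no other clients' updates, no
    # shared seed, no knowledge of the server-side defense.
    # (This sign-flip-and-scale rule is NOT the XFED construction.)
    return -2.0 * own_update + rng.normal(0.0, 0.1, own_update.shape)

rng = np.random.default_rng(0)
# 7 benign clients whose local updates hover around 1.0 per coordinate.
benign = [np.ones(4) + rng.normal(0.0, 0.05, 4) for _ in range(7)]

# 3 attackers, each with an independent private seed: no coordination channel.
malicious = []
for seed in (101, 202, 303):
    r = np.random.default_rng(seed)
    own = np.ones(4) + r.normal(0.0, 0.05, 4)   # attacker's own local training
    malicious.append(naive_independent_attacker(own, r))

# With a benign majority, the median discards these naive outliers --
# illustrating why bypassing robust aggregators without collusion is hard.
robust = coordinate_median(benign + malicious)
print(robust.round(2))
```

The point of the sketch is the information flow, not the attack rule: every quantity an attacker uses is computed locally, matching the paper's formalization of independently operating adversaries.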


Key Contributions

  • First non-collusive model poisoning attack where adversarial clients operate independently without inter-attacker communication
  • XFED attack bypasses 8 Byzantine-robust FL defenses and outperforms 6 existing poisoning attacks
  • Demonstrates that FL systems are less secure than previously believed, since effective attacks do not require botnet-level coordination

🛡️ Threat Analysis

Data Poisoning Attack

Attackers corrupt the global federated model by submitting malicious model updates during training, i.e., data/model poisoning at training time.
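To see why unchecked training-time updates are dangerous, consider plain FedAvg with no robust aggregation. In this toy sketch (hypothetical numbers, not from the paper), a single attacker submits a scaled-negative update and drags every coordinate of the global model away from the benign consensus.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Plain FedAvg: weighted average of client model updates."""
    updates = np.asarray(updates, dtype=float)
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return np.average(updates, axis=0, weights=weights)

# Toy setup: 9 benign clients whose updates are ~1.0 per coordinate,
# plus 1 attacker sending a scaled-negative update (illustrative factor,
# not XFED's actual construction).
benign = [np.ones(4) + 0.01 * i for i in range(9)]
malicious = -10.0 * np.ones(4)

clean = fedavg(benign)                  # ~[1.04, 1.04, 1.04, 1.04]
poisoned = fedavg(benign + [malicious]) # every coordinate pushed negative

print(clean.round(2), poisoned.round(2))
```

This is the gap Byzantine-robust defenses (and, per the paper, eight state-of-the-art ones) aim to close, and the gap XFED claims to reopen without attacker coordination.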


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Datasets
six benchmark datasets (specific names not mentioned in excerpt)
Applications
federated learning