Defense · 2025

FedGreed: A Byzantine-Robust Loss-Based Aggregation Method for Federated Learning

Emmanouil Kritharakis, Antonios Makris, Dusan Jakovetic, Konstantinos Tserpes



Published on arXiv (arXiv:2508.18060)

Data Poisoning Attack · OWASP ML Top 10 (ML02)

Key Finding

FedGreed outperforms standard and robust FL aggregation baselines (Krum, Multi-Krum, Trimmed Mean, Median) in the majority of Byzantine adversarial scenarios tested, including label flipping and Gaussian noise injection.

FedGreed (novel technique introduced)


Federated Learning (FL) enables collaborative model training across multiple clients while preserving data privacy by keeping local datasets on-device. In this work, we address FL settings where clients may behave adversarially, exhibiting Byzantine attacks, while the central server is trusted and equipped with a reference dataset. We propose FedGreed, a resilient aggregation strategy for federated learning that does not require any assumptions about the fraction of adversarial participants. FedGreed orders clients' local model updates based on their loss metrics evaluated against a trusted dataset on the server and greedily selects a subset of clients whose models exhibit the minimal evaluation loss. Unlike many existing approaches, our method is designed to operate reliably under heterogeneous (non-IID) data distributions, which are prevalent in real-world deployments. FedGreed exhibits convergence guarantees and bounded optimality gaps under strong adversarial behavior. Experimental evaluations on MNIST, FMNIST, and CIFAR-10 demonstrate that our method significantly outperforms standard and robust federated learning baselines, such as Mean, Trimmed Mean, Median, Krum, and Multi-Krum, in the majority of adversarial scenarios considered, including label flipping and Gaussian noise injection attacks. All experiments were conducted using the Flower federated learning framework.


Key Contributions

  • FedGreed: a greedy loss-based aggregation strategy that selects clients with minimal evaluation loss on a server-side reference dataset, requiring no assumption on the fraction of Byzantine participants
  • Convergence guarantees and bounded optimality gaps under strong adversarial behavior with non-IID data distributions
  • Empirical demonstration of superiority over Mean, Trimmed Mean, Median, Krum, and Multi-Krum baselines across label flipping and Gaussian noise injection attacks on MNIST, FMNIST, and CIFAR-10
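The selection rule described above (rank client updates by their loss on the server's trusted reference dataset, then greedily keep the lowest-loss subset) can be sketched in a few lines. This is an illustrative sketch, not the paper's reference implementation: the function name `fedgreed_aggregate`, the flat-vector representation of model updates, and the first-non-improvement stopping rule are assumptions for clarity.

```python
import numpy as np

def fedgreed_aggregate(updates, eval_loss):
    """Greedy loss-based aggregation sketch (hypothetical API).

    updates   : list of client weight vectors (np.ndarray, same shape)
    eval_loss : callable mapping a weight vector to its loss on the
                server-side trusted reference dataset
    Returns the aggregated (averaged) weight vector.
    """
    # 1. Rank clients by the reference loss of each individual update.
    ranked = sorted(updates, key=eval_loss)

    # 2. Greedily grow the selected subset: add the next-best client
    #    while the averaged model's reference loss keeps improving.
    selected = [ranked[0]]
    best = eval_loss(np.mean(selected, axis=0))
    for upd in ranked[1:]:
        candidate = np.mean(selected + [upd], axis=0)
        loss = eval_loss(candidate)
        if loss <= best:
            selected.append(upd)
            best = loss
        else:
            break  # candidates are loss-sorted; stop at first non-improvement
    return np.mean(selected, axis=0)
```

Because no assumption on the Byzantine fraction is needed, a poisoned update simply never enters the selected subset: it raises the averaged model's reference loss and the greedy loop stops before including it.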

🛡️ Threat Analysis

Data Poisoning Attack

Defends against Byzantine attacks in federated learning where malicious clients submit poisoned model updates (label flipping, Gaussian noise injection) to degrade global model performance — the canonical FL data poisoning threat. FedGreed is a robust aggregation defense that selects clients by minimal evaluation loss on a trusted reference dataset.


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, untargeted
Datasets
MNIST, FMNIST, CIFAR-10
Applications
federated learning, image classification