
Robust Federated Learning via Byzantine Filtering over Encrypted Updates

Adda Akram Bendoukha 1, Aymen Boudguiga 2, Nesrine Kaaniche 1, Renaud Sirdey 2, Didem Demirag 3, Sébastien Gambs 3

0 citations · 99 references · arXiv (Cornell University)


Published on arXiv · arXiv:2602.05410

Threat tags (OWASP ML Top 10):

  • Data Poisoning Attack (ML02)
  • Model Poisoning (ML10)
  • Model Inversion Attack (ML03)

Key Finding

SVM-based Byzantine filtering achieves 90–94% accuracy in identifying malicious FL updates over CKKS-encrypted gradients with encrypted inference runtimes of 6–24 seconds and marginal loss in model utility.

Byzantine Filtering over Encrypted Updates (BF-HE)

Novel technique introduced


Federated Learning (FL) aims to train a collaborative model while preserving data privacy. However, the distributed nature of this approach still raises privacy and security issues, such as the exposure of sensitive data through inference attacks and the influence of Byzantine behaviors on the trained model. In particular, achieving both secure aggregation and Byzantine resilience remains challenging, as existing solutions often address these aspects independently. In this work, we propose to address these challenges through a novel approach that combines homomorphic encryption for privacy-preserving aggregation with property-inference-inspired meta-classifiers for Byzantine filtering. First, following the blueprint of property-inference attacks, we train a set of filtering meta-classifiers on labeled shadow updates, reproducing a diverse ensemble of Byzantine misbehaviors in FL, including backdoor, gradient-inversion, label-flipping and shuffling attacks. The outputs of these meta-classifiers are then used to cancel the Byzantine encrypted updates by reweighting. Second, we propose an automated method for selecting the optimal kernel and dimensionality hyperparameters with respect to homomorphic inference, aggregation constraints and efficiency over the CKKS cryptosystem. Finally, we demonstrate through extensive experiments the effectiveness of our approach against Byzantine participants on the FEMNIST, CIFAR-10, GTSRB, and ACSIncome benchmarks. More precisely, our SVM filtering achieves accuracies between 90% and 94% for identifying Byzantine updates at the cost of marginal losses in model utility, with encrypted inference runtimes ranging from 6 to 24 seconds and overall aggregation times ranging from 9 to 26 seconds.
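The shadow-update filtering step described above can be sketched in a few lines. This is a hypothetical plaintext illustration, not the paper's implementation: the summary-statistic features, the simulated update distributions, and the use of scikit-learn's `SVC` are all assumptions made for the sketch, whereas in the paper the trained meta-classifier is evaluated homomorphically over CKKS-encrypted updates.

```python
# Hypothetical sketch: train an SVM meta-classifier on labeled "shadow"
# updates (benign vs. Byzantine) and use it to flag incoming updates.
# Features and data distributions are stand-ins, not the paper's setup.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def update_features(update):
    # Simple summary statistics of a flattened model update (assumed features).
    return np.array([update.mean(), update.std(),
                     np.abs(update).max(), np.linalg.norm(update)])

# Simulated shadow updates: benign ones are small, Byzantine ones are scaled up.
benign = [rng.normal(0.0, 0.1, 1000) for _ in range(100)]
byzantine = [rng.normal(0.0, 5.0, 1000) for _ in range(100)]

X = np.stack([update_features(u) for u in benign + byzantine])
y = np.array([0] * len(benign) + [1] * len(byzantine))  # 1 = Byzantine

# A linear kernel maps naturally onto homomorphic inference (dot products only).
meta_clf = SVC(kernel="linear")
meta_clf.fit(X, y)

# Score a fresh update (plaintext here; encrypted in the actual scheme).
suspect = rng.normal(0.0, 5.0, 1000)
print(meta_clf.predict(update_features(suspect).reshape(1, -1)))  # → [1]
```

The linear kernel is the HE-friendly choice since it reduces inference to inner products; the paper's automated selection method picks the kernel and dimensionality jointly under such homomorphic constraints.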


Key Contributions

  • Property-inference-inspired SVM meta-classifiers trained on shadow FL updates to detect and filter Byzantine encrypted updates (backdoor, label-flipping, gradient-inversion, shuffling), achieving 90–94% filtering accuracy.
  • Automated hyperparameter selection method for CKKS homomorphic encryption optimizing kernel and dimensionality for efficient encrypted inference and aggregation.
  • Unified framework combining Byzantine resilience and privacy-preserving aggregation, demonstrated on FEMNIST, CIFAR-10, GTSRB, and ACSIncome benchmarks.
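The reweighting step in the first contribution can be illustrated with a small plaintext sketch: the meta-classifier verdicts act as 0/1 weights that cancel flagged updates before the federated average. In the paper this multiplication is carried out over CKKS ciphertexts; here NumPy arrays and the `filtered_aggregate` helper are stand-ins for the encrypted arithmetic.

```python
# Plaintext sketch of reweighted aggregation: updates flagged as Byzantine
# (flag = 1) are zeroed out, and the remaining updates are averaged.
# numpy stands in for the CKKS-encrypted arithmetic of the actual scheme.
import numpy as np

def filtered_aggregate(updates, byzantine_flags):
    """Average only the updates whose flag is 0 (benign)."""
    weights = 1.0 - np.asarray(byzantine_flags, dtype=float)  # 1 = keep
    if weights.sum() == 0:
        raise ValueError("all updates were filtered out")
    stacked = np.stack(updates)                 # shape: (clients, params)
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

updates = [np.ones(4), np.ones(4), 100 * np.ones(4)]  # third client is malicious
flags = [0, 0, 1]                                      # meta-classifier verdicts
print(filtered_aggregate(updates, flags))  # → [1. 1. 1. 1.]
```

Note that under homomorphic encryption the aggregator never sees the plaintext updates: it only multiplies ciphertexts by the filtering weights, which is what keeps filtering compatible with secure aggregation.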

🛡️ Threat Analysis

Data Poisoning Attack

Directly defends against Byzantine participants in FL whose primary goal is degrading model performance via label-flipping and shuffling attacks — the canonical ML02 threat for federated learning.

Model Inversion Attack

Homomorphic encryption (CKKS) is a core co-contribution of the paper: it explicitly protects FL gradient updates from gradient-inversion and reconstruction attacks, the canonical ML03 threat. The automated CKKS hyperparameter selection method is itself presented as a novel contribution.

Model Poisoning

Backdoor attacks are explicitly listed as one of the Byzantine misbehaviors the meta-classifier filtering is trained to detect and neutralize in the FL setting.


Details

Domains
federated-learning
Model Types
federated, cnn, traditional_ml
Threat Tags
training_time, grey_box
Datasets
FEMNIST, CIFAR-10, GTSRB, ACSIncome
Applications
federated learning, image classification