
Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning

Baolei Zhang 1, Minghong Fang 2, Zhuqing Liu 3, Biao Yi 1, Peizhao Zhou 1, Yuan Wang 1, Tong Li 1, Zheli Liu 1

1 citations · 74 references · TIFS


Published on arXiv · 2512.17254

Data Poisoning Attack · OWASP ML Top 10: ML02

Model Inversion Attack · OWASP ML Top 10: ML03

Key Finding

ABBR runs significantly faster and incurs minimal communication overhead compared to existing defenses, while maintaining nearly the same Byzantine resilience.

ABBR

Novel technique introduced


Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both Byzantine and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to utilize dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact on the global model of malicious models that bypass filtering. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster, has minimal communication overhead, and maintains nearly the same Byzantine resilience as the baselines.


Key Contributions

  • First use of dimensionality reduction to speed up private computation of complex Byzantine-filtering rules in privacy-preserving FL, substantially reducing overhead.
  • Theoretical analysis of accuracy loss from vector-wise filtering in low-dimensional space and an adaptive tuning strategy to minimize the impact of malicious models that bypass filtering.
  • ABBR framework integrating state-of-the-art Byzantine-robust aggregation rules with privacy-preserving computation, achieving near-baseline Byzantine-resilience with significantly lower cost.
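The paper does not spell out its dimensionality-reduction method here, but the first contribution can be illustrated with a standard Johnson-Lindenstrauss-style random projection: client updates are projected from dimension d to a much smaller k before a distance-based filtering rule is applied, which is the step that would otherwise dominate the cost of private computation. The function and parameter names below are hypothetical, not taken from ABBR.

```python
import numpy as np

def project_updates(updates, k, seed=0):
    """Project d-dimensional client updates into k dimensions with a
    random Gaussian matrix (Johnson-Lindenstrauss style), so that
    pairwise distances are approximately preserved."""
    d = updates.shape[1]
    rng = np.random.default_rng(seed)
    # Scaling by 1/sqrt(k) keeps expected squared distances unchanged.
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return updates @ R

# Toy example: 8 clients with 10,000-dim updates, 2 of them malicious.
rng = np.random.default_rng(1)
honest = rng.normal(0.0, 1.0, size=(6, 10_000))
malicious = rng.normal(10.0, 1.0, size=(2, 10_000))
updates = np.vstack([honest, malicious])

low = project_updates(updates, k=64)
# Distance-based filtering now runs on 64 dims instead of 10,000.
center = np.median(low, axis=0)
dists = np.linalg.norm(low - center, axis=1)
keep = np.argsort(dists)[:6]          # keep the 6 closest clients
aggregate = updates[keep].mean(axis=0)
```

Because the projection approximately preserves pairwise distances, the filtering decision in 64 dimensions usually agrees with the decision in the full space, which is exactly the accuracy-loss trade-off the paper's theoretical analysis addresses.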

🛡️ Threat Analysis

Data Poisoning Attack

The core contribution includes Byzantine-robust aggregation rules for FL, which defend against malicious clients that manipulate their model updates to degrade or compromise the global federated model: the canonical ML02 threat in federated settings.
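As a concrete example of the kind of Byzantine-robust aggregation rule ABBR integrates (the paper's exact rules are not listed on this card), a coordinate-wise trimmed mean discards the most extreme values in each coordinate before averaging, limiting how much a poisoned update can shift the aggregate:

```python
import numpy as np

def trimmed_mean(updates, trim):
    """Coordinate-wise trimmed mean: drop the `trim` largest and
    `trim` smallest values in each coordinate, then average the rest."""
    s = np.sort(updates, axis=0)
    return s[trim:updates.shape[0] - trim].mean(axis=0)

# 5 clients; one sends an inflated (poisoned) update.
updates = np.array([
    [0.9, 1.1],
    [1.0, 1.0],
    [1.1, 0.9],
    [1.0, 1.0],
    [50.0, -50.0],   # Byzantine client
])
robust = trimmed_mean(updates, trim=1)   # stays close to [1.0, 1.0]
```

A plain mean of these updates would be pulled to roughly [10.8, -9.2] by the single Byzantine client, while the trimmed mean removes the outlier in every coordinate.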

Model Inversion Attack

The paper explicitly defends against privacy inference attacks where adversaries exploit client model updates to reconstruct private training data; the dimensionality-reduction approach accelerates privacy-preserving computation that hides individual gradients from inference by the server or other participants.
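One common way to hide individual gradients from the server, in the spirit of the privacy-preserving computation described above, is pairwise additive masking (as in secure aggregation): each pair of clients derives a shared random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate. This is an illustrative sketch, not ABBR's protocol; key agreement and dropout handling are omitted.

```python
import numpy as np

D = 4
clients = [0, 1, 2]
updates = {c: np.full(D, float(c + 1)) for c in clients}  # toy updates

# Each pair agrees on a shared seed (in practice via key exchange).
pair_seed = {(i, j): 100 * i + j for i in clients for j in clients if i < j}

def mask(cid):
    """Lower-id peer adds the pairwise mask; higher-id peer subtracts it."""
    m = np.zeros(D)
    for (i, j), s in pair_seed.items():
        r = np.random.default_rng(s).standard_normal(D)
        if cid == i:
            m += r
        elif cid == j:
            m -= r
    return m

# Each client sends only its masked update to the server.
masked = [updates[c] + mask(c) for c in clients]

# Summing the masked updates cancels every pairwise mask exactly,
# revealing the aggregate but no individual update.
total = sum(masked)
```

Individual masked vectors look random to the server, which blocks gradient-based inversion of any single client's data, while the aggregate needed for training is recovered exactly.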


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Applications
federated learning, collaborative model training