arXiv · Nov 4, 2025
Lihan Xu, Yanjie Dong, Gang Wang et al. · Shenzhen MSU-BIT University · Beijing Institute of Technology
Defends federated learning against Byzantine adversaries by combining Nesterov momentum with robust aggregation for fast, resilient convergence
Tags: Data Poisoning Attack · Federated Learning
We investigate robust federated learning, where a group of workers collaboratively trains a shared model under the orchestration of a central server in the presence of Byzantine adversaries capable of arbitrary and potentially malicious behaviors. To simultaneously enhance communication efficiency and robustness against such adversaries, we propose a Byzantine-resilient Nesterov-Accelerated Federated Learning (Byrd-NAFL) algorithm. Byrd-NAFL seamlessly integrates Nesterov's momentum into the federated learning process alongside Byzantine-resilient aggregation rules to achieve fast convergence that is safeguarded against gradient corruption. We establish a finite-time convergence guarantee for Byrd-NAFL under non-convex and smooth loss functions with a relaxed assumption on the aggregated gradients. Extensive numerical experiments validate the effectiveness of Byrd-NAFL and demonstrate its superiority over existing benchmarks in terms of convergence speed, accuracy, and resilience to diverse Byzantine attack strategies.
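The paper does not spell out its update rule here, but the combination it describes, Nesterov momentum applied on top of a Byzantine-resilient aggregation rule, can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: the function names (`coordinate_median`, `byrd_nafl_step`), the choice of coordinate-wise median as the robust aggregator, and the specific momentum form are all assumptions for illustration.

```python
import numpy as np

def coordinate_median(grads):
    # A standard Byzantine-resilient aggregation rule (assumed here for
    # illustration): take the coordinate-wise median of worker gradients,
    # which tolerates a minority of arbitrarily corrupted vectors.
    return np.median(np.stack(grads), axis=0)

def byrd_nafl_step(x, v, grad_fn, n_workers, n_byz, lr=0.1, beta=0.9):
    # One illustrative server round (hypothetical update form):
    # honest workers evaluate gradients at the Nesterov look-ahead point,
    # Byzantine workers return arbitrary vectors, and the server applies
    # a momentum update to the robustly aggregated gradient.
    look_ahead = x + beta * v
    grads = []
    for w in range(n_workers):
        if w < n_byz:
            # Byzantine worker: sends an arbitrary (here, random) vector
            grads.append(10.0 * np.random.randn(*x.shape))
        else:
            # Honest worker: gradient at the look-ahead point
            grads.append(grad_fn(look_ahead))
    g = coordinate_median(grads)
    v = beta * v - lr * g          # momentum update
    return x + v, v

# Toy usage: minimize f(x) = 0.5 * ||x||^2 (so grad_fn is the identity)
# with 10 workers, 3 of them Byzantine.
x, v = np.ones(5), np.zeros(5)
for _ in range(200):
    x, v = byrd_nafl_step(x, v, grad_fn=lambda z: z, n_workers=10, n_byz=3)
```

With 7 honest workers sending identical gradients, the coordinate-wise median ignores the 3 corrupted vectors entirely, so the toy run converges to the minimizer despite the attack; the paper's analysis covers the general non-convex smooth setting rather than this quadratic toy.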