arXiv · Sep 11, 2025
Sena Ergisi, Luis Maßny, Rawad Bitar · Technical University of Munich
Defends federated learning against Byzantine attacks via a dual gradient-scoring scheme based on proximity and dissimilarity; remains robust under non-IID data.
Tags: defense, data poisoning attack, federated-learning
Federated Learning (FL) has emerged as a widely studied paradigm for distributed learning. Despite its many advantages, FL remains vulnerable to adversarial attacks, especially under data heterogeneity. We propose a new Byzantine-robust FL algorithm called ProDiGy. The key novelty lies in evaluating the client gradients with a joint dual scoring system based on the gradients' proximity and dissimilarity. We demonstrate through extensive numerical experiments that ProDiGy outperforms existing defenses across a range of scenarios. In particular, when the clients' data do not follow an IID distribution, ProDiGy maintains strong defense capabilities and model accuracy while other defense mechanisms fail. These findings highlight the effectiveness of a dual-perspective approach that promotes natural similarity among honest clients while treating suspicious uniformity as a potential indicator of an attack.
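The abstract does not give ProDiGy's exact scoring rule, but the dual-perspective idea can be illustrated with a minimal sketch: reward gradients that are close to a robust reference (here the coordinate-wise median, an assumption, not necessarily the paper's choice), and penalize near-identical gradients, since honest clients under non-IID data should differ naturally while colluding attackers may submit suspiciously uniform updates. The function name and all thresholds below are hypothetical.

```python
import numpy as np

def dual_score_aggregate(gradients, n_select, clone_tol=1e-6):
    """Illustrative dual scoring, NOT the paper's exact ProDiGy rule.

    Proximity: smaller distance to the coordinate-wise median gradient
    gives a higher score.
    Dissimilarity: a client whose nearest neighbor is (near-)identical
    is flagged as suspiciously uniform and heavily penalized.
    """
    G = np.stack(gradients)                       # (n_clients, dim)
    median = np.median(G, axis=0)

    # Proximity score: negative distance to the robust reference.
    proximity = -np.linalg.norm(G - median, axis=1)

    # Pairwise distances; ignore self-distances on the diagonal.
    diff = np.linalg.norm(G[:, None, :] - G[None, :, :], axis=2)
    np.fill_diagonal(diff, np.inf)
    nearest = diff.min(axis=1)

    # Suspicious uniformity: near-duplicate gradients get a large penalty.
    penalty = np.where(nearest < clone_tol, 1e9, 0.0)
    score = proximity - penalty

    # Keep the n_select best-scoring clients and average their gradients.
    keep = np.sort(np.argsort(score)[-n_select:])
    return G[keep].mean(axis=0), keep.tolist()
```

Usage on a toy round with five distinct honest clients and three identical (cloned) attacker gradients: the clones trip the uniformity penalty and sit far from the median, so only the honest updates are averaged.

```python
honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.05, 1.0], [0.95, 1.0]]
attackers = [[5.0, 5.0]] * 3
agg, kept = dual_score_aggregate(honest + attackers, n_select=5)
```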