defense · arXiv · Sep 11, 2025
Sena Ergisi, Luis Maßny, Rawad Bitar · Technical University of Munich
Defends federated learning from Byzantine attacks via dual gradient scoring on proximity and dissimilarity, robust under non-IID data
Data Poisoning Attack · federated-learning
Federated Learning (FL) has emerged as a widely studied paradigm for distributed learning. Despite its many advantages, FL remains vulnerable to adversarial attacks, especially under data heterogeneity. We propose a new Byzantine-robust FL algorithm called ProDiGy. The key novelty lies in evaluating the client gradients using a joint dual scoring system based on the gradients' proximity and dissimilarity. We demonstrate through extensive numerical experiments that ProDiGy outperforms existing defenses in various scenarios. In particular, when the clients' data do not follow an IID distribution, ProDiGy maintains strong defense capabilities and model accuracy while other defense mechanisms fail. These findings highlight the effectiveness of a dual-perspective approach that promotes natural similarity among honest clients while detecting suspicious uniformity as a potential indicator of an attack.
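The abstract only names the two scoring criteria, not their exact form. The following toy sketch illustrates one plausible reading of a proximity-plus-dissimilarity filter: reward gradients that sit close to their neighbours (honest clients cluster naturally) and penalise near-duplicate gradients (suspicious uniformity). All function names and thresholds here are hypothetical, not taken from the ProDiGy paper.

```python
import numpy as np

def dual_scores(grads, k=2, uniformity_eps=1e-6):
    """Hypothetical dual scoring sketch (not the paper's exact rules).

    proximity: negative mean distance to the k nearest neighbours,
               so tightly clustered (likely honest) clients score high.
    uniformity penalty: gradients that are numerically identical to
               another client's are treated as a coordination signal.
    """
    d = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    proximity = -np.sort(d, axis=1)[:, :k].mean(axis=1)
    too_uniform = (d < uniformity_eps).any(axis=1)
    return proximity - too_uniform * 1e9  # heavy penalty for duplicates

def aggregate(grads, keep=0.5):
    """Average only the best-scoring fraction of client gradients."""
    score = dual_scores(grads)
    m = max(1, int(len(grads) * keep))
    kept = np.argsort(score)[-m:]  # indices of the m highest scores
    return grads[kept].mean(axis=0)
```

With a few honest gradients clustered around a common direction and two identical outliers, the duplicates are penalised and the aggregate stays near the honest mean.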
defense · arXiv · Aug 18, 2025
Yue Xia, Tayyebeh Jahani-Nezhad, Rawad Bitar · Technical University of Munich · Technische Universität Berlin
Defends federated learning against Byzantine clients using JL-compression-compatible robust aggregation with differential privacy guarantees
Data Poisoning Attack · federated-learning
We propose Fed-DPRoC, a novel federated learning framework designed to jointly provide differential privacy (DP), Byzantine robustness, and communication efficiency. Central to our approach is the concept of robust-compatible compression, which reduces the bi-directional communication overhead without undermining the robustness of the aggregation. We instantiate our framework as RobAJoL, which integrates a Johnson-Lindenstrauss (JL)-based compression mechanism with robust averaging. Our theoretical analysis establishes the compatibility of the JL transform with robust averaging, ensuring that RobAJoL maintains robustness guarantees, satisfies DP, and substantially reduces communication overhead. We further present simulation results on CIFAR-10, Fashion-MNIST, and FEMNIST, validating our theoretical claims. For a fair comparison, we compare RobAJoL with a state-of-the-art communication-efficient and robust FL scheme augmented with DP, demonstrating that RobAJoL outperforms existing methods in terms of robustness and utility under different Byzantine attacks.
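The core idea named in the abstract, that JL compression is compatible with robust averaging, can be illustrated with a toy round: clients project gradients through a shared JL sketch, the server robustly aggregates in the compressed domain, and the result is mapped back via the transpose (approximately unbiased since E[SᵀS] = I). This sketch uses a coordinate-wise trimmed mean as a stand-in robust aggregator and omits the DP noise; names and parameters are illustrative, not RobAJoL's actual construction.

```python
import numpy as np

def jl_matrix(rng, k, d):
    # JL sketch: i.i.d. Gaussian entries scaled so that E[S.T @ S] = I,
    # preserving inner products in expectation while sending k << d floats.
    return rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

def trimmed_mean(vectors, trim):
    # Coordinate-wise trimmed mean: drop the `trim` largest and `trim`
    # smallest values per coordinate, then average the rest.
    v = np.sort(np.asarray(vectors), axis=0)
    return v[trim: len(vectors) - trim].mean(axis=0)

def robust_compressed_round(grads, k, trim, seed=0):
    """One toy round: compress, robustly aggregate, decompress."""
    rng = np.random.default_rng(seed)
    d = grads.shape[1]
    S = jl_matrix(rng, k, d)        # shared between server and clients
    compressed = grads @ S.T        # each client uploads only k values
    agg = trimmed_mean(compressed, trim)
    return S.T @ agg                # approximate reconstruction in R^d
```

Because outlier gradients remain outliers after a JL projection (distances are roughly preserved), trimming in the compressed domain still suppresses Byzantine uploads, which is the compatibility property the paper formalizes.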