defense · arXiv · Oct 9, 2025
Murtaza Rangwala, Farag Azzedin, Richard O. Sinnott et al. · The University of Melbourne · King Fahd University of Petroleum and Minerals
Defends decentralized federated learning against Byzantine poisoning attacks using sketch-based neighbor screening, cutting communication by 50-70%
Data Poisoning Attack · federated-learning
Decentralized Federated Learning enables privacy-preserving collaborative training without centralized servers but remains vulnerable to Byzantine attacks. Existing defenses require exchanging high-dimensional model vectors with all neighbors each round, creating prohibitive costs at scale. We propose SketchGuard, which decouples Byzantine filtering from aggregation via sketch-based screening. SketchGuard compresses $d$-dimensional models to $k$-dimensional sketches ($k \ll d$) using Count Sketch, then fetches full models only from accepted neighbors, reducing communication complexity from $O(d|N_i|)$ to $O(k|N_i| + d|S_i|)$, where $|N_i|$ is the neighbor count and $|S_i| \le |N_i|$ is the accepted count. We prove convergence in strongly convex and non-convex settings, showing that approximation errors introduce only a $(1+O(\epsilon))$ factor in the effective threshold. Experiments demonstrate SketchGuard maintains state-of-the-art robustness (mean TER deviation $\leq$ 0.5 percentage points) while reducing computation by up to 82% and communication by 50-70%.
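The core mechanism can be illustrated with a minimal sketch. Because Count Sketch is a linear map, distances between sketches approximate distances between the underlying models, so a node can screen neighbors cheaply in $k$ dimensions before fetching any full $d$-dimensional model. The snippet below is an illustrative toy, not the paper's implementation; `count_sketch`, the shared seed, and the distance threshold are assumptions for demonstration.

```python
import numpy as np

def count_sketch(x, k, seed=0):
    """Compress a d-dimensional vector to a k-dimensional Count Sketch.

    A shared seed means all peers use the same hash and sign functions,
    so the sketch is a common linear map and sketch distances track
    model distances.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    buckets = rng.integers(0, k, size=d)      # hash h: [d] -> [k]
    signs = rng.choice([-1.0, 1.0], size=d)   # sign s: [d] -> {+1, -1}
    sketch = np.zeros(k)
    np.add.at(sketch, buckets, signs * x)     # sketch[h(j)] += s(j) * x[j]
    return sketch

# Screening idea: compare sketches instead of full models.
rng = np.random.default_rng(1)
d, k = 100_000, 1_024
own = rng.normal(size=d)
honest = own + 0.01 * rng.normal(size=d)   # a nearby honest update
byzantine = 10.0 * rng.normal(size=d)      # a far-off poisoned update

s_own = count_sketch(own, k)
dist_honest = np.linalg.norm(count_sketch(honest, k) - s_own)
dist_byz = np.linalg.norm(count_sketch(byzantine, k) - s_own)
# Only neighbors whose sketch distance falls under the screening
# threshold would get a full d-dimensional model fetch.
```

Linearity is what makes the screening sound: `count_sketch(x + y) == count_sketch(x) + count_sketch(y)` under a shared seed, and the distortion of pairwise distances is what contributes the $(1+O(\epsilon))$ factor in the effective threshold.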