Defense · 2025

SketchGuard: Scaling Byzantine-Robust Decentralized Federated Learning via Sketch-Based Screening

Murtaza Rangwala 1, Farag Azzedin 1,2, Richard O. Sinnott 1, Rajkumar Buyya 1

1 citation · 35 references · arXiv

Published on arXiv · 2510.07922

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

SketchGuard achieves equivalent Byzantine robustness to BALANCE aggregation (TER deviation ≤0.5pp) while reducing communication by 50–70% and computation by up to 82% via Count Sketch-based neighbor screening.

SketchGuard

Novel technique introduced


Decentralized Federated Learning enables privacy-preserving collaborative training without centralized servers but remains vulnerable to Byzantine attacks. Existing defenses require exchanging high-dimensional model vectors with all neighbors each round, creating prohibitive costs at scale. We propose SketchGuard, which decouples Byzantine filtering from aggregation via sketch-based screening. SketchGuard compresses $d$-dimensional models to $k$-dimensional sketches ($k \ll d$) using Count Sketch, then fetches full models only from accepted neighbors, reducing communication complexity from $O(d|N_i|)$ to $O(k|N_i| + d|S_i|)$, where $|N_i|$ is the neighbor count and $|S_i| \le |N_i|$ is the number of accepted neighbors. We prove convergence in strongly convex and non-convex settings, showing that approximation errors introduce only a $(1+O(\epsilon))$ factor in the effective threshold. Experiments demonstrate SketchGuard maintains state-of-the-art robustness (mean TER deviation $\leq$0.5 percentage points) while reducing computation by up to 82% and communication by 50–70%.


Key Contributions

  • Count Sketch compression decouples Byzantine filtering from aggregation, reducing communication complexity from O(d|N_i|) to O(k|N_i| + d|S_i|) by screening neighbors on compressed sketches before fetching full models
  • Rigorous convergence proofs in strongly convex and non-convex settings showing sketch approximation errors introduce only a (1+O(ε)) factor in the effective filtering threshold
  • Empirical validation demonstrating up to 82% computation reduction and 50–70% communication reduction while maintaining state-of-the-art Byzantine robustness (mean TER deviation ≤0.5 percentage points)
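To make the compression step concrete, here is a minimal Count Sketch in Python. This is an illustrative sketch under stated assumptions, not the paper's implementation: the function name, the shared-`seed` convention, and the use of a single hash table (rather than a median over several) are all choices made here for brevity. Because the transform is linear, the distance between two nodes' sketches estimates the distance between their full $d$-dimensional models, which is what allows screening to run in $k$ dimensions.

```python
import numpy as np

def count_sketch(x, k, seed=0):
    """Compress a d-dimensional vector to k dimensions via Count Sketch.

    Each coordinate i is hashed to a bucket h(i) in [0, k) and multiplied
    by a random sign s(i). Signed collisions cancel in expectation, so
    ||sketch(a) - sketch(b)|| approximates ||a - b|| up to a (1 ± eps)
    factor for suitably large k.
    """
    rng = np.random.default_rng(seed)  # shared seed => identical hash/sign
                                       # functions on every node
    d = x.shape[0]
    buckets = rng.integers(0, k, size=d)        # h: coordinate -> bucket
    signs = rng.choice([-1.0, 1.0], size=d)     # s: coordinate -> sign
    sk = np.zeros(k)
    np.add.at(sk, buckets, signs * x)           # unbuffered scatter-add
    return sk
```

Note that all nodes must draw `buckets` and `signs` from the same seed; otherwise their sketches live in incomparable coordinate systems and distances between them are meaningless.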

🛡️ Threat Analysis

Data Poisoning Attack

SketchGuard defends against Byzantine clients sending arbitrary or crafted model updates to degrade global model performance in decentralized FL — the canonical ML02 threat in federated settings. The defense mechanism (local-consistency filtering via sketch-based screening) is a robust aggregation technique directly targeting Byzantine poisoning attacks.
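A BALANCE-style local-consistency rule applied to sketches might look like the following. This is a hypothetical helper, not the paper's exact algorithm: the name `screen_neighbors`, the single threshold parameter `gamma`, and the norm-relative acceptance test are simplifying assumptions (BALANCE additionally decays its threshold over training rounds). The point is only that neighbors whose sketched models lie too far from the local sketch are rejected before any full $d$-dimensional model is fetched.

```python
import numpy as np

def screen_neighbors(my_sketch, neighbor_sketches, gamma):
    """Accept neighbor j iff ||sketch_j - sketch_i|| <= gamma * ||sketch_i||.

    Only accepted neighbors' full d-dimensional models are subsequently
    fetched and aggregated; rejected (potentially Byzantine) updates cost
    just k floats of communication.
    """
    ref = np.linalg.norm(my_sketch)
    return [j for j, sk in neighbor_sketches.items()
            if np.linalg.norm(sk - my_sketch) <= gamma * ref]
```

Since honest neighbors trained on similar data produce nearby models, a crafted Byzantine update large enough to shift the aggregate is also far from the local sketch and is filtered at the cheap, compressed stage.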


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time · untargeted
Applications
decentralized federated learning · collaborative model training