Beyond Trade-offs: A Unified Framework for Privacy, Robustness, and Communication Efficiency in Federated Learning
Yue Xia 1, Tayyebeh Jahani-Nezhad 2, Rawad Bitar 1
Published on arXiv (arXiv:2508.12978)
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
RobAJoL outperforms existing communication-efficient and Byzantine-robust FL methods augmented with DP in both robustness and model utility across multiple Byzantine attack scenarios on CIFAR-10, Fashion MNIST, and FEMNIST.
RobAJoL
Novel technique introduced
We propose Fed-DPRoC, a novel federated learning framework designed to jointly provide differential privacy (DP), Byzantine robustness, and communication efficiency. Central to our approach is the concept of robust-compatible compression, which reduces the bi-directional communication overhead without undermining the robustness of the aggregation. We instantiate our framework as RobAJoL, which pairs a Johnson-Lindenstrauss (JL)-based compression mechanism with robust averaging. Our theoretical analysis establishes the compatibility of the JL transform with robust averaging, ensuring that RobAJoL maintains robustness guarantees, satisfies DP, and substantially reduces communication overhead. We further present simulation results on CIFAR-10, Fashion MNIST, and FEMNIST, validating our theoretical claims. For a fair comparison, we augment a state-of-the-art communication-efficient and robust FL scheme with DP, and demonstrate that RobAJoL outperforms it in terms of robustness and utility under different Byzantine attacks.
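The "robust-compatible" property rests on the Johnson-Lindenstrauss lemma: a random projection approximately preserves pairwise distances, so distance-based robust aggregation can operate on compressed updates. The following is a minimal NumPy sketch of that distance-preservation effect, not the paper's exact construction; the dimensions `d` and `k` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 512  # original and compressed dimensions (illustrative)

# A Gaussian JL projection matrix, scaled so distances are preserved in expectation.
P = rng.standard_normal((k, d)) / np.sqrt(k)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# JL lemma: ||P x - P y|| ≈ ||x - y|| with high probability for k = O(log n / eps^2),
# which is why robust aggregation rules that compare client updates by distance
# remain meaningful in the compressed domain.
orig = np.linalg.norm(x - y)
comp = np.linalg.norm(P @ x - P @ y)
print(orig, comp)
```

With `k = 512` the relative distortion is typically only a few percent, while the upload shrinks by a factor of roughly `d / k ≈ 20`.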
Key Contributions
- Introduces the concept of 'robust-compatible compression' that reduces bidirectional communication overhead without undermining Byzantine-robust aggregation
- Proposes RobAJoL, instantiating the framework with Johnson-Lindenstrauss compression + robust averaging, with theoretical guarantees for DP, robustness, and communication efficiency simultaneously
- Empirically demonstrates that RobAJoL outperforms state-of-the-art communication-efficient and robust FL baselines (augmented with DP) under multiple Byzantine attack types
🛡️ Threat Analysis
The paper's core security contribution is defending against Byzantine clients in federated learning — malicious participants who manipulate their gradient updates to degrade the global model. RobAJoL proposes a Byzantine-fault-tolerant aggregation scheme and evaluates it under multiple Byzantine attack scenarios, which is the canonical ML02 threat in federated settings.
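To make the ML02 defense concrete, here is a hedged end-to-end sketch of the compress-then-robustly-aggregate pattern: honest clients add Gaussian noise (standing in for a DP mechanism, with an uncalibrated noise scale), all clients upload JL-compressed updates, and the server applies coordinate-wise median as the robust averaging step. The sign-flip attack, the transpose-based decompression, and all constants are assumptions for illustration, not RobAJoL's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 1000, 128            # model dimension and compressed dimension (illustrative)
n_honest, n_byz = 8, 3      # a minority of Byzantine clients

# Shared JL projection matrix (assumed known to clients and server).
P = rng.standard_normal((k, d)) / np.sqrt(k)

true_grad = np.ones(d)      # the "true" update honest clients estimate
sigma = 0.1                 # Gaussian noise scale; a real DP mechanism would calibrate this

# Honest clients: noisy (DP-style) gradient, compressed before upload.
honest = [P @ (true_grad + sigma * rng.standard_normal(d)) for _ in range(n_honest)]
# Byzantine clients: scaled sign-flip attack on the gradient.
byz = [P @ (-10.0 * true_grad) for _ in range(n_byz)]

updates = np.stack(honest + byz)

# Robust averaging in the compressed domain: coordinate-wise median
# discards the outlying Byzantine coordinates as long as honest clients are a majority.
agg = np.median(updates, axis=0)

# Decompress with the transpose (E[P^T P] = I); a common heuristic, assumed here.
recovered = P.T @ agg
print(np.mean(recovered))   # near 1 despite 3 of 11 clients attacking
```

A plain mean over `updates` would instead be dragged toward the attackers' `-10` direction, which is exactly the poisoning failure mode the robust aggregation prevents.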