Defense · 2025

Efficient Byzantine-Robust Privacy-Preserving Federated Learning via Dimension Compression

Xian Qin , Xue Yang , Xiaohu Tang


Published on arXiv: 2509.11870

Data Poisoning Attack

OWASP ML Top 10 — ML02

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Defends against Byzantine clients comprising up to 40% of the network with 25–35x computational overhead reduction compared to ShieldFL while maintaining equivalent privacy guarantees.

JL-PPFL (Johnson-Lindenstrauss Privacy-Preserving Federated Learning)

Novel technique introduced


Federated Learning (FL) allows collaborative model training across distributed clients without sharing raw data, thus preserving privacy. However, the system remains vulnerable to privacy leakage from gradient updates and Byzantine attacks from malicious clients. Existing solutions face a critical trade-off among privacy preservation, Byzantine robustness, and computational efficiency. We propose a novel scheme that effectively balances these competing objectives by integrating homomorphic encryption with dimension compression based on the Johnson-Lindenstrauss transformation. Our approach employs a dual-server architecture that enables secure Byzantine defense in the ciphertext domain while dramatically reducing computational overhead through gradient compression. The dimension compression technique preserves the geometric relationships necessary for Byzantine defense while reducing computational complexity from $O(dn)$ to $O(kn)$ cryptographic operations, where $k \ll d$. Extensive experiments across diverse datasets demonstrate that our approach maintains model accuracy comparable to non-private FL while effectively defending against Byzantine clients comprising up to $40\%$ of the network.
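The abstract's central claim is that a Johnson-Lindenstrauss (JL) projection shrinks each gradient from dimension $d$ to $k \ll d$ while approximately preserving the pairwise distances that Byzantine defenses compare. A minimal NumPy sketch of that property (the dimensions, seed, and Gaussian projection are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 10_000, 200, 8          # original dim, compressed dim, number of clients
grads = rng.normal(size=(n, d))   # stand-ins for client gradient vectors

# Random Gaussian JL projection: entries drawn N(0, 1/k) so squared norms are
# preserved in expectation; k = O(log n / eps^2) suffices for n points.
P = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
compressed = grads @ P            # shape (n, k): downstream cryptographic work
                                  # now scales with k instead of d

# Pairwise distances, which geometric Byzantine defenses rely on, survive the
# compression up to a small relative error.
orig = np.linalg.norm(grads[0] - grads[1])
comp = np.linalg.norm(compressed[0] - compressed[1])
print(f"relative distortion: {abs(comp - orig) / orig:.3f}")
```

Because every per-coordinate ciphertext operation now runs over $k$ instead of $d$ coordinates, the $O(dn) \to O(kn)$ reduction follows directly.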


Key Contributions

  • Novel dual-server FL architecture combining Johnson-Lindenstrauss dimension compression with additive masking and Paillier homomorphic encryption to simultaneously achieve Byzantine robustness and gradient privacy.
  • Dimension compression reduces cryptographic computation complexity from O(dn) to O(kn), achieving a 25–35x reduction in computational overhead and a 17x reduction in communication overhead over the non-compressed baseline.
  • Maintains Byzantine robustness comparable to plaintext FLTrust and privacy guarantees comparable to ShieldFL, making secure FL practical for large-scale neural networks.
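The additive-masking side of the first contribution can be illustrated with a toy two-server secret-sharing scheme. This is a hedged sketch of the general dual-server masking idea, not the paper's exact protocol (which combines masking with Paillier homomorphic encryption):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 16                        # clients, compressed gradient dimension
grads = rng.normal(size=(n, k))     # stand-ins for compressed client gradients

# Each client additively splits its gradient into two random-looking shares,
# one per server; neither server alone learns anything about the gradient.
masks = rng.normal(size=(n, k))
shares_A = grads + masks            # held by server A
shares_B = -masks                   # held by server B

# Each server sums its shares locally; combining the two partial sums recovers
# exactly the sum of all gradients, with every mask cancelling out.
agg = shares_A.sum(axis=0) + shares_B.sum(axis=0)
assert np.allclose(agg, grads.sum(axis=0))
```

The non-collusion of the two servers is what makes each share individually uninformative while the aggregate remains exact.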

🛡️ Threat Analysis

Data Poisoning Attack

The paper's core contribution is defending against Byzantine clients that send corrupted gradient updates to degrade the global model: it explicitly targets Byzantine-fault-tolerant aggregation in FL, tolerating malicious participants that constitute up to 40% of the network.
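The paper reports robustness comparable to plaintext FLTrust, whose aggregation rule weights each client update by a ReLU-clipped cosine similarity to a trusted server-side gradient computed on a small clean root dataset. A simplified plaintext sketch of that rule (the helper name and the toy data are illustrative, not from the paper):

```python
import numpy as np

def trust_weighted_aggregate(client_grads, server_grad):
    """FLTrust-style aggregation: weight each client update by a ReLU-clipped
    cosine similarity to a trusted server gradient, and rescale each update
    to the server gradient's magnitude before the weighted average."""
    s_norm = np.linalg.norm(server_grad)
    scores, rescaled = [], []
    for g in client_grads:
        cos = float(g @ server_grad) / (np.linalg.norm(g) * s_norm + 1e-12)
        scores.append(max(cos, 0.0))   # negative similarity -> zero trust
        rescaled.append(g * (s_norm / (np.linalg.norm(g) + 1e-12)))
    scores = np.array(scores)
    if scores.sum() == 0:              # no trusted client: fall back to server
        return server_grad
    return np.average(rescaled, axis=0, weights=scores)

rng = np.random.default_rng(2)
honest = [np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2) for _ in range(6)]
byz = [np.array([-50.0, -50.0]) for _ in range(4)]   # 40% poisoned updates
server_ref = np.array([1.0, 1.0])                    # gradient from a small clean root dataset
agg = trust_weighted_aggregate(honest + byz, server_ref)
print(agg)  # stays close to the honest direction despite 40% Byzantine clients
```

The paper's contribution is performing a comparable similarity-based defense over compressed, encrypted gradients rather than in plaintext as here.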

Model Inversion Attack

The paper's privacy-preservation axis directly addresses gradient leakage and reconstruction attacks (citing DLG and similar works): homomorphic encryption and additive masking prevent adversaries from inferring sensitive training data from shared gradients, a classic gradient-inversion defense.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Applications
federated learning, distributed model training, edge device collaboration