defense 2025

Lossless Privacy-Preserving Aggregation for Decentralized Federated Learning

Xiaoye Miao, Bin Li, Yan Zhang, Xinkui Zhao, Yangyang Wu


Published on arXiv: 2501.04409

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

LPPA provides √2 times greater privacy-preserving capacity than differential privacy noise addition while achieving model accuracy comparable to standard DFL without noise injection, with a 14% mean accuracy improvement over DP.

LPPA (Lossless Privacy-Preserving Aggregation)

Novel technique introduced


Privacy concerns arise as sensitive data proliferate. Although decentralized federated learning (DFL) aggregates gradients from neighbors to avoid direct data transmission, it still risks indirect data leakage through the transmitted gradients. Existing privacy-preserving methods for DFL add noise to gradients; they either diminish the model's predictive accuracy or provide ineffective gradient protection. In this paper, we propose a novel lossless privacy-preserving aggregation rule named LPPA that enhances gradient protection without any loss of DFL model predictive accuracy. LPPA subtly injects the difference between the sent and received noise into the transmitted gradients. This noise difference incorporates neighbors' randomness for each client, effectively safeguarding against data leaks. LPPA employs noise flow conservation theory to ensure that the noise impact can be globally eliminated: the global sum of all noise differences remains zero, so accurate gradient aggregation is unaffected and model accuracy remains intact. We theoretically prove that the privacy-preserving capacity of LPPA is √2 times greater than that of noise addition, while maintaining model accuracy comparable to standard DFL aggregation without noise injection. Experimental results verify the theoretical findings and show that LPPA achieves a 14% mean improvement in accuracy over noise addition. We also demonstrate the effectiveness of LPPA in protecting raw data and guaranteeing lossless model accuracy.


Key Contributions

  • LPPA aggregation rule that injects the noise difference between sent and received noise into gradients, incorporating neighbors' randomness to obscure local gradients without degrading model accuracy
  • Noise flow conservation theory proving that the global sum of all noise differences equals zero, ensuring accurate gradient aggregation and lossless model accuracy
  • Theoretical proof that LPPA's privacy-preserving capacity is √2 times greater than standard DP noise addition, with 14% mean accuracy improvement over DP in experiments
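The noise-flow-conservation idea above can be checked numerically. The sketch below is a minimal illustration, not the paper's implementation: it assumes a ring topology where each client sends its noise to one neighbor, then transmits its gradient plus the difference between the noise it sent and the noise it received. The pairwise differences cancel globally, so the aggregate equals the true gradient sum while every individual transmitted gradient stays masked.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients = 4   # illustrative network size
dim = 5         # illustrative gradient dimension

# Each client's true local gradient.
true_grads = [rng.normal(size=dim) for _ in range(n_clients)]

# Each client i draws noise and sends it to neighbor (i + 1) % n_clients
# (a ring topology, assumed here for simplicity).
noise = [rng.normal(size=dim) for _ in range(n_clients)]

# Client i transmits: gradient + (noise it sent) - (noise it received).
perturbed = [
    true_grads[i] + noise[i] - noise[(i - 1) % n_clients]
    for i in range(n_clients)
]

# Noise flow conservation: summed over all clients, every noise vector
# appears once with + and once with -, so the differences cancel.
agg_perturbed = np.sum(perturbed, axis=0)
agg_true = np.sum(true_grads, axis=0)

print("Individual gradient masked:", not np.allclose(perturbed[0], true_grads[0]))
print("Aggregate is lossless:     ", np.allclose(agg_perturbed, agg_true))
```

Because each transmitted gradient carries a neighbor's randomness, no single received message reveals the true local gradient, yet the aggregation step loses nothing, which is the "lossless" property the paper proves.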

🛡️ Threat Analysis

Model Inversion Attack

The threat model is an adversarial DFL client that reconstructs a neighbor's raw training data from received gradients (data reconstruction / gradient inversion attack). LPPA is a secure aggregation defense that injects noise differences to obscure gradients while using noise flow conservation to cancel the noise globally, directly defending against gradient-based training data reconstruction in federated learning.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, white_box
Applications
decentralized federated learning, edge computing, distributed machine learning