DictPFL: Efficient and Private Federated Learning on Encrypted Gradients
Jiaqi Xue 1, Mayank Kumar 1, Yuzhang Shang 1, Shangqian Gao 2, Rui Ning 3, Mengxin Zheng 1, Xiaoqian Jiang 4, Qian Lou 1
Published on arXiv
2510.21086
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
DictPFL reduces communication cost by 402–748× and training time by 28–65× versus fully encrypted FL, while achieving runtime within 2× of plaintext FL with complete gradient protection.
DictPFL
Novel technique introduced
Federated Learning (FL) enables collaborative model training across institutions without sharing raw data. However, gradient sharing still risks privacy leakage through attacks such as gradient inversion. Homomorphic Encryption (HE) can secure aggregation but often incurs prohibitive computational and communication overhead. Existing HE-based FL methods sit at two extremes: encrypting all gradients for full privacy at high cost, or partially encrypting gradients to save resources while exposing vulnerabilities. We present DictPFL, a practical framework that achieves full gradient protection with minimal overhead. DictPFL encrypts every transmitted gradient while keeping non-transmitted parameters local, preserving privacy without heavy computation. It introduces two key modules. Decompose-for-Partial-Encrypt (DePE) decomposes model weights into a static dictionary and an updatable lookup table; only the lookup table is encrypted and aggregated, while the static dictionary remains local and requires neither sharing nor encryption. Prune-for-Minimum-Encrypt (PrME) applies encryption-aware pruning to minimize the number of encrypted parameters via consistent, history-guided masks. Experiments show that DictPFL reduces communication cost by 402–748× and accelerates training by 28–65× compared to fully encrypted FL, while outperforming state-of-the-art selective encryption methods by 51–155× in overhead and 4–19× in speed. Remarkably, DictPFL's runtime is within 2× of plaintext FL, demonstrating for the first time that HE-based private federated learning is practical for real-world deployment. The code is publicly available at https://github.com/UCF-ML-Research/DictPFL.
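The DePE decomposition can be sketched numerically. The layer size, rank `r`, and variable names below are illustrative assumptions, not values from the paper; the point is only that the encrypted/transmitted payload shrinks from the full weight gradient to the much smaller lookup-table gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a 768x768 weight factored as W ~= D @ T, with a
# static dictionary D (768 x r) kept local in plaintext and a small
# updatable lookup table T (r x 768) that is the only part encrypted
# and aggregated. r = 16 is an illustrative rank, not from the paper.
d_out, d_in, r = 768, 768, 16
D = rng.standard_normal((d_out, r))   # static dictionary: never transmitted
T = rng.standard_normal((r, d_in))    # lookup table: encrypted & shared

W = D @ T  # effective layer weight used in the forward pass

# Parameters that must be encrypted/communicated per round:
full = d_out * d_in   # fully encrypted FL: every weight gradient
depe = r * d_in       # DePE: only the lookup-table gradient
print(f"encrypted params: {full} -> {depe} ({full // depe}x reduction)")
# -> encrypted params: 589824 -> 12288 (48x reduction)
```

The reduction factor here is purely a function of the assumed rank; the paper's reported 402–748× savings come from its actual decomposition combined with PrME pruning.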
Key Contributions
- Decompose-for-Partial-Encrypt (DePE) module that separates model weights into a static local dictionary and an updatable lookup table, encrypting only the lookup table to minimize HE overhead
- Prune-for-Minimum-Encrypt (PrME) module that applies encryption-aware, history-guided pruning masks to further reduce the volume of encrypted parameters
- First demonstration that HE-based FL is practically deployable, achieving runtime within 2× of plaintext FL while providing full gradient protection against gradient inversion attacks
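The PrME idea of a consistent, history-guided mask can be sketched as follows. The EMA decay, keep ratio, and function name are illustrative assumptions, not details from the paper; the sketch only shows how a magnitude history yields a shared keep-mask so that every client encrypts the same reduced set of entries:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch: maintain a running (EMA) history of gradient
# magnitudes and keep only the top-k entries by history, so all clients
# encrypt and transmit the same, smaller subset of lookup-table gradients.
n_params, keep_ratio, decay = 1024, 0.25, 0.9
history = np.zeros(n_params)

def prme_mask(grad, history, keep_ratio=keep_ratio, decay=decay):
    """Update the magnitude history and return a boolean keep-mask."""
    history = decay * history + (1 - decay) * np.abs(grad)
    k = int(keep_ratio * grad.size)
    keep = np.zeros(grad.size, dtype=bool)
    keep[np.argsort(history)[-k:]] = True  # keep largest-history entries
    return keep, history

grad = rng.standard_normal(n_params)
mask, history = prme_mask(grad, history)
encrypted = grad[mask]  # only these entries are encrypted and aggregated
print(encrypted.size, "of", n_params, "gradients encrypted")
# -> 256 of 1024 gradients encrypted
```

Because the mask is derived from shared history rather than each round's local gradients, clients stay aligned on which slots to encrypt, which keeps HE aggregation well-defined across rounds.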
🛡️ Threat Analysis
The paper explicitly defends against gradient inversion attacks in federated learning, in which an adversary reconstructs private training data from shared gradients. Under the ML03 classification, secure-aggregation protocols for FL that defend against gradient leakage qualify even when the primary contribution is a systems optimization, and the paper explicitly names gradient inversion as its threat model.