defense 2025

Privacy-Preserving Federated Learning from Partial Decryption Verifiable Threshold Multi-Client Functional Encryption

Minjie Wang 1, Jinguang Han 1, Weizhi Meng 2

0 citations · 31 references · Published on arXiv (arXiv:2511.12936)

Model Inversion Attack

OWASP ML Top 10 — ML03

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

VTSAFL achieves equivalent model accuracy to prior schemes while reducing total training time by over 40% and communication overhead by up to 50% on MNIST.

VTSAFL

Novel technique introduced


In federated learning, multiple parties cooperate to train a model without directly exchanging their private data, but the gradient leakage problem still threatens privacy and model integrity. Although existing schemes use threshold cryptography to mitigate inference attacks, they cannot guarantee the verifiability of aggregation results, leaving the system vulnerable to poisoning attacks. We construct a partial decryption verifiable threshold multi-client functional encryption scheme and apply it to federated learning to build a verifiable threshold secure aggregation protocol (VTSAFL). VTSAFL enables clients to verify aggregation results while minimizing both computational and communication overhead. The functional keys and partial decryption results of the scheme are constant-size, which guarantees efficiency for large-scale deployment. Experimental results on the MNIST dataset show that VTSAFL achieves the same accuracy as existing schemes while reducing total training time by more than 40% and communication overhead by up to 50%. This efficiency is critical for overcoming the resource constraints inherent in Internet of Things (IoT) devices.


Key Contributions

  • Partial decryption verifiable threshold multi-client functional encryption (MCFE) scheme with DLEQ correctness proofs, enabling clients to verify aggregator computations
  • VTSAFL protocol integrating threshold encryption and verifiability into federated learning, simultaneously defending against gradient leakage and malicious aggregator poisoning
  • Constant-size functional keys and partial decryption results, reducing training time by 40% and communication overhead by 50% versus prior threshold MCFE-based FL schemes

🛡️ Threat Analysis

Data Poisoning Attack

The secondary threat addressed is poisoning by an adversarial aggregator, which may tamper with aggregation results to corrupt the global model. The DLEQ-based verification mechanism lets clients detect and discard maliciously altered aggregation outputs, defending against this form of Byzantine/poisoning attack at the FL aggregation step.
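The DLEQ (discrete-log equality, Chaum-Pedersen style) proof underlying this verification can be sketched as follows. This is a minimal illustration over deliberately tiny toy parameters, not the paper's construction; all names and group parameters here are illustrative assumptions:

```python
import hashlib

# Toy Schnorr group: the order-q subgroup of squares in Z_p^*, with
# p = 2q + 1 = 2039, q = 1019.  INSECURE toy parameters, illustration
# only; a real deployment uses a cryptographically large group.
P, Q = 2039, 1019
G1, G2 = 4, 9  # two generators of the order-q subgroup

def _challenge(*elems):
    # Fiat-Shamir: hash all public values into a challenge scalar.
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def dleq_prove(x, k):
    """Prove log_G1(h1) == log_G2(h2) (== x) without revealing x.
    k is the prover's one-time random nonce."""
    h1, h2 = pow(G1, x, P), pow(G2, x, P)
    t1, t2 = pow(G1, k, P), pow(G2, k, P)   # commitments
    c = _challenge(G1, G2, h1, h2, t1, t2)  # challenge
    s = (k + c * x) % Q                     # response
    return h1, h2, (t1, t2, s)

def dleq_verify(h1, h2, proof):
    t1, t2, s = proof
    c = _challenge(G1, G2, h1, h2, t1, t2)
    return (pow(G1, s, P) == t1 * pow(h1, c, P) % P and
            pow(G2, s, P) == t2 * pow(h2, c, P) % P)
```

Because any change to the response or to the public values breaks the verification equations, a client can reject a forged partial decryption without learning the prover's secret.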

Model Inversion Attack

The primary motivation is the gradient leakage problem: adversaries can reconstruct clients' private training data from shared gradients (as demonstrated by the gradient inversion attacks of Zhu et al. and Geiping et al.). The threshold MCFE scheme ensures that no single aggregator can access an individual client's gradients, directly defending against data reconstruction from gradients in FL.
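The paper's threshold MCFE construction is considerably more involved, but the core property, that the aggregator only ever sees masked gradients whose masks cancel in the sum, can be sketched with pairwise masking. All names here are illustrative assumptions, and integer-quantized gradients modulo M stand in for the scheme's encodings:

```python
import random

M = 2**32  # modulus for integer-quantized gradient entries

def masked_updates(gradients, round_id=0):
    """Each pair of clients (i, j) derives a shared mask that client i
    adds and client j subtracts, so each individual update looks random
    to the aggregator while the masks cancel in the sum."""
    n, dim = len(gradients), len(gradients[0])
    masked = [[v % M for v in g] for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            # Stand-in for a pairwise shared secret (e.g. from a DH exchange).
            rng = random.Random(f"{round_id}:{i}:{j}")
            for d in range(dim):
                r = rng.randrange(M)
                masked[i][d] = (masked[i][d] + r) % M
                masked[j][d] = (masked[j][d] - r) % M
    return masked

def aggregate(masked):
    # The server sums the masked vectors; pairwise masks cancel exactly.
    return [sum(col) % M for col in zip(*masked)]
```

For example, `aggregate(masked_updates([[1, 2], [3, 4], [5, 6]]))` recovers the plain sum `[9, 12]` even though no individual masked vector reveals its client's gradient. The paper's scheme additionally makes this aggregation threshold-based and verifiable.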


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Datasets
MNIST
Applications
federated learning, iot device training