
TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning

Runhua Xu 1,2, Bo Li 1,2, Chao Li 3, James B.D. Joshi 4, Shuai Ma 1, Jianxin Li 1,2

Published on arXiv (2501.05053) · in IEEE Transactions on Depend...

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

TAPFed reduces transmission overhead by 29–45% versus baselines while providing formal security guarantees against gradient inference (disaggregation) attacks from malicious aggregators, which the majority of existing approaches cannot withstand.

TAPFed

Novel technique introduced


Federated learning is a computing paradigm that enhances privacy by enabling multiple parties to collaboratively train a machine learning model without revealing personal data. However, current research indicates that traditional federated learning platforms are unable to ensure privacy, due to leakage caused by the exchange of gradients. To achieve privacy-preserving federated learning, integrating secure aggregation mechanisms is essential. Unfortunately, existing solutions are vulnerable to recently demonstrated inference attacks such as the disaggregation attack. This paper proposes TAPFed, an approach for achieving privacy-preserving federated learning in the context of multiple decentralized aggregators with malicious actors. TAPFed uses a proposed threshold functional encryption scheme and tolerates a certain number of malicious aggregators while maintaining security and privacy. We provide formal security and privacy analyses of TAPFed and compare it to various baselines through experimental evaluation. Our results show that TAPFed offers model quality equivalent to state-of-the-art approaches while reducing transmission overhead by 29–45% across different model training scenarios. Most importantly, TAPFed can defend against recently demonstrated inference attacks mounted by curious aggregators, to which the majority of existing approaches are susceptible.


Key Contributions

  • Proposes a threshold functional encryption scheme for privacy-preserving federated learning that tolerates a bounded number of malicious aggregators without requiring honest-but-curious or peer-to-peer trust assumptions
  • Defends against disaggregation and gradient inference attacks from curious aggregators that defeat most existing secure aggregation approaches
  • Reduces transmission overhead by 29–45% compared to state-of-the-art baselines while maintaining equivalent model quality
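The threshold property at the heart of these contributions can be illustrated with a minimal sketch. Note this uses plain Shamir secret sharing over a prime field, not TAPFed's actual threshold functional encryption scheme; it only demonstrates the t-of-n idea that any t aggregators can jointly recover the *aggregate* gradient while fewer than t learn nothing about any individual client's update. All names and parameters here are illustrative.

```python
import random

PRIME = 2**61 - 1  # prime field for share arithmetic

def share(secret, t, n):
    # Shamir t-of-n sharing: random degree-(t-1) polynomial with f(0) = secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Two clients, each holding one quantized gradient coordinate
g1, g2 = 17, 25
t, n = 3, 5  # any 3 of 5 aggregators can recover the SUM; 2 or fewer learn nothing

shares1, shares2 = share(g1, t, n), share(g2, t, n)
# Each aggregator adds the shares it received; individual gradients stay hidden
summed = [(shares1[k][0], (shares1[k][1] + shares2[k][1]) % PRIME)
          for k in range(n)]
assert reconstruct(summed[:t]) == g1 + g2  # threshold reconstruction of the aggregate
```

Because sharing is additively homomorphic, each aggregator only ever holds a share of the sum; the design question TAPFed answers is how to get this threshold behavior with the lower transmission overhead of functional encryption rather than pairwise share distribution.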

🛡️ Threat Analysis

Model Inversion Attack

The paper's primary threat model is curious/malicious aggregators reconstructing private training data from shared gradients (gradient inference / disaggregation attacks). TAPFed's threshold functional encryption scheme is specifically designed to prevent this data reconstruction from observed gradients — the canonical ML03 threat in federated learning.
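The disaggregation attack mentioned above can be sketched numerically. This is a generic illustration (not reproduced from the paper) under a simplifying assumption that per-client gradients are static across rounds: an aggregator that only ever sees per-round sums can still recover individual contributions once the participation pattern across rounds forms an invertible system.

```python
import numpy as np

# Hidden per-client gradient values (a single coordinate, for clarity)
client_grads = np.array([3.0, -1.5, 2.25])

# Which clients participated in each round (rows = rounds, cols = clients)
participation = np.array([[1, 1, 0],
                          [0, 1, 1],
                          [1, 0, 1]], dtype=float)

# The aggregator legitimately observes only the per-round sums...
observed_sums = participation @ client_grads

# ...but once the participation matrix is invertible, a linear solve
# disaggregates the sums back into individual client gradients.
recovered = np.linalg.solve(participation, observed_sums)
assert np.allclose(recovered, client_grads)
```

Schemes that protect only a single round's sum are defeated exactly this way; TAPFed's threshold design aims to keep any individual or partial aggregate from being exposed to fewer than the threshold number of aggregators in the first place.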


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time · white_box
Applications
federated learning · privacy-preserving model training