defense 2026

Balancing Privacy-Quality-Efficiency in Federated Learning through Round-Based Interleaving of Protection Techniques

Yenan Wang, Carla Fabiana Chiasserini, Elad Michael Schiller



Published on arXiv: 2603.05158

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Privacy Interleaving (PI) achieves the most balanced privacy-quality-efficiency trade-offs at high protection levels, outperforming baselines that apply DP and HE in every round, while DP-based interleaving is preferable at intermediate privacy requirements.

Alt-FL (Privacy Interleaving / Synthetic Interleaving)

Novel technique introduced


Abstract

In federated learning (FL), balancing privacy protection, learning quality, and efficiency remains a challenge. Privacy protection mechanisms either degrade learning quality, as with Differential Privacy (DP), or incur substantial system overhead, as with Homomorphic Encryption (HE). To address this, we propose Alt-FL, a privacy-preserving FL framework that combines DP, HE, and synthetic data via a novel round-based interleaving strategy. Alt-FL introduces three new methods, Privacy Interleaving (PI), Synthetic Interleaving with DP (SI/DP), and Synthetic Interleaving with HE (SI/HE), that enable flexible quality-efficiency trade-offs while providing privacy protection. We systematically evaluate Alt-FL against representative reconstruction attacks, including Deep Leakage from Gradients, Inverting Gradients, When the Curious Abandon Honesty, and Robbing the Fed, using a LeNet-5 model on CIFAR-10 and Fashion-MNIST. To enable fair comparison between DP- and HE-based defenses, we introduce a new attacker-centric framework that compares empirical attack success rates across the three proposed interleaving methods. Our results show that, for the studied attacker model and datasets, PI achieves the most balanced trade-offs at high privacy protection levels, while DP-based methods are preferable at intermediate privacy requirements. We also discuss how such results can serve as a basis for selecting privacy-preserving FL methods under varying privacy and resource constraints.
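To make the DP side of the trade-off concrete, the following is a minimal sketch (not the paper's code) of the standard client-side recipe DP-based FL defenses build on: clip the model update to bound per-client sensitivity, then add Gaussian noise calibrated to the clip norm. The parameter names and values here are illustrative assumptions.

```python
import numpy as np

def dp_protect_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client update and add Gaussian noise (Gaussian mechanism)."""
    rng = rng or np.random.default_rng()
    # Clip the update so no single client can contribute more than clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Add noise scaled to the sensitivity bound; larger sigma means more
    # privacy but a noisier (lower-quality) aggregate model.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

update = np.ones(4) * 10.0  # raw client update with a large norm (20.0)
protected = dp_protect_update(update, clip_norm=1.0, noise_multiplier=0.0)
print(np.linalg.norm(protected))  # with zero noise, exactly the clip norm: 1.0
```

The quality loss the abstract mentions comes directly from the `noise_multiplier` term: every round of DP protection perturbs the aggregate, which is what motivates not applying it in every round.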


Key Contributions

  • Alt-FL framework that interleaves DP, Selective Homomorphic Encryption, and synthetic data across FL rounds via three novel strategies (PI, SI/DP, SI/HE) to balance the privacy-quality-efficiency trade-off
  • Attacker-centric evaluation framework that defines empirical privacy protection levels based on attack success rates, enabling fair comparison of DP- and HE-based defenses against four state-of-the-art gradient reconstruction attacks
  • Empirical analysis showing PI achieves the best trade-offs at high privacy requirements while DP-based methods are preferable at intermediate privacy levels
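The round-based interleaving idea behind the three strategies can be sketched as a simple schedule that maps each FL round to a protection mechanism. The alternation pattern and the `period` parameter below are our own illustration; the paper's actual schedules and mixing ratios may differ.

```python
def protection_for_round(t, strategy, period=2):
    """Return which protection applies in FL round t (0-indexed).

    Illustrative schedule: PI alternates DP and HE rounds, while the
    SI variants interleave synthetic-data rounds with DP or HE rounds.
    """
    if strategy == "PI":      # Privacy Interleaving: alternate DP and HE
        return "DP" if t % period == 0 else "HE"
    if strategy == "SI/DP":   # Synthetic Interleaving with DP
        return "synthetic" if t % period == 0 else "DP"
    if strategy == "SI/HE":   # Synthetic Interleaving with HE
        return "synthetic" if t % period == 0 else "HE"
    raise ValueError(f"unknown strategy: {strategy}")

schedule = [protection_for_round(t, "PI") for t in range(4)]
print(schedule)  # ['DP', 'HE', 'DP', 'HE']
```

The appeal of such a schedule is that HE's computational overhead and DP's utility loss are each paid only in a fraction of rounds, rather than in every round as in the baselines the paper compares against.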

🛡️ Threat Analysis

Model Inversion Attack

The paper's primary security contribution is defending against gradient inversion / data reconstruction attacks (DLG, Inverting Gradients, CAH, Robbing the Fed) where an adversary reconstructs clients' private training data from shared FL gradients. Alt-FL's DP/HE/synthetic-data interleaving is explicitly evaluated by measuring empirical attack success rates of these reconstruction adversaries.
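To illustrate why shared gradients leak training data at all, here is a toy sketch of the analytic leakage that attacks like Robbing the Fed exploit: for a fully connected layer with bias, the weight gradient is an outer product of the error signal and the input, so a single sample's input can be recovered exactly from its gradients. This is a simplified demonstration, not any of the paper's evaluated attacks, which target deep networks with optimization-based gradient matching.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # client's private input
W, b = rng.normal(size=(2, 4)), np.zeros(2)

# Forward pass through one linear layer, squared-error loss vs. some target.
y = W @ x + b
target = np.ones(2)
err = 2.0 * (y - target)                  # dL/dy

grad_W = np.outer(err, x)                 # dL/dW = err * x^T  (shared in FL)
grad_b = err                              # dL/db = err        (shared in FL)

# Attacker: one neuron's weight-gradient row divided by its bias gradient
# returns the private input exactly (assuming the bias gradient is nonzero).
x_rec = grad_W[0] / grad_b[0]
print(np.allclose(x_rec, x))              # True
```

Defenses in Alt-FL break this recovery in different ways: DP noise corrupts `grad_W` and `grad_b` before sharing, HE hides them from the server entirely, and synthetic-data rounds ensure the recovered `x` is not a real client sample.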


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
white_box, training_time
Datasets
CIFAR-10, Fashion-MNIST
Applications
federated learning, privacy-critical ml (healthcare, banking)