Defense · 2025

Advancing Practical Homomorphic Encryption for Federated Learning: Theoretical Guarantees and Efficiency Optimizations

Ren-Yi Huang¹, Dumindu Samaraweera², Prashant Shekhar², J. Morris Chang¹

1 citation · 43 references

Published on arXiv · 2509.20476

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

The BCRLB framework theoretically characterizes the minimum encryption ratio required to achieve a target level of resistance against gradient reconstruction, enabling principled, computationally efficient selective encryption design.
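In its standard form (notation assumed here, since the paper's exact statement is not reproduced above), the Bayesian Cramér-Rao Lower Bound bounds any estimator's error covariance from below by the inverse Bayesian information matrix, which splits into a data term and a prior term:

```latex
% Standard Bayesian CRLB (notation assumed, not copied from the paper):
% \hat{x}(g) is any reconstruction of the private data x from gradients g.
\mathbb{E}\!\left[(\hat{x}-x)(\hat{x}-x)^{\top}\right] \succeq J_B^{-1},
\qquad
J_B \;=\; \underbrace{\mathbb{E}_{x}\!\left[\mathcal{I}_g(x)\right]}_{\text{data term (exposed gradients)}}
\;+\; \underbrace{\mathbb{E}\!\left[-\nabla_x^{2}\log p(x)\right]}_{\text{prior term}}
```

Encrypting a gradient component removes its contribution to the data term, shrinking J_B and hence raising the bound tr(J_B⁻¹) on expected reconstruction error — which is how a target resistance level translates into a minimum encryption ratio.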

Selective Homomorphic Encryption with BCRLB Analysis

Novel technique introduced


Federated Learning (FL) enables collaborative model training while preserving data privacy by keeping raw data locally stored on client devices, preventing access from other clients or the central server. However, recent studies reveal that sharing model gradients creates vulnerability to Model Inversion Attacks, particularly Deep Leakage from Gradients (DLG), which reconstructs private training data from shared gradients. While Homomorphic Encryption has been proposed as a promising defense mechanism to protect gradient privacy, fully encrypting all model gradients incurs high computational overhead. Selective encryption approaches aim to balance privacy protection with computational efficiency by encrypting only specific gradient components. However, the existing literature largely overlooks a theoretical exploration of the spectral behavior of encrypted versus unencrypted parameters, relying instead primarily on empirical evaluations. To address this gap, this paper presents a framework for theoretical analysis of the underlying principles of selective encryption as a defense against model inversion attacks. We then provide a comprehensive empirical study that identifies and quantifies the critical factors, such as model complexity, encryption ratios, and exposed gradients, that influence defense effectiveness. Our theoretical framework clarifies the relationship between gradient selection and privacy preservation, while our experimental evaluation demonstrates how these factors shape the robustness of defenses against model inversion attacks. Collectively, these contributions advance the understanding of selective encryption mechanisms and offer principled guidance for designing efficient, scalable, privacy-preserving federated learning systems.
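The DLG attack referenced above normally proceeds by gradient matching: optimizing dummy inputs until their gradients reproduce the observed ones. For a single linear layer the leakage is even starker — when the bias gradient is exposed, the private input can be read off in closed form. A minimal NumPy sketch (illustrative only; the dimensions, squared loss, and variable names are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                              # input dim, output dim (assumed)
W = rng.normal(size=(k, d))              # shared layer weights (known to attacker)
b = rng.normal(size=k)                   # shared bias (known to attacker)
x_true = rng.normal(size=d)              # private client input
y_true = rng.normal(size=k)              # private target

# Client computes gradients of the squared loss ||W x + b - y||^2 and shares them.
r = W @ x_true + b - y_true              # residual
g_W = 2.0 * np.outer(r, x_true)          # dL/dW, shared in plaintext
g_b = 2.0 * r                            # dL/db, shared in plaintext

# Attacker side: with the bias gradient exposed, inversion is closed-form.
r_hat = g_b / 2.0                        # residual recovered from bias gradient
i = int(np.argmax(np.abs(r_hat)))        # pick the best-conditioned row
x_rec = g_W[i] / (2.0 * r_hat[i])        # each row of g_W equals 2 * r_i * x
y_rec = W @ x_rec + b - r_hat            # the private target falls out as well
```

This is why encrypting even a subset of the gradient matters: the attack needs consistent access to the exposed components, and deeper nonlinear models require the iterative gradient-matching variant rather than this one-shot read-off.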


Key Contributions

  • Novel theoretical framework using the Bayesian Cramér-Rao Lower Bound (BCRLB) to analyze the effectiveness of selective homomorphic encryption as a defense against gradient reconstruction attacks
  • Identification and quantification of key factors — encryption ratio, model complexity, and exposed gradient components — that govern defense robustness against DLG-style model inversion attacks
  • Comprehensive empirical validation that correlates the theoretical predictions with measured defense performance across varying encryption configurations
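As a concrete sketch of the selective-encryption mechanism these contributions analyze, the snippet below marks a fixed fraction of gradient entries for encryption and leaves the rest in plaintext. Magnitude-based ranking is an assumption here — a common heuristic in the selective-encryption literature — whereas the paper's BCRLB framework derives the ratio and selection in a principled way:

```python
import numpy as np

def selection_mask(grad, ratio):
    """Mark the top-`ratio` fraction of gradient entries (by magnitude)
    for homomorphic encryption; the rest are shared in plaintext.

    Magnitude-based selection is an illustrative heuristic, not the
    paper's BCRLB-guided criterion.
    """
    flat = np.abs(np.asarray(grad, dtype=float)).ravel()
    k = max(1, int(np.ceil(ratio * flat.size)))   # number of entries to encrypt
    thresh = np.partition(flat, -k)[-k]           # k-th largest magnitude
    return np.abs(grad) >= thresh                 # True = encrypt, False = expose

grad = np.array([0.1, -0.5, 0.3, 2.0, -0.05, 0.7, 0.2, -1.1])
mask = selection_mask(grad, ratio=0.25)           # encrypt 2 of 8 components
```

In a real pipeline the `True` entries would be CKKS/Paillier-encrypted before upload; the encryption `ratio` is exactly the knob the BCRLB analysis sizes against a target reconstruction-error bound.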

🛡️ Threat Analysis

Model Inversion Attack

The paper explicitly targets gradient leakage / Model Inversion Attacks (specifically DLG — Deep Leakage from Gradients), where an adversary reconstructs private training data from shared FL gradients. The proposed selective homomorphic encryption defense and its BCRLB-based theoretical framework are directly designed to raise the lower bound on reconstruction error for this specific adversarial threat.
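To make the "raise the lower bound" intuition tangible, the toy model below treats each exposed gradient component as a noisy Gaussian observation of one private coordinate under a Gaussian prior; the Bayesian information then splits into a per-coordinate data term (exposed components only) plus a prior term. All distributions and variances here are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

def bcrlb_trace(exposed, sigma2=0.01, tau2=1.0):
    """Trace of the Bayesian CRLB for a toy model: each exposed gradient
    component is a Gaussian observation (noise variance sigma2) of one
    private coordinate, under an independent Gaussian prior (variance tau2).

    exposed: boolean array, True where the component is sent in plaintext.
    Encrypted components contribute no data term, only the prior term, so
    the reconstruction-error bound rises as more entries are encrypted.
    """
    J = exposed.astype(float) / sigma2 + 1.0 / tau2   # per-coordinate Bayesian info
    return float(np.sum(1.0 / J))                      # tr(J_B^{-1})

n = 100
all_plain = np.ones(n, dtype=bool)     # 0% encrypted
half = np.arange(n) < 50               # 50% encrypted
none = np.zeros(n, dtype=bool)         # 100% encrypted
```

Evaluating the three masks shows the bound increasing monotonically with the encryption ratio — the qualitative behavior the BCRLB framework formalizes for real gradient distributions.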


Details

Domains
federated-learning
Model Types
federated · cnn
Threat Tags
white_box · training_time
Applications
federated learning · privacy-preserving distributed model training