Towards Privacy-Preserving Federated Learning using Hybrid Homomorphic Encryption
Ivan Costa, Pedro Correia, Ivone Amorim, Eva Maia, Isabel Praça
Published on arXiv
arXiv:2603.26417
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
Both masking and RSA encapsulation preserve model accuracy with minimal overhead: masking is essentially free, while RSA encapsulation adds only modest runtime and communication cost
HHE-FL with Key Protection
Novel technique introduced
Federated Learning (FL) enables collaborative training while keeping sensitive data on clients' devices, but local model updates can still leak private information. Hybrid Homomorphic Encryption (HHE) has recently been applied to FL to mitigate client overhead while preserving privacy. However, existing HHE-FL systems rely on a single homomorphic key pair shared across all clients, which forces them to assume an unrealistically weak threat model: if a client misbehaves or intercepts another's traffic, private updates can be exposed. We eliminate this weakness by integrating two alternative key protection mechanisms into the HHE-FL workflow. The first is masking, where client keys are blinded before homomorphic encryption and later unblinded homomorphically by the server. The second is RSA encapsulation, where homomorphically encrypted keys are additionally wrapped under the server's RSA public key. These countermeasures prevent key misuse by other clients and extend HHE-FL security to adversarial settings with malicious participants. We implement both approaches on top of the Flower framework using the PASTA/BFV HHE scheme and evaluate them on the MNIST dataset with 12 clients. Results show that both mechanisms preserve model accuracy while adding minimal overhead: masking incurs negligible cost, and RSA encapsulation introduces only modest runtime and communication overhead.
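The masking idea can be illustrated with a toy additive-blinding sketch. This is not the paper's implementation: the PASTA symmetric key is modeled as a single integer, BFV plaintext arithmetic is modeled as integers modulo an assumed plaintext modulus, and how the mask reaches the server is elided.

```python
# Toy sketch of key masking in HHE-FL (illustrative only).
# A real system HE-encrypts the PASTA symmetric key under BFV; here
# BFV plaintext arithmetic is modeled as integers mod T, and ciphertexts
# are modeled as plain residues, to show the blinding idea.
import secrets

T = 65537  # stand-in for the BFV plaintext modulus (assumption)

def client_mask_key(sym_key: int) -> tuple[int, int]:
    """Client blinds its symmetric key with a uniform random mask
    before homomorphic encryption."""
    mask = secrets.randbelow(T)
    blinded = (sym_key + mask) % T  # this value is what gets HE-encrypted
    return blinded, mask

def server_unmask_homomorphically(enc_blinded: int, mask: int) -> int:
    """Server removes the mask homomorphically (additive HE supports
    subtraction of a known plaintext from a ciphertext)."""
    return (enc_blinded - mask) % T

key = 4242
blinded, mask = client_mask_key(key)
# Another client seeing `blinded` (or its ciphertext) learns nothing
# about `key`: the blinded value is uniform mod T.
assert server_unmask_homomorphically(blinded, mask) == key
```

Because the blinded key is uniformly distributed, a misbehaving client that intercepts it cannot reuse or recover the victim's symmetric key, which is the misuse the paper's masking countermeasure targets.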
Key Contributions
- Masking mechanism that blinds client keys before homomorphic encryption to prevent key misuse
- RSA encapsulation approach that wraps homomorphically encrypted keys under the server's public key for additional protection
- Security analysis showing both defenses extend HHE-FL to malicious participant settings
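The RSA encapsulation contribution can be sketched with textbook RSA over small primes. This is a deliberately simplified assumption for readability: a real deployment uses proper key sizes and padded RSA (e.g. RSA-OAEP), and the HE ciphertext would be wrapped in chunks or via a hybrid KEM rather than as one small integer.

```python
# Toy textbook-RSA sketch of key encapsulation (illustrative only).
# Small primes and no padding -- assumptions for readability, not the
# paper's parameters.

p, q = 61, 53
n = p * q            # server's RSA modulus (public)
phi = (p - 1) * (q - 1)
e = 17               # server's public exponent
d = pow(e, -1, phi)  # server's private exponent

def encapsulate(he_encrypted_key: int) -> int:
    """Client additionally wraps its (already HE-encrypted) key under
    the server's RSA public key, so only the server can recover the
    HE ciphertext -- other clients intercepting it learn nothing usable."""
    return pow(he_encrypted_key, e, n)

def decapsulate(wrapped: int) -> int:
    """Server unwraps with its private key before homomorphic processing."""
    return pow(wrapped, d, n)

ct = 2790  # stand-in for (a chunk of) the HE-encrypted symmetric key
assert decapsulate(encapsulate(ct)) == ct
```

The extra RSA layer is what the paper measures as the modest runtime and communication overhead relative to plain HHE-FL, since each client performs one additional public-key operation per key transfer.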
🛡️ Threat Analysis
Defends against gradient leakage attacks in federated learning where malicious clients intercept or misuse shared homomorphic keys to reconstruct other clients' private training data from encrypted updates.