Defense · 2025

Verifiability and Privacy in Federated Learning through Context-Hiding Multi-Key Homomorphic Authenticators

Simone Bottoni¹, Giulio Zizzo², Stefano Braghin², Alberto Trombetta¹



Published on arXiv (2509.05162)

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Clients can cryptographically detect aggregator weight tampering or biasing while preserving individual update confidentiality, with the scheme scaling to large models.

Context-Hiding Multi-Key Homomorphic Authenticators

Novel technique introduced


Federated Learning has rapidly expanded from its original inception into a large body of research, several frameworks, and a variety of commercial offerings. Its security and robustness are therefore of significant importance. Many algorithms provide robustness in the case of malicious clients. However, the aggregator itself may behave maliciously, for example by biasing the model or tampering with the weights to weaken the model's privacy. In this work, we introduce a verifiable federated learning protocol that enables clients to verify the correctness of the aggregator's computation without compromising the confidentiality of their updates. Our protocol combines a standard secure aggregation technique, which protects individual model updates, with a linearly homomorphic authenticator scheme that enables efficient, privacy-preserving verification of the aggregated result. Our construction ensures that clients can detect manipulation by the aggregator while maintaining low computational overhead. We demonstrate that our approach scales to large models, enabling verification over neural networks with millions of parameters.
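To make the verification idea concrete, the sketch below shows a toy linearly homomorphic MAC over a prime field. This is an illustration of the general technique, not the paper's actual construction: it assumes a single shared MAC key `alpha` with per-client one-time pads `betas` (the paper uses a multi-key, context-hiding scheme), scalar updates instead of weight vectors, and a toy modulus.

```python
# Toy linearly homomorphic MAC sketch (NOT the paper's construction).
# Assumptions: shared key alpha, per-client one-time pads, scalar updates.
import secrets

P = 2**61 - 1  # toy prime modulus

def keygen():
    return secrets.randbelow(P - 1) + 1  # shared secret alpha

def tag(alpha, beta_i, m_i):
    """Client i authenticates its update m_i with its one-time pad beta_i."""
    return (alpha * m_i + beta_i) % P

def aggregate(tags):
    """Untrusted aggregator combines tags homomorphically (just a sum)."""
    return sum(tags) % P

def verify(alpha, betas, claimed_sum, agg_tag):
    """A verifying client checks the aggregator's claimed sum against the tag."""
    return agg_tag == (alpha * claimed_sum + sum(betas)) % P

alpha = keygen()
updates = [3, 7, 5]                              # clients' model updates
betas = [secrets.randbelow(P) for _ in updates]  # one-time pads
tags = [tag(alpha, b, m) for b, m in zip(betas, updates)]

agg_tag = aggregate(tags)
assert verify(alpha, betas, sum(updates) % P, agg_tag)            # honest sum
assert not verify(alpha, betas, (sum(updates) + 1) % P, agg_tag)  # tampered sum
```

Because the tag is linear in the message, the sum of tags authenticates the sum of updates, so a biased or tampered aggregate fails the check with overwhelming probability over the choice of `alpha`.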


Key Contributions

  • Verifiable federated learning protocol using linearly homomorphic authenticators that enables clients to detect malicious aggregator computation without revealing individual updates
  • Context-hiding multi-key homomorphic authenticator construction supporting privacy-preserving verification under standard secure aggregation
  • Demonstrated scalability to large neural networks with millions of parameters at low computational overhead
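The "standard secure aggregation" the contributions refer to can be sketched with pairwise masking, the common technique in this setting. The toy below is heavily simplified: no client dropouts, and the pairwise masks are assumed to be pre-shared (in a real protocol they come from a key-agreement step).

```python
# Toy pairwise-masking secure aggregation sketch (simplified: no dropouts,
# masks assumed pre-shared via a prior key-agreement step).
import secrets

P = 2**61 - 1
n = 3
updates = [3, 7, 5]  # one scalar update per client, for illustration

# Pairwise masks r[i][j] = -r[j][i] mod P, so each pair's masks cancel.
r = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        m = secrets.randbelow(P)
        r[i][j] = m
        r[j][i] = (-m) % P

# Each client uploads only its masked update; the server never sees
# an individual update in the clear.
masked = [(updates[i] + sum(r[i])) % P for i in range(n)]

# The aggregator sums the masked updates; the pairwise masks cancel,
# leaving exactly the sum of the true updates.
agg = sum(masked) % P
assert agg == sum(updates) % P
```

In the paper's protocol, the homomorphic authenticators are layered on top of aggregation like this, so clients can verify the aggregate without any party learning individual updates.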

🛡️ Threat Analysis

Data Poisoning Attack

Defends against a malicious aggregator in federated learning who performs incorrect or biased aggregation — analogous to a Byzantine training-time poisoning attack but from the server/aggregator rather than clients. The paper's verification scheme enables clients to detect such aggregator-side manipulation of the aggregated model, fulfilling ML02's 'robust aggregation' defense role via cryptographic means.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time
Applications
federated learning, distributed neural network training