defense · arXiv · Sep 5, 2025
Simone Bottoni, Giulio Zizzo, Stefano Braghin et al. · University of Insubria · IBM Research Europe
Homomorphic authenticator protocol lets FL clients cryptographically verify aggregator honesty without revealing individual model updates
Data Poisoning Attack · federated-learning
Federated Learning has rapidly expanded from its original inception to a large body of research, several frameworks, and a variety of commercial offerings. Its security and robustness are therefore of significant importance. Many algorithms provide robustness against malicious clients. However, the aggregator itself may behave maliciously, for example by biasing the model or tampering with the weights to weaken the model's privacy. In this work, we introduce a verifiable federated learning protocol that enables clients to verify the correctness of the aggregator's computation without compromising the confidentiality of their updates. Our protocol combines a standard secure aggregation technique, which protects individual model updates, with a linearly homomorphic authenticator scheme that enables efficient, privacy-preserving verification of the aggregated result. Our construction ensures that clients can detect manipulation by the aggregator while maintaining low computational overhead. We demonstrate that our approach scales to large models, enabling verification over neural networks with millions of parameters.
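To illustrate the core idea behind a linearly homomorphic authenticator, here is a toy Python sketch (not the paper's actual construction): each client tags its update with a keyed linear MAC, tags sum homomorphically along with the updates, and any holder of the shared verification key can check the aggregate without seeing individual updates. The modulus, PRF, and key handling are simplified assumptions for illustration only.

```python
import hashlib

P = 2**61 - 1  # toy prime modulus; a real scheme would use a proper group


def prf(key: str, i: int, j: int) -> int:
    # Toy PRF: derives per-client, per-coordinate masking values from a shared key.
    h = hashlib.sha256(f"{key}|{i}|{j}".encode()).digest()
    return int.from_bytes(h[:8], "big") % P


def tag_update(alpha: int, key: str, client_id: int, update: list[int]) -> list[int]:
    # Linear MAC: tag[j] = alpha * update[j] + PRF(key, client_id, j) (mod P).
    return [(alpha * m + prf(key, client_id, j)) % P for j, m in enumerate(update)]


def aggregate(vectors: list[list[int]]) -> list[int]:
    # Coordinate-wise sum; works for both updates and their tags (linearity).
    return [sum(col) % P for col in zip(*vectors)]


def verify(alpha: int, key: str, client_ids: list[int],
           agg_update: list[int], agg_tag: list[int]) -> bool:
    # Checks that the aggregated tag matches the claimed aggregated update.
    for j, (m, t) in enumerate(zip(agg_update, agg_tag)):
        expected = (alpha * m + sum(prf(key, i, j) for i in client_ids)) % P
        if t != expected:
            return False
    return True
```

Because the MAC is linear, the sum of the per-client tags authenticates the sum of the updates, so a tampered aggregate fails verification even though the verifier never sees any individual update.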