Perfectly-Private Analog Secure Aggregation in Federated Learning
Delio Jaramillo-Velez 1, Charul Rajput 2, Ragnar Freij-Hollanti 2, Camilla Hollanti 2, Alexandre Graell i Amat 1
Published on arXiv
2509.08683
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
The torus-based protocol achieves perfect privacy against gradient leakage while matching non-secure aggregation accuracy, and in some cases significantly outperforms finite-field secure aggregation in model accuracy and cosine similarity.
Torus-based Secure Aggregation
Novel technique introduced
In federated learning, multiple parties train models locally and share their parameters with a central server, which aggregates them to update a global model. To address the risk of exposing sensitive data through local models, secure aggregation via secure multiparty computation has been proposed to enhance privacy. At the same time, perfect privacy can only be achieved by a uniform distribution of the masked local models to be aggregated. This raises a problem when working with real-valued data, as there is no measure on the reals that is invariant under the masking operation, and hence information leakage is bound to occur. Shifting the data to a finite field circumvents this problem, but as a downside runs into an inherent accuracy-complexity tradeoff due to fixed-point modular arithmetic, which, unlike floating-point numbers, cannot simultaneously handle numbers of varying magnitudes. In this paper, a novel secure parameter aggregation method is proposed that employs the torus rather than a finite field. This approach guarantees perfect privacy for each party's data by utilizing the uniform distribution on the torus, while avoiding accuracy losses. Experimental results show that the new protocol performs similarly to the model without secure aggregation while maintaining perfect privacy. Compared to finite-field secure aggregation, the torus-based protocol can in some cases significantly outperform it in terms of model accuracy and cosine similarity, hence making it a safer choice.
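The masking idea described in the abstract can be sketched as a one-time pad on the torus [0, 1): pairs of clients agree on uniform random masks that are added by one client and subtracted by the other, so each masked update is uniformly distributed on the torus, yet the masks cancel in the server's mod-1 sum. This is a minimal illustrative sketch, not the paper's implementation; the function names, the scalar (rather than vector) updates, and the pairwise mask-sharing scheme are assumptions for illustration.

```python
import random

def pairwise_masks(num_clients, seed=0):
    """Generate pairwise masks on the torus [0, 1) that cancel in the sum.

    For each pair (i, j), client i adds a uniform mask m and client j
    subtracts the same m, so the masks vanish when all clients'
    contributions are added mod 1.
    """
    rng = random.Random(seed)
    masks = [[0.0] * num_clients for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.random()   # uniform on [0, 1)
            masks[i][j] = m    # client i adds m
            masks[j][i] = -m   # client j subtracts m
    return masks

def mask_update(x, my_masks):
    """Mask a scalar update on the torus: (x + sum of masks) mod 1."""
    return (x + sum(my_masks)) % 1.0

def aggregate(masked_updates):
    """Server-side aggregation: sum mod 1; the pairwise masks cancel."""
    return sum(masked_updates) % 1.0
```

Note that the server recovers the aggregate modulo 1, so this sketch assumes updates are scaled so that their true sum lies in [0, 1); how wrap-around is handled in the actual protocol is specified in the paper, not here.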
Key Contributions
- Novel torus-based one-time pad secure aggregation protocol for federated learning that achieves perfect privacy via the uniform distribution on the torus
- Eliminates the accuracy-complexity tradeoff inherent in finite-field secure aggregation caused by fixed-point modular arithmetic
- Experimental demonstration that the protocol matches non-secure aggregation accuracy and outperforms finite-field approaches in model accuracy and cosine similarity
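For contrast, the fixed-point issue behind the second bullet can be illustrated with a toy finite-field encoding: a single fixed-point scale cannot represent gradient entries of very different magnitudes at once, whereas the torus approach keeps floating-point values. The modulus `Q` and scaling factor `SCALE` below are illustrative choices, not parameters from the paper.

```python
Q = 2**31 - 1   # illustrative prime modulus, not the paper's choice
SCALE = 2**16   # fixed-point scale: 16 fractional bits

def to_field(x: float) -> int:
    """Quantize a real update to fixed point in Z_Q."""
    return round(x * SCALE) % Q

def from_field(v: int) -> float:
    """Decode, mapping the upper half of Z_Q back to negative reals."""
    if v > Q // 2:
        v -= Q
    return v / SCALE

# With this single scale, a tiny entry like 1e-7 underflows to zero,
# while 1000.0 survives exactly -- the accuracy-complexity tradeoff
# that the torus-based protocol avoids.
```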
🛡️ Threat Analysis
The protocol defends against gradient leakage in federated learning — the threat model is an adversary (malicious server or third party) inferring private training data from shared gradient updates. The torus-based masking guarantees perfect privacy by ensuring masked updates are uniformly distributed and thus reveal nothing about participants' data.
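The perfect-privacy claim rests on translation invariance of the uniform distribution on the torus: for u drawn uniformly from [0, 1), the masked value (x + u) mod 1 is uniform regardless of x, so a masked update carries no information about the underlying data. A quick empirical check of this property (sample sizes and bin counts are arbitrary choices for illustration):

```python
import random

def masked(x, rng):
    """One-time pad on the torus: (x + u) mod 1 with u ~ Uniform[0, 1)."""
    return (x + rng.random()) % 1.0

def histogram(x, n_samples=100_000, n_bins=10, seed=0):
    """Bin masked samples of a fixed x; a flat histogram indicates
    the masked value is uniform no matter what x is."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_samples):
        bins[int(masked(x, rng) * n_bins)] += 1
    return bins
```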