
Enhancing Federated Learning Privacy with QUBO

Andras Ferenczi, Sutapa Samanta, Dagen Wang, Todd Hodges

0 citations · 25 references · arXiv


Published on arXiv: 2511.02785

Membership Inference Attack (OWASP ML Top 10 — ML04)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

QUBO client selection reduces per-round privacy exposure by 95.2% and cumulative exposure by 49% on MNIST with 300 clients, leaving 147 clients' updates entirely unused while preserving model accuracy.

QUBO-based client selection

Novel technique introduced


Federated learning (FL) is a widely used method for training machine learning (ML) models in a scalable way while preserving privacy (i.e., without centralizing raw data). Prior research shows that the risk of exposing sensitive data grows cumulatively with the number of iterations in which a client's updates are included in the aggregated model. Attackers can launch membership inference attacks (MIA; deciding whether a sample or client participated), property inference attacks (PIA; inferring attributes of a client's data), and model inversion attacks (MI; reconstructing inputs), thereby inferring client-specific attributes and, in some cases, reconstructing inputs. In this paper, we mitigate this risk by substantially reducing per-client exposure using a quantum computing-inspired quadratic unconstrained binary optimization (QUBO) formulation that selects a small subset of client updates most relevant for each training round. We focus on two threat vectors: (i) information leakage by clients during training and (ii) adversaries who can query or obtain the global model. We assume a trusted central server and do not model server compromise. The method also assumes that the server has access to a validation/test set reflecting the global data distribution. Experiments on the MNIST dataset with 300 clients over 20 rounds showed a 95.2% per-round and 49% cumulative reduction in privacy exposure, with 147 clients' updates never being used during training, while matching and in some cases exceeding full-aggregation accuracy. The method also proved effective at smaller scale with a more complex model: a CINIC-10 experiment with 30 clients yielded an 82% per-round and 33% cumulative privacy improvement.
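The paper's exact QUBO objective is not reproduced here, but the selection idea can be sketched at toy scale. The sketch below assumes a hypothetical formulation in which each client gets a per-round relevance score (e.g., derived from the server's validation set), the QUBO diagonal rewards relevant clients, and a uniform off-diagonal penalty acts as a soft cardinality constraint so that only a small subset is selected. The brute-force solver stands in for the quantum-inspired optimizer and only works at this tiny scale.

```python
import itertools

import numpy as np


def build_qubo(relevance, penalty):
    """Build an upper-triangular QUBO matrix Q for client selection.

    Diagonal entries reward selecting relevant clients (negative values);
    off-diagonal entries penalize every selected pair, which acts as a
    soft limit on how many clients enter the round's aggregation.
    """
    n = len(relevance)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -relevance[i]
        for j in range(i + 1, n):
            Q[i, j] = penalty
    return Q


def solve_qubo_brute_force(Q):
    """Exhaustively minimize x^T Q x over binary vectors (toy scale only)."""
    n = Q.shape[0]
    best_x, best_energy = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy


# Hypothetical relevance scores for 8 clients (higher = more useful this round)
relevance = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.85, 0.15])
Q = build_qubo(relevance, penalty=0.5)
selected, energy = solve_qubo_brute_force(Q)
```

With this penalty the optimizer keeps only the two most relevant clients (indices 0 and 6); raising the penalty shrinks the subset further, lowering it admits more clients. In the paper the trade-off is tuned so that per-round exposure drops sharply without hurting the aggregated model's accuracy.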


Key Contributions

  • QUBO (quantum-inspired combinatorial optimization) formulation that selects the minimal relevant subset of client updates per federated training round to reduce cumulative privacy exposure
  • Achieves 95.2% per-round and 49% cumulative privacy exposure reduction on MNIST (300 clients, 20 rounds) while maintaining or exceeding full-aggregation model accuracy
  • Demonstrates generalization to CINIC-10 with 30 clients, yielding 82% per-round and 33% cumulative privacy improvement

🛡️ Threat Analysis

Model Inversion Attack

The paper also explicitly defends against model inversion attacks (MI), in which adversaries reconstruct client inputs from the global model; the same exposure-reduction mechanism limits the gradient information available for reconstruction.

Membership Inference Attack

The paper explicitly targets membership inference attacks (MIA) in federated learning as the primary threat; the QUBO client selection mechanism limits how often each client's updates appear in the global model, directly reducing an adversary's ability to infer participation.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, black_box
Datasets
MNIST, CINIC-10
Applications
federated learning, privacy-preserving machine learning