Differentially Private Federated Quantum Learning via Quantum Noise
Atit Pokharel, Ratun Rahman, Shaba Shaon, Thomas Morris, Dinh C. Nguyen
Published on arXiv
2508.20310
Model Inversion Attack
OWASP ML Top 10 — ML03
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Inherent quantum noise on NISQ devices can satisfy differential privacy guarantees in federated quantum learning, with a tunable tradeoff between privacy budget and robustness against adversarial examples.
DP-QFL
Novel technique introduced
Quantum federated learning (QFL) enables collaborative training of quantum machine learning (QML) models across distributed quantum devices without raw data exchange. However, QFL remains vulnerable to adversarial attacks, where shared QML model updates can be exploited to undermine information privacy. In the context of noisy intermediate-scale quantum (NISQ) devices, a key question arises: How can inherent quantum noise be leveraged to enforce differential privacy (DP) and protect model information during training and communication? This paper explores a novel DP mechanism that harnesses quantum noise to safeguard quantum models throughout the QFL process. By tuning noise variance through measurement shots and depolarizing channel strength, our approach achieves desired DP levels tailored to NISQ constraints. Simulations demonstrate the framework's effectiveness by examining the relationship between differential privacy budget and noise parameters, as well as the trade-off between security and training accuracy. Additionally, we demonstrate the framework's robustness against an adversarial attack designed to compromise model performance using adversarial examples, with evaluations based on critical metrics such as accuracy on adversarial examples, confidence scores for correct predictions, and attack success rates. The results reveal a tunable trade-off between privacy and robustness, providing an efficient solution for secure QFL on NISQ devices with significant potential for reliable quantum computing applications.
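The abstract's core idea, tuning noise variance through measurement shots to hit a target DP budget, can be sketched numerically. This is an illustrative mapping, not the paper's derivation: it treats shot noise on an expectation value as approximately Gaussian (by the central limit theorem) and inverts the classical Gaussian-mechanism calibration to see what privacy budget a given shot count supports. The sensitivity and delta values below are assumed placeholders.

```python
import math

def shot_noise_std(var_obs: float, shots: int) -> float:
    """Standard deviation of the shot noise on an expectation value
    estimated from `shots` repeated measurements (CLT approximation)."""
    return math.sqrt(var_obs / shots)

def gaussian_mechanism_epsilon(sigma: float, sensitivity: float, delta: float) -> float:
    """Invert the classical Gaussian-mechanism calibration
    sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    (formally valid for epsilon < 1; larger values are indicative only)
    to find the epsilon that a given noise level supports."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / sigma

# Fewer measurement shots -> larger shot noise -> smaller (stronger) epsilon,
# which is the knob the paper tunes on NISQ hardware.
budgets = {
    shots: gaussian_mechanism_epsilon(
        sigma=shot_noise_std(var_obs=1.0, shots=shots),
        sensitivity=0.1,   # assumed per-update sensitivity (illustrative)
        delta=1e-5,        # assumed delta (illustrative)
    )
    for shots in (100, 1000, 10000)
}
```

The monotone relationship in `budgets` mirrors the paper's reported trade-off: cranking up shots improves accuracy but weakens the DP guarantee (epsilon grows).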
Key Contributions
- First framework to leverage inherent quantum noise (shot noise + depolarizing channel) as the primary DP mechanism in QFL, eliminating the need to add artificial noise on top of NISQ hardware noise.
- Tunable privacy-accuracy tradeoff achieved by adjusting measurement shots and depolarizing channel strength to meet target DP budgets.
- Empirical evaluation of DP-QFL robustness against quantum adversarial examples using attack success rate, confidence scores, and accuracy degradation metrics.
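The second noise knob, depolarizing channel strength, can also be sketched with a toy Monte Carlo simulation. This is not the paper's code: it models a single qubit whose state is replaced by the maximally mixed state with probability `depol`, which contracts the ideal expectation value by a factor of (1 - p) while finite shots add sampling noise on top.

```python
import random

def estimate_z(p_up: float, shots: int, depol: float, rng: random.Random) -> float:
    """Monte-Carlo estimate of <Z> for a single qubit measured `shots`
    times. With probability `depol` the depolarizing channel replaces
    the state with the maximally mixed state I/2 (a fair coin flip);
    otherwise the ideal state yields +1 with probability `p_up`."""
    total = 0
    for _ in range(shots):
        if rng.random() < depol:
            total += 1 if rng.random() < 0.5 else -1   # maximally mixed
        else:
            total += 1 if rng.random() < p_up else -1  # ideal state
    return total / shots

# Depolarizing strength p contracts the signal: <Z>_noisy = (1 - p) * <Z>.
# Ideal <Z> = 0.8 for p_up = 0.9; with p = 0.3 the expectation is about
# 0.56, plus shot noise of order sqrt((1 - 0.56**2) / shots).
noisy = estimate_z(p_up=0.9, shots=20000, depol=0.3, rng=random.Random(0))
```

Increasing `depol` or decreasing `shots` both enlarge the gap between the noisy and ideal estimates, which is exactly the variance the framework repurposes as its DP mechanism.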
🛡️ Threat Analysis
The primary threat addressed is gradient leakage / model inversion in the federated setting: adversaries exploit shared QML model updates to reconstruct private training data. The quantum noise (shot noise + depolarizing channel) serves as the DP defense against this reconstruction threat.
A secondary but explicit contribution is robustness evaluation: the paper implements a quantum adversarial attack against the DP-QFL framework at inference time, measuring attack success rate, accuracy under attack, and prediction confidence scores.
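The three robustness metrics named above can be computed from per-sample predictions as follows. These are common illustrative definitions (attack success counted only over samples the clean model got right); the paper's exact formulas may differ.

```python
def robustness_metrics(labels, clean_pred, adv_pred, adv_conf):
    """Accuracy under attack, attack success rate, and mean confidence
    of correct predictions on adversarial examples (illustrative
    definitions; the paper's exact formulas may differ)."""
    n = len(labels)
    # Accuracy on adversarial examples.
    adv_accuracy = sum(a == y for a, y in zip(adv_pred, labels)) / n
    # Attack success: a clean-correct sample flipped to a wrong prediction.
    clean_correct = sum(c == y for c, y in zip(clean_pred, labels))
    flipped = sum(1 for y, c, a in zip(labels, clean_pred, adv_pred)
                  if c == y and a != y)
    attack_success_rate = flipped / clean_correct if clean_correct else 0.0
    # Mean confidence over adversarial samples still predicted correctly.
    confident = [cf for y, a, cf in zip(labels, adv_pred, adv_conf) if a == y]
    mean_conf_correct = sum(confident) / len(confident) if confident else 0.0
    return adv_accuracy, attack_success_rate, mean_conf_correct
```

Conditioning the success rate on clean-correct samples keeps the metric from crediting the attack with samples the model already misclassified.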