
RobQFL: Robust Quantum Federated Learning in Adversarial Environment

Walid El Maouaki 1,2, Nouhaila Innan 2,3, Alberto Marchisio 2,3, Taoufik Said 1, Muhammad Shafique 2,3, Mohamed Bennai 1


Published on arXiv: 2509.04914

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Adversarially training only 20–50% of clients boosts robustness by ~15 pp at moderate perturbation strengths (ε≤0.1) with less than 2 pp clean accuracy cost; label-sorted non-IID splits halve robustness, revealing data heterogeneity as the dominant vulnerability.

RobQFL

Novel technique introduced


Quantum Federated Learning (QFL) merges privacy-preserving federation with quantum computing gains, yet its resilience to adversarial noise is unknown. We first show that QFL is as fragile as centralized quantum learning. We propose Robust Quantum Federated Learning (RobQFL), which embeds adversarial training directly into the federated loop. RobQFL exposes three tunable axes: client coverage $\gamma$ (0–100\%), perturbation scheduling (fixed-$\varepsilon$ vs. $\varepsilon$-mixes), and optimization (fine-tune vs. scratch), and distils the resulting $\gamma \times \varepsilon$ surface into two metrics: Accuracy-Robustness Area and Robustness Volume. In 15-client simulations on MNIST and Fashion-MNIST under IID and non-IID conditions, adversarially training only 20–50\% of clients boosts accuracy at $\varepsilon \leq 0.1$ by $\sim$15 pp at $< 2$ pp clean-accuracy cost; fine-tuning adds 3–5 pp. With $\geq$75\% coverage, a moderate $\varepsilon$-mix is optimal, while high-$\varepsilon$ schedules help only at 100\% coverage. Label-sorted non-IID splits halve robustness, underscoring data heterogeneity as the dominant risk.
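The core idea of the abstract, adversarially training only a fraction $\gamma$ of clients inside a FedAvg-style loop, can be sketched as follows. This is an illustrative toy, not the paper's implementation: `client_update` and `robqfl_round` are hypothetical names, and the local "training step" is random noise standing in for a real quantum-model gradient.

```python
# Minimal sketch of a RobQFL-style federated round: ceil(gamma * n_clients)
# clients train adversarially, the rest train on clean data, and the server
# averages the local weights (FedAvg). Illustrative only; names and the toy
# gradient are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def client_update(weights, adversarial, eps=0.1, lr=0.1):
    """Toy local update. A real client would run gradient descent on its
    quantum model; adversarial clients would train on eps-perturbed inputs.
    Here a random vector stands in for the clean-data gradient."""
    grad = rng.normal(size=weights.shape)
    if adversarial:
        # Stand-in for adversarial training: add an extra signed term,
        # mimicking the robustness-oriented gradient from eps-ball inputs.
        grad = grad + eps * np.sign(grad)
    return weights - lr * grad

def robqfl_round(global_w, n_clients=15, gamma=0.3):
    """One federated round with adversarial coverage gamma in [0, 1]."""
    n_adv = int(np.ceil(gamma * n_clients))
    adv_flags = [i < n_adv for i in range(n_clients)]
    local_weights = [client_update(global_w, adv) for adv in adv_flags]
    return np.mean(local_weights, axis=0), n_adv

w = np.zeros(4)
w, n_adv = robqfl_round(w, n_clients=15, gamma=0.3)
print(n_adv)  # 5 of the 15 clients trained adversarially
```

With the paper's 15-client setup, $\gamma = 0.3$ puts 5 clients in the adversarially trained pool, inside the 20–50% coverage band the results highlight.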


Key Contributions

  • First head-to-head comparison showing QFL is as vulnerable to adversarial attacks as centralized quantum learning
  • RobQFL framework with tunable coverage parameter (γ), ε-scheduler, and dual optimization modes (fine-tune vs scratch) for adversarial training in QFL
  • Two novel evaluation metrics — Accuracy-Robustness Area (ARA) and Robustness Volume (RV) — for assessing defense effectiveness across attack intensities and coverage levels
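One natural reading of the Accuracy-Robustness Area (ARA) metric is the normalized area under the accuracy-vs-$\varepsilon$ curve; the paper's exact definition may differ, so treat this as a hedged sketch of the idea rather than its formula.

```python
# Sketch of an ARA-style metric: trapezoidal area under accuracy(eps),
# normalized by the epsilon range so the score lies in [0, 1]. The exact
# definition in the paper may differ; this is the natural reading.
import numpy as np

def accuracy_robustness_area(eps_grid, accuracies):
    """Normalized trapezoidal area under the accuracy-vs-epsilon curve."""
    eps = np.asarray(eps_grid, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    area = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(eps))
    return area / (eps[-1] - eps[0])

# Hypothetical robustness curve: accuracy decays as perturbations grow.
eps = [0.0, 0.05, 0.1, 0.2]
acc = [0.95, 0.90, 0.80, 0.55]
print(accuracy_robustness_area(eps, acc))
```

A model that keeps high accuracy across the whole $\varepsilon$ range scores near 1; one that collapses under small perturbations scores low even if its clean accuracy is high, which is exactly the trade-off the $\gamma \times \varepsilon$ surface is meant to capture.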

🛡️ Threat Analysis

Input Manipulation Attack

Paper defends against adversarial examples (PGD-based input perturbations) targeting quantum neural networks at inference time; the primary contribution is embedding adversarial training into the federated loop as a countermeasure.
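The PGD threat described above iteratively perturbs inputs within an $L_\infty$ $\varepsilon$-ball to maximize the model's loss. The sketch below shows the projection-and-step structure on a toy linear model (where the loss gradient with respect to the input is constant); the paper applies this to quantum neural networks, whose gradients would come from the circuit instead.

```python
# Sketch of untargeted L_inf PGD on a toy linear classifier: ascend the
# input-space loss gradient, then project back into the eps-ball around x.
# The linear model is an illustrative assumption; the paper attacks quantum
# neural networks, where the gradient comes from the circuit.
import numpy as np

def pgd_attack(x, w_true, w_wrong, eps=0.1, alpha=0.02, steps=10):
    """Maximize the margin (w_wrong - w_true) . x subject to
    ||x_adv - x||_inf <= eps (signed-gradient ascent + projection)."""
    grad = w_wrong - w_true          # constant input gradient for a linear loss
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad)      # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto eps-ball
    return x_adv

x = np.array([0.5, 0.2, 0.8])
w_true = np.array([1.0, 0.0, 1.0])   # hypothetical correct-class weights
w_wrong = np.array([0.0, 1.0, 0.0])  # hypothetical wrong-class weights
x_adv = pgd_attack(x, w_true, w_wrong)
print(np.max(np.abs(x_adv - x)))  # stays within eps (up to float rounding)
```

Adversarial training, RobQFL's countermeasure, generates such perturbed inputs during each client's local training so the aggregated model learns to classify them correctly.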


Details

Domains
federated-learning
Model Types
federated, traditional_ml
Threat Tags
white_box, inference_time, digital, untargeted
Datasets
MNIST, Fashion-MNIST
Applications
quantum machine learning, federated learning, image classification