survey 2025

Adversarial Robustness in Distributed Quantum Machine Learning

Pouya Kananian, Hans-Arno Jacobsen



Published on arXiv: 2508.11848

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Key Finding

Identifies quantum federated learning as the most studied distribution paradigm with respect to adversarial robustness, while the adversarial robustness of circuit-distribution methods (circuit cutting and teleportation-based techniques) remains largely an open problem.


Studying the adversarial robustness of quantum machine learning (QML) models is essential to understanding their potential advantages over classical models and to building trustworthy systems. Distributing QML models across multiple quantum processors helps overcome the limitations of individual devices and enables scalable systems. However, distribution can affect adversarial robustness, potentially exposing the models to new attacks. Key paradigms in distributed QML include federated learning, which, as in the classical setting, trains a shared model on local data and communicates only model updates, and circuit-distribution methods inherent to quantum computing, such as circuit cutting and teleportation-based techniques, which enable the distributed execution of quantum circuits across multiple devices. This work reviews the differences between these distribution methods, summarizes existing approaches to the adversarial robustness of QML models under each paradigm, and discusses open questions in this area.
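The federated pattern described above — clients train locally and the server aggregates only their parameter updates — can be sketched as follows. This is a minimal illustrative sketch of federated averaging (FedAvg) in plain Python, not code from the survey; the function name and data layout are assumptions.

```python
# Minimal FedAvg sketch (illustrative; names are hypothetical):
# each client sends (num_local_samples, flattened_params); the server
# returns the average weighted by local dataset size, so no raw data
# ever leaves a client.

def fed_avg(client_updates):
    """client_updates: list of (num_samples, params) pairs,
    where params is a list of floats (a flattened model)."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    averaged = [0.0] * dim
    for n, params in client_updates:
        weight = n / total
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged

# Two clients with equal data sizes: the result is the plain mean.
print(fed_avg([(100, [1.0, 2.0]), (100, [3.0, 4.0])]))  # [2.0, 3.0]
```

The same aggregation step applies whether the local models are classical networks or variational quantum circuits, since in both cases the clients exchange only real-valued parameter vectors.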


Key Contributions

  • Reviews and contrasts adversarial robustness implications of different quantum ML distribution paradigms: federated learning, circuit cutting, and teleportation-based methods
  • Summarizes existing attacks and defenses for adversarial and privacy-leaking threats in quantum federated learning
  • Identifies open research questions in adversarial robustness for distributed quantum ML, particularly for circuit distribution methods that have received less attention

🛡️ Threat Analysis

Input Manipulation Attack

The central theme of the survey is the adversarial robustness of QML models: adversarial examples crafted to cause misclassification in variational quantum classifiers and distributed quantum circuits are reviewed extensively.
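The survey's setting is quantum, but the underlying attack idea can be sketched with its classical analogue, the fast gradient sign method (FGSM). The linear model and names below are illustrative assumptions, not the paper's quantum construction.

```python
# FGSM-style perturbation (illustrative classical analogue): push the
# input a small step eps in the direction that decreases the true
# class's score. For a linear score w.x, that direction is sign(-w).

def fgsm_linear(x, w, eps):
    """x: input vector, w: weights of the true class's linear score,
    eps: per-coordinate perturbation budget."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(-wi) for xi, wi in zip(x, w)]

w = [1.0, -1.0]
score = lambda v: sum(a * b for a, b in zip(w, v))

x = [0.5, -0.2]
x_adv = fgsm_linear(x, w, 0.3)
print(score(x), score(x_adv))  # the adversarial score is strictly lower
```

For a linear score each coordinate step reduces the score by exactly `eps * |w_i|`; for variational quantum classifiers the same idea applies with gradients of the measured expectation value, which is where the robustness questions surveyed here arise.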

Data Poisoning Attack

A major section reviews Byzantine and poisoning attacks against quantum federated learning systems, including data poisoning by malicious quantum FL participants and corresponding defenses.
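One standard family of defenses against such Byzantine participants replaces plain averaging with a robust aggregate. The coordinate-wise median sketch below is a generic illustration of that idea, not a defense taken from the survey; all names are hypothetical.

```python
# Coordinate-wise median aggregation (illustrative Byzantine-robust
# alternative to plain averaging): a single poisoned update cannot
# drag the aggregate arbitrarily far, unlike with the mean.

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def coordinate_median(updates):
    """updates: list of equal-length parameter vectors (lists of floats)."""
    return [median(coord) for coord in zip(*updates)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
poisoned = honest + [[1000.0, -1000.0]]  # one Byzantine client
print(coordinate_median(poisoned))  # stays near [1, 1] despite the outlier
```

With a plain mean, the single malicious update above would shift the aggregate by hundreds per coordinate; the median bounds its influence as long as honest clients form a majority.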


Details

Domains: federated-learning
Model Types: federated, traditional_ml
Threat Tags: training_time, inference_time, white_box, black_box
Applications: quantum machine learning, federated learning, quantum circuit execution