Adversarial Attacks on Locally Private Graph Neural Networks

Matta Varun 1, Ajay Kumar Dhakar 2, Yuan Hong 1, Shamik Sural 1

Published on arXiv

2603.20746

Input Manipulation Attack

OWASP ML Top 10 — ML01

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Investigates how LDP privacy guarantees can be leveraged or hindered by adversarial perturbations, highlighting vulnerabilities in privacy-preserving GNN architectures


Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious concerns, especially when dealing with sensitive information. Local Differential Privacy (LDP) offers a privacy-preserving framework for training GNNs, but its impact on adversarial robustness remains underexplored. This paper investigates adversarial attacks on LDP-protected GNNs. We explore how the privacy guarantees of LDP can be leveraged or hindered by adversarial perturbations. The effectiveness of existing attack methods on LDP-protected GNNs is analyzed, and potential challenges in crafting adversarial examples under LDP constraints are discussed. Additionally, we suggest directions for defending LDP-protected GNNs against adversarial attacks. This work examines the interplay between privacy and security in graph learning, highlighting the need for robust, privacy-preserving GNN architectures.


Key Contributions

  • Analyzes effectiveness of existing adversarial attack methods (node injection, label-flipping, inference attacks, poisoning) on LDP-protected GNNs
  • Explores how privacy noise from LDP mechanisms interacts with adversarial perturbations
  • Identifies challenges in crafting adversarial examples under LDP constraints and suggests defense directions
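To make the privacy-noise interaction concrete, the following is a minimal sketch of one standard LDP primitive, Warner's randomized response applied to binary node features. This is an illustrative mechanism, not necessarily the one used in the paper; the function names are hypothetical. Any adversarial perturbation of a user's features must survive this per-feature randomization, which is one source of the crafting challenges the paper identifies.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Perturb one binary feature with Warner's randomized response,
    which satisfies epsilon-LDP: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def perturb_features(features, epsilon):
    """Apply randomized response independently to each binary feature,
    as an LDP client would before sending features to the server."""
    return [randomized_response(b, epsilon) for b in features]
```

Smaller epsilon means more flipping and stronger privacy, but also more distortion of any adversarially crafted feature vector before it reaches the model.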

🛡️ Threat Analysis

Input Manipulation Attack

Primary focus is on adversarial perturbations targeting GNN predictions at inference time (evasion attacks), as well as training-time manipulation intended to cause misclassification.
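A typical evasion attack of this kind perturbs node features in the direction that increases the model's loss, within a small budget. Below is a minimal FGSM-style sketch on a feature vector; the gradient is assumed to be supplied by some differentiable surrogate model (a simplifying assumption, not the paper's specific attack).

```python
def fgsm_perturb(x, grad, eps_budget):
    """FGSM-style evasion step: shift each feature by eps_budget in the
    direction of the loss gradient's sign. `grad` is assumed to come
    from a surrogate model's loss w.r.t. the input features."""
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [xi + eps_budget * sign(gi) for xi, gi in zip(x, grad)]
```

Under LDP, such a perturbation is applied before the client-side noise, so its effect on the server-side model is attenuated by the randomization.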

Data Poisoning Attack

The paper explicitly discusses poisoning attacks and label-flipping attacks on LDP-protected GNNs during the training phase.
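As a concrete reference point, an untargeted label-flipping attack simply reassigns a budget-limited subset of training labels to wrong classes. The sketch below is a generic formulation (function name and signature are illustrative, not from the paper).

```python
import random

def label_flip_poison(labels, budget, num_classes, rng=None):
    """Untargeted label-flipping poisoning: flip a `budget`-sized random
    subset of training labels to a different class. Returns a poisoned
    copy; the original label list is left unchanged."""
    rng = rng or random.Random(0)
    poisoned = list(labels)
    for idx in rng.sample(range(len(labels)), budget):
        wrong = [c for c in range(num_classes) if c != poisoned[idx]]
        poisoned[idx] = rng.choice(wrong)
    return poisoned
```

When labels themselves are reported under an LDP mechanism, the server cannot distinguish such malicious flips from the mechanism's own randomization, which is part of the vulnerability the paper highlights.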


Details

Domains
graph
Model Types
gnn
Threat Tags
training_time, inference_time, targeted, untargeted
Applications
node classification, community detection, link prediction, financial fraud detection