arXiv · Mar 21, 2026
Matta Varun, Ajay Kumar Dhakar, Yuan Hong et al. · Indian Institute of Technology Kharagpur · University of Connecticut
Analyzes adversarial attacks on LDP-protected GNNs, exploring how privacy noise affects attack effectiveness and robustness
Tags: Input Manipulation Attack · Data Poisoning Attack · graph
Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious concerns, especially when sensitive information is involved. Local Differential Privacy (LDP) offers a privacy-preserving framework for training GNNs, but its impact on adversarial robustness remains underexplored. This paper investigates adversarial attacks on LDP-protected GNNs, exploring how the privacy guarantees of LDP can be leveraged or hindered by adversarial perturbations. The effectiveness of existing attack methods against LDP-protected GNNs is analyzed, and the challenges of crafting adversarial examples under LDP constraints are discussed. Additionally, we suggest directions for defending LDP-protected GNNs against adversarial attacks. This work examines the interplay between privacy and security in graph learning, highlighting the need for robust, privacy-preserving GNN architectures.
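To make the setting concrete: LDP for GNN training typically means each node perturbs its own features before sharing them, so neither the server nor an attacker sees raw data. The sketch below shows randomized response, a standard epsilon-LDP mechanism for binary features; it is an illustrative example of the kind of noise LDP injects, not the specific mechanism analyzed in this paper.

```python
import math
import random

def ldp_perturb_bits(bits, epsilon):
    """Randomized response: keep each bit with probability
    e^eps / (e^eps + 1), flip it otherwise. Each bit individually
    satisfies epsilon-LDP (illustrative sketch only)."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return [b if random.random() < p_keep else 1 - b for b in bits]

# Hypothetical node feature vector (binary attributes).
random.seed(0)
features = [1, 0, 1, 1, 0]
private = ldp_perturb_bits(features, epsilon=2.0)
```

Note the tension the paper highlights: the same flip probability that hides a node's true attributes also randomizes any carefully crafted adversarial perturbation, which is why attack effectiveness under LDP is non-trivial to predict.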