Safeguarding Graph Neural Networks against Topology Inference Attacks
Jie Fu, Yuan Hong, Zhili Chen, Wendy Hui Wang
Published on arXiv (arXiv:2509.05429)
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
PGR significantly reduces topology leakage from GNNs with minimal accuracy degradation, outperforming edge-level differential privacy baselines that either fail to protect topology or severely compromise utility.
PGR (Private Graph Reconstruction)
Novel technique introduced
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy - the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code is available at https://github.com/JeffffffFu/PGR.
Key Contributions
- Topology Inference Attacks (TIAs): a suite of black-box attacks that reconstruct the topology of a GNN's training graph from model queries alone
- Empirical demonstration that existing edge-level differential privacy mechanisms fail to adequately protect graph topology privacy
- Private Graph Reconstruction (PGR): a bi-level optimization defense that generates synthetic training graphs via meta-gradients to minimize topology leakage while preserving model accuracy
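PGR's bi-level structure can be sketched in miniature: an outer loop evolves a synthetic graph to minimize the loss obtained *after* the inner loop trains the model on that graph. The toy below uses a one-layer linear "GNN" and finite-difference derivatives as a stand-in for the paper's meta-gradients; the model, data, learning rates, and parameterization are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 4, 2, 2

# Private training data: node features and one-hot labels. Note the true
# adjacency is never used -- PGR trains on an evolving synthetic graph.
X = np.vstack([rng.normal([1, 0], 0.3, (2, d)),
               rng.normal([0, 1], 0.3, (2, d))])
Y = np.eye(c)[[0, 0, 1, 1]]

iu = np.triu_indices(n, k=1)  # one logit per candidate edge (upper triangle)

def graph_from(theta):
    """Synthetic graph: sigmoid edge weights, symmetrized, self-loops,
    row-normalized for mean-aggregation message passing."""
    A = np.zeros((n, n))
    A[iu] = 1 / (1 + np.exp(-theta))
    A = A + A.T + np.eye(n)
    return A / A.sum(1, keepdims=True)

def inner_train(S, steps=50, lr=0.5):
    """Inner level: fit a one-layer linear 'GNN' on the synthetic graph
    by gradient descent on MSE; returns the post-training loss."""
    W = np.zeros((d, c))
    SX = S @ X
    for _ in range(steps):
        W -= lr * 2 / (n * c) * SX.T @ (SX @ W - Y)
    return np.mean((SX @ W - Y) ** 2)

def meta_grad(theta, eps=1e-4):
    """Outer level: finite-difference stand-in for meta-gradients,
    differentiating the post-training loss w.r.t. the graph parameters."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        t1, t2 = theta.copy(), theta.copy()
        t1[i] += eps
        t2[i] -= eps
        g[i] = (inner_train(graph_from(t1)) - inner_train(graph_from(t2))) / (2 * eps)
    return g

theta = rng.normal(0, 0.1, len(iu[0]))
loss0 = inner_train(graph_from(theta))
for _ in range(20):            # outer loop: evolve the synthetic graph
    theta -= 1.0 * meta_grad(theta)
loss1 = inner_train(graph_from(theta))
```

The synthetic graph that emerges is optimized only for utility, so it need not (and generally will not) reproduce the private topology, which is the source of PGR's privacy benefit.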
🛡️ Threat Analysis
The adversary reconstructs the private training graph's topology using only black-box model queries — this is training data reconstruction from a trained model, the defining characteristic of ML03. The defense (PGR) is specifically designed to prevent this topology leakage, directly targeting the reconstruction attack threat model.
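The summary does not spell out the paper's concrete attack variants, but the threat model can be illustrated with a minimal similarity-based sketch: message passing tends to make connected nodes emit similar posteriors, so an attacker with only query access can threshold pairwise output similarity to guess edges. The toy "GNN", data, and threshold below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hidden training graph the attacker wants to recover: 4 nodes, 2 edges.
A_true = np.zeros((4, 4))
for u, v in [(0, 1), (2, 3)]:
    A_true[u, v] = A_true[v, u] = 1.0

# Node features: two noisy clusters.
X = np.vstack([rng.normal([1, 0], 0.3, (2, 2)),
               rng.normal([0, 1], 0.3, (2, 2))])

def black_box_gnn(features):
    """Stand-in for the deployed model: one round of mean aggregation over
    the private adjacency, then softmax. The attacker sees only outputs."""
    A = A_true + np.eye(4)
    H = (A / A.sum(1, keepdims=True)) @ features
    e = np.exp(H)
    return e / e.sum(1, keepdims=True)

# Attack: query the model, compare posteriors pairwise, and flag
# high-similarity pairs as edges (the 0.9 cutoff is a tunable assumption).
P = black_box_gnn(X)
Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
sim = Pn @ Pn.T
np.fill_diagonal(sim, 0)
pred_edges = {(i, j) for i in range(4) for j in range(i + 1, 4)
              if sim[i, j] > 0.9}
```

On this toy instance the thresholded similarities recover the hidden edge set exactly, which is the leakage channel PGR is designed to close.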