defense 2025

Adversarial Signed Graph Learning with Differential Privacy

Haobin Ke , Sen Zhang , Qingqing Ye , Xun Ran , Haibo Hu

0 citations · 43 references · arXiv


Published on arXiv

2512.00307

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

ASGL achieves favorable privacy-utility trade-offs on signed graph downstream tasks while satisfying node-level differential privacy, outperforming existing DP graph methods that suffer from cascading errors or high gradient sensitivity on signed graphs.

ASGL

Novel technique introduced


Signed graphs with positive and negative edges can model complex relationships in social networks. Leveraging balance theory, which deduces edge signs from multi-hop node pairs, signed graph learning can generate node embeddings that preserve both structural and sign information. However, training on sensitive signed graphs raises significant privacy concerns, as model parameters may leak private link information. Existing protection methods with differential privacy (DP) typically rely on edge or gradient perturbation for unsigned graph protection. Yet they are not well-suited for signed graphs, mainly because edge perturbation tends to cause cascading errors in edge sign inference under balance theory, while gradient perturbation suffers from increased sensitivity due to node interdependence and gradient polarity changes caused by sign flips, resulting in larger noise injection. In this paper, motivated by the robustness of adversarial learning to noisy interactions, we present ASGL, a privacy-preserving adversarial signed graph learning method that preserves high utility while achieving node-level DP. We first decompose signed graphs into positive and negative subgraphs based on edge signs, and then design a gradient-perturbed adversarial module to approximate the true signed connectivity distribution. In particular, the gradient perturbation helps mitigate cascading errors, while the subgraph separation facilitates sensitivity reduction. Further, we devise a constrained breadth-first search tree strategy that fuses with balance theory to identify the edge signs between generated node pairs. This strategy also enables gradient decoupling, thereby effectively lowering gradient sensitivity. Extensive experiments on real-world datasets show that ASGL achieves favorable privacy-utility trade-offs across multiple downstream tasks.
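The balance-theory inference the abstract mentions can be sketched concretely: the sign of a multi-hop node pair is the product of the edge signs along a connecting path ("the enemy of my enemy is my friend"). The snippet below is a minimal illustration of that rule, not the paper's constrained BFS-tree strategy; the adjacency format and function name are hypothetical.

```python
from collections import deque

def infer_sign(adj, src, dst):
    """Infer the sign between src and dst via balance theory:
    the sign of a path is the product of its edge signs.
    `adj` maps node -> list of (neighbor, sign), sign in {+1, -1}.
    Returns +1 or -1 for the first path found (BFS), None if unreachable."""
    queue = deque([(src, +1)])
    seen = {src}
    while queue:
        node, sign = queue.popleft()
        for nbr, s in adj.get(node, []):
            if nbr == dst:
                return sign * s
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, sign * s))
    return None

# Triangle: A-B positive, B-C negative => A-C is inferred negative.
adj = {"A": [("B", +1)], "B": [("A", +1), ("C", -1)], "C": [("B", -1)]}
```

This also makes the cascading-error problem visible: perturbing a single edge sign flips the inferred sign of every multi-hop pair whose path runs through it, which is why ASGL perturbs gradients rather than edges.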


Key Contributions

  • ASGL: a node-level differentially private training framework for signed graphs that avoids direct edge perturbation (which causes cascading errors under balance theory) by using a gradient-perturbed adversarial module
  • Signed graph decomposition into positive/negative subgraphs combined with a constrained BFS-tree strategy to reduce gradient sensitivity and enable gradient decoupling
  • Formal proof of node-level DP guarantees with empirical privacy-utility trade-off evaluation across multiple downstream tasks
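The "gradient-perturbed" part of the contributions follows the standard DP recipe: bound each node's gradient contribution by clipping, then add Gaussian noise calibrated to that bound. The sketch below shows the generic clip-and-noise mechanism under assumed per-node gradients; it is not ASGL's exact module, and the parameter names are illustrative.

```python
import numpy as np

def perturb_gradients(per_node_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Generic DP gradient perturbation (clip + Gaussian noise).
    Clipping each node's gradient to `clip_norm` bounds the sensitivity
    of the sum to any single node; noise is scaled to that bound."""
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_node_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_node_grads)
```

ASGL's subgraph separation and gradient decoupling aim precisely at shrinking the effective `clip_norm` needed, since lower sensitivity means proportionally less injected noise.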

🛡️ Threat Analysis

Membership Inference Attack

Link stealing attacks — which the paper explicitly defends against — are edge-level membership inference attacks: an adversary queries the trained model to determine whether a specific edge (and its sign) was present in the training graph. ASGL's DP training provides formal bounds on what an adversary can infer about individual links.
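The "formal bounds" that DP provides against such attacks can be stated quantitatively via the standard hypothesis-testing view of (ε, δ)-DP: any membership test with false-positive rate FPR has true-positive rate at most e^ε · FPR + δ. This is a textbook DP result, not a bound derived in the paper:

```python
import math

def mia_tpr_bound(fpr, epsilon, delta=0.0):
    """Upper bound on a membership-inference attacker's true-positive
    rate against an (epsilon, delta)-DP mechanism:
    TPR <= e^epsilon * FPR + delta (capped at 1)."""
    return min(1.0, math.exp(epsilon) * fpr + delta)
```

For example, at ε = 1 an attacker willing to tolerate a 10% false-positive rate can succeed on true member edges at most about 27% of the time, regardless of attack strategy.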


Details

Domains
graph
Model Types
gnn
Threat Tags
training_time, black_box
Datasets
real-world signed social network datasets (Bitcoin, Epinions, Slashdot implied by context)
Applications
signed graph learning, social network analysis, edge sign prediction, node clustering