
JANUS: A Dual-Constraint Generative Framework for Stealthy Node Injection Attacks

Jiahao Zhang, Xiaobing Pei, Zhaokun Zhong, Wenqiang Hao, Zhenghao Tang


Published on arXiv (arXiv:2509.13266)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

JANUS significantly outperforms existing node injection attack methods on multiple standard graph datasets in both attack effectiveness and stealthiness metrics.

JANUS (Joint Alignment of Nodal and Universal Structures)

Novel technique introduced


Graph Neural Networks (GNNs) have demonstrated remarkable performance across various applications, yet they are vulnerable to sophisticated adversarial attacks, particularly node injection attacks. The success of such attacks relies heavily on their stealthiness: the ability to blend in with the original graph and evade detection. However, existing methods often achieve stealthiness through indirect proxy metrics, neglect the fundamental characteristics of the injected content, or imitate only local structures, leading to the problem of local myopia. To overcome these limitations, we propose a dual-constraint stealthy node injection framework called Joint Alignment of Nodal and Universal Structures (JANUS). At the local level, we introduce a local feature manifold alignment strategy to achieve geometric consistency in the feature space. At the global level, we incorporate structured latent variables and maximize their mutual information with the generated structures, ensuring that the injected structures are consistent with the semantic patterns of the original graph. We model the injection attack as a sequential decision process optimized by a reinforcement learning agent. Experiments on multiple standard datasets demonstrate that JANUS significantly outperforms existing methods in both attack effectiveness and stealthiness.
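The global constraint maximizes mutual information between structured latent variables and the generated structures. The paper does not spell out its estimator here, but a common way to make such a term tractable is an InfoGAN-style variational lower bound, where an auxiliary network `Q` tries to recover the latent code from the generated output. A minimal sketch of that bound (the function name and the one-hot/logits interface are assumptions for illustration):

```python
import numpy as np

def mi_lower_bound(code_onehot, q_logits):
    """Variational (InfoGAN-style) lower bound on I(c; x):
    I(c; x) >= H(c) + E[log Q(c|x)].
    code_onehot: (n, k) one-hot latent codes fed to the generator.
    q_logits:    (n, k) auxiliary network's logits for recovering c.
    Illustrative only -- JANUS's exact MI estimator is in the paper."""
    # entropy of the empirical code distribution
    p = code_onehot.mean(axis=0)
    h_c = -np.sum(p * np.log(p + 1e-12))
    # numerically stable log-softmax of Q's logits
    logq = q_logits - q_logits.max(axis=1, keepdims=True)
    logq = logq - np.log(np.exp(logq).sum(axis=1, keepdims=True))
    # expected log-likelihood of the true code under Q
    ell = np.mean(np.sum(code_onehot * logq, axis=1))
    return h_c + ell
```

Maximizing this bound pushes the generator to produce structures from which the latent code remains recoverable, i.e. structures that carry the intended semantic pattern.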


Key Contributions

  • Local feature manifold alignment strategy ensuring geometric consistency between injected and original node features in the feature space
  • Global semantic consistency via structured latent variables and mutual information maximization to match graph-level structural patterns
  • RL-based sequential decision process for optimizing both node features and edge connections of injected nodes jointly
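The first contribution, local feature manifold alignment, penalizes injected features that sit off the manifold of original node features. The paper defines its own objective; a simple stand-in that captures the idea is a k-nearest-neighbor distance penalty in feature space (function name, `k`, and the mean-of-neighbors anchor are illustrative assumptions):

```python
import numpy as np

def manifold_alignment_loss(injected, original, k=5):
    """Toy local-alignment penalty: mean squared distance from each
    injected feature vector to the mean of its k nearest neighbors
    among the original node features.  Low values mean the injected
    features lie close to the local feature manifold.
    Illustrative sketch, not JANUS's actual alignment objective."""
    # pairwise distances: (n_injected, n_original)
    d = np.linalg.norm(injected[:, None, :] - original[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest originals
    anchors = original[idx].mean(axis=1)        # local manifold anchor
    return float(np.mean(np.sum((injected - anchors) ** 2, axis=1)))
```

An injected feature copied from a real node incurs zero penalty, while an outlier far from every original feature is heavily penalized, which is the geometric-consistency behavior the local constraint targets.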

🛡️ Threat Analysis

Input Manipulation Attack

Node injection attacks craft adversarial inputs (injected nodes with fabricated features and edges) to cause GNNs to misclassify target nodes. The paper's RL-based optimization of injected content to evade detection while maximizing attack success is fundamentally an inference-time input manipulation attack against graph models.
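The sequential-decision framing can be sketched as a loop that adds one injected node and then commits its edges one action at a time, each step chosen to maximize an attack score. Here a greedy search over a hypothetical surrogate `score_fn` stands in for the learned RL policy (all names and the edge-budget interface are assumptions for illustration):

```python
import numpy as np

def inject_node_sequential(adj, score_fn, budget):
    """Sketch of node injection as a sequential decision process:
    append one injected node to the graph, then pick its edges one
    step at a time, each step taking the connection that maximizes
    an attack score.  The greedy choice stands in for the RL agent's
    learned policy; `score_fn` is a hypothetical surrogate that
    scores a candidate adjacency matrix (e.g. target misclassification
    margin).  Illustrative only, not JANUS's trained agent."""
    n = adj.shape[0]
    new = np.zeros((n + 1, n + 1))
    new[:n, :n] = adj                           # injected node has index n
    for _ in range(budget):
        best, best_s = None, -np.inf
        for v in range(n):
            if new[n, v]:                       # edge already used
                continue
            cand = new.copy()
            cand[n, v] = cand[v, n] = 1         # try connecting to v
            s = score_fn(cand)
            if s > best_s:
                best, best_s = v, s
        new[n, best] = new[best, n] = 1         # commit the best edge
    return new
```

In the actual attack, the stealthiness constraints (feature alignment and MI terms) would enter the reward alongside the attack objective, so the agent trades off misclassification against detectability at every step.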


Details

Domains
graph
Model Types
gnn
Threat Tags
black_box, targeted, digital, inference_time
Datasets
Cora, Citeseer, Polblogs
Applications
node classification, graph-based ML systems