Leveraging Vulnerabilities in Temporal Graph Neural Networks via Strategic High-Impact Assaults
Dong Hyun Jeon 1, Lijing Zhu 2, Haifang Li 3, Pengze Li 3, Jingna Feng 3, Tiehang Duan 4, Houbing Herbert Song 5, Cui Tao 3, Shuteng Niu 3
Published on arXiv: 2509.25418
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
HIA achieves up to a 35.55% decrease in Mean Reciprocal Rank (MRR) on TGNN link prediction tasks, substantially outperforming state-of-the-art adversarial baselines.
High Impact Attack (HIA)
Novel technique introduced
Temporal Graph Neural Networks (TGNNs) have become indispensable for analyzing dynamic graphs in critical applications such as social networks, communication systems, and financial networks. However, the robustness of TGNNs against adversarial attacks, particularly sophisticated attacks that exploit the temporal dimension, remains a significant challenge. Existing attack methods for Spatio-Temporal Dynamic Graphs (STDGs) often rely on simplistic, easily detectable perturbations (e.g., random edge additions/deletions) and fail to strategically target the most influential nodes and edges for maximum impact. We introduce the High Impact Attack (HIA), a novel restricted black-box attack framework specifically designed to overcome these limitations and expose critical vulnerabilities in TGNNs. HIA leverages a data-driven surrogate model to identify structurally important nodes (central to network connectivity) and dynamically important nodes (critical for the graph's temporal evolution). It then employs a hybrid perturbation strategy, combining strategic edge injection (to create misleading connections) and targeted edge deletion (to disrupt essential pathways), maximizing TGNN performance degradation. Importantly, HIA minimizes the number of perturbations to enhance stealth, making it more challenging to detect. Comprehensive experiments on five real-world datasets and four representative TGNN architectures (TGN, JODIE, DySAT, and TGAT) demonstrate that HIA significantly reduces TGNN accuracy on the link prediction task, achieving up to a 35.55% decrease in Mean Reciprocal Rank (MRR), a substantial improvement over state-of-the-art baselines. These results highlight fundamental vulnerabilities in current STDG models and underscore the urgent need for robust defenses that account for both structural and temporal dynamics.
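The hybrid strategy described in the abstract can be illustrated with a minimal sketch. This is not the paper's surrogate model: the importance score below (a weighted blend of degree centrality and interaction recency), the `alpha` weight, and the perturbation `budget` are all illustrative assumptions standing in for HIA's learned, data-driven scoring.

```python
from collections import defaultdict

def node_importance(edges, alpha=0.5):
    """Toy importance score blending structural (degree) and temporal
    (recency) signals. `edges` is a list of (u, v, t) timestamped
    interactions; `alpha` is an assumed trade-off weight, not from the paper."""
    degree = defaultdict(int)
    last_seen = defaultdict(float)
    for u, v, t in edges:
        degree[u] += 1
        degree[v] += 1
        last_seen[u] = max(last_seen[u], t)
        last_seen[v] = max(last_seen[v], t)
    max_deg = max(degree.values())
    max_t = max(last_seen.values())
    return {n: alpha * degree[n] / max_deg + (1 - alpha) * last_seen[n] / max_t
            for n in degree}

def hybrid_perturb(edges, budget=2, alpha=0.5):
    """Hybrid perturbation sketch: delete the highest-importance existing
    edges and inject misleading edges between high-importance node pairs
    that are not yet connected. Returns (perturbed, deletions, injections)."""
    scores = node_importance(edges, alpha)
    # Targeted deletion: remove the top-`budget` edges by endpoint importance.
    deletions = sorted(edges, key=lambda e: scores[e[0]] + scores[e[1]],
                       reverse=True)[:budget]
    # Strategic injection: connect top-ranked nodes that share no edge yet.
    existing = {frozenset((u, v)) for u, v, _ in edges}
    top = sorted(scores, key=scores.get, reverse=True)
    t_max = max(t for _, _, t in edges)
    injections = []
    for i in range(len(top)):
        for j in range(i + 1, len(top)):
            if len(injections) == budget:
                break
            if frozenset((top[i], top[j])) not in existing:
                injections.append((top[i], top[j], t_max))
    perturbed = [e for e in edges if e not in deletions] + injections
    return perturbed, deletions, injections
```

The small, fixed `budget` mirrors HIA's stealth constraint: the attack degrades the model with as few perturbations as possible rather than rewiring the graph wholesale.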
Key Contributions
- Novel restricted black-box adversarial attack (HIA) for Temporal GNNs that exploits both structural importance (centrality) and temporal importance of nodes
- Hybrid perturbation strategy combining strategic edge injection and targeted edge deletion to maximize TGNN performance degradation while minimizing detectable perturbations
- Comprehensive evaluation across 5 real-world datasets and 4 TGNN architectures (TGN, JODIE, DySAT, TGAT) showing up to a 35.55% MRR decrease, outperforming SOTA baselines
🛡️ Threat Analysis
HIA crafts adversarial perturbations to the graph input structure (strategic edge injection and targeted edge deletion) to cause inference-time performance degradation on TGNN link prediction tasks — directly analogous to adversarial example attacks, but operating on graph topology rather than image pixel space. The attack targets model outputs at inference time, not training data corruption.
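Since the reported attack impact is a drop in Mean Reciprocal Rank, it helps to see how that metric is computed. The sketch below follows the standard link-prediction ranking protocol (score the true destination against sampled negative candidates); it is a generic evaluation helper, not code from the paper.

```python
def mean_reciprocal_rank(scored_queries):
    """Compute MRR for link prediction.

    scored_queries: list of (true_score, negative_scores) pairs, where
    true_score is the model's score for the ground-truth edge and
    negative_scores are scores for corrupted candidate edges.
    Rank 1 means the true edge outscored every negative."""
    total = 0.0
    for true_score, negative_scores in scored_queries:
        rank = 1 + sum(1 for s in negative_scores if s > true_score)
        total += 1.0 / rank
    return total / len(scored_queries)
```

An attack's impact is then the relative drop, `(clean_mrr - attacked_mrr) / clean_mrr`; the paper's headline figure corresponds to a decrease of up to 35.55% under HIA's perturbations.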