Unveiling the Vulnerability of Graph-LLMs: An Interpretable Multi-Dimensional Adversarial Attack on TAGs

Bowen Fan 1,2, Zhilin Guo 1, Xunkai Li 1, Yihan Zhou 1, Bing Zhou 3, Zhenjun Li 3, Rong-Hua Li 1, Guoren Wang 1

0 citations · 64 references · arXiv

Published on arXiv · 2510.12233

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

IMDGA demonstrates superior attack effectiveness, stealthiness, and interpretability over existing structure-only and text-only attack baselines on diverse TAG datasets and Graph-LLM architectures

IMDGA (Interpretable Multi-Dimensional Graph Attack)

Novel technique introduced


Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social network analysis to molecular chemistry. By integrating large language models (LLMs), text-attributed graphs (TAGs) enhance node representations with rich textual semantics, significantly boosting the expressive power of graph-based learning. However, this sophisticated synergy introduces critical vulnerabilities, as Graph-LLMs are susceptible to adversarial attacks on both their structural topology and textual attributes. Although specialized attack methods have been designed for each of these aspects, no work has yet unified them into a comprehensive approach. In this work, we propose the Interpretable Multi-Dimensional Graph Attack (IMDGA), a novel human-centric adversarial attack framework designed to orchestrate multi-level perturbations across both graph structure and textual features. IMDGA utilizes three tightly integrated modules to craft attacks that balance interpretability and impact, enabling a deeper understanding of Graph-LLM vulnerabilities. Through rigorous theoretical analysis and comprehensive empirical evaluations on diverse datasets and architectures, IMDGA demonstrates superior interpretability, attack effectiveness, stealthiness, and robustness compared to existing methods. By exposing critical weaknesses in TAG representation learning, this work uncovers a previously underexplored semantic dimension of vulnerability in Graph-LLMs, offering valuable insights for improving their resilience. Our code and resources are publicly available at https://anonymous.4open.science/r/IMDGA-7289.


Key Contributions

  • IMDGA: a unified three-module adversarial attack framework that coordinates perturbations across graph structure and textual node features of Graph-LLMs
  • Human-centric interpretable attack design that balances stealthiness, attack effectiveness, and human readability of adversarial text modifications
  • Theoretical analysis exposing a previously underexplored semantic vulnerability dimension in text-attributed graph representation learning

🛡️ Threat Analysis

Input Manipulation Attack

IMDGA crafts adversarial perturbations at inference time across both graph structural topology and textual node attributes of Graph-LLM systems to cause misclassification — a multi-dimensional evasion attack combining structural and semantic manipulation.
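The combined structural-plus-textual evasion described above can be sketched as a greedy search over a joint perturbation budget. The code below is a minimal, dependency-free illustration in this spirit, not the paper's actual three-module pipeline: the scoring function, candidate sets, and greedy selection rule are all hypothetical stand-ins.

```python
# Hedged sketch of a greedy multi-dimensional evasion attack in the spirit of
# IMDGA: at inference time, combine structural edits (edge flips) and textual
# edits (word substitutions) on a target node, repeatedly applying whichever
# single perturbation most lowers the model's confidence in the true class.
# `score`, the candidate sets, and the budget semantics are toy assumptions.

def attack(score, edges, words, edge_cands, word_cands, budget=3):
    """Greedy attack under a joint structure/text perturbation budget.

    score(edges, words) -> confidence in the true class (lower = attack wins).
    edge_cands: candidate edges to flip (added if absent, removed if present).
    word_cands: (position, replacement) candidate word substitutions.
    """
    edges, words = set(edges), list(words)
    for _ in range(budget):
        base = score(edges, words)
        best, best_drop = None, 0.0
        # Structural candidates: try flipping each candidate edge.
        for e in edge_cands:
            trial = edges ^ {e}  # symmetric difference = flip one edge
            drop = base - score(trial, words)
            if drop > best_drop:
                best, best_drop = ("edge", e), drop
        # Textual candidates: try each single-word substitution.
        for pos, rep in word_cands:
            trial = words[:pos] + [rep] + words[pos + 1:]
            drop = base - score(edges, trial)
            if drop > best_drop:
                best, best_drop = ("word", (pos, rep)), drop
        if best is None:  # no remaining perturbation lowers confidence
            break
        kind, p = best
        if kind == "edge":
            edges ^= {p}
        else:
            words[p[0]] = p[1]
    return edges, words


# Toy surrogate: confidence rises with homophilous edges and class keywords.
def toy_score(edges, words):
    return 0.4 * len(edges & {(0, 1), (0, 2)}) + 0.2 * words.count("science")
```

A real attacker would replace `toy_score` with a (white-box) surrogate of the Graph-LLM and add stealthiness constraints, e.g. limiting degree changes and requiring fluent substitute words, which is where the interpretability and stealth trade-offs discussed above enter.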


Details

Domains
graph · nlp
Model Types
gnn · llm · transformer
Threat Tags
white_box · inference_time · targeted · digital
Applications
graph node classification · text-attributed graph learning · social network analysis