
Adversarial Attacks and Defenses on Graph-aware Large Language Models (LLMs)

Iyiola E. Olatunji 1, Franziska Boenisch 2, Jing Xu 2, Adam Dziedzic 2


Published on arXiv: 2508.04894

Input Manipulation Attack (OWASP ML Top 10 — ML01)
Data Poisoning Attack (OWASP ML Top 10 — ML02)
Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

LLAGA's node sequence template design increases attack vulnerability while GRAPHPROMPTER's GNN encoder provides greater robustness; both remain susceptible to imperceptible feature perturbation attacks

GALGUARD

Novel defense technique introduced


Large Language Models (LLMs) are increasingly integrated with graph-structured data for tasks like node classification, a domain traditionally dominated by Graph Neural Networks (GNNs). While this integration leverages rich relational information to improve task performance, the robustness of these graph-aware LLMs against adversarial attacks remains unexplored. We take the first step toward exploring the vulnerabilities of graph-aware LLMs by leveraging existing adversarial attack methods tailored for graph-based models, including poisoning (training-time) and evasion (test-time) attacks, on two representative models, LLAGA (Chen et al. 2024) and GRAPHPROMPTER (Liu et al. 2024). Additionally, we discover a new attack surface in LLAGA: an attacker can inject malicious nodes as placeholders into the node sequence template to severely degrade its performance. Our systematic analysis reveals that certain design choices in graph encoding can amplify attack success, with specific findings that: (1) the node sequence template in LLAGA increases its vulnerability; (2) the GNN encoder used in GRAPHPROMPTER demonstrates greater robustness; and (3) both approaches remain susceptible to imperceptible feature perturbation attacks. Finally, we propose GALGUARD, an end-to-end defense framework that combines an LLM-based feature correction module, which mitigates feature-level perturbations, with adapted GNN defenses that protect against structural attacks.
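The placeholder-injection attack surface described above can be sketched in miniature. LLAGA serializes a target node's neighborhood into a fixed-length node sequence, so unused slots are filled with a padding token that an attacker-controlled node could occupy instead. The template format, token names, and both helper functions below are illustrative assumptions, not LLAGA's actual implementation.

```python
# Toy illustration of the placeholder-injection idea: a fixed-length node
# sequence pads empty slots, and an attacker registers malicious nodes so
# they are serialized into those slots. All names here are assumptions.

PAD = "[pad]"

def build_node_sequence(target, neighbors, seq_len=8):
    """Serialize target + neighbors into a fixed-length template,
    filling unused slots with a placeholder token."""
    seq = [target] + neighbors[: seq_len - 1]
    seq += [PAD] * (seq_len - len(seq))
    return seq

def inject_malicious_nodes(neighbors, malicious, budget=2):
    """Attacker appends up to `budget` malicious node ids so they get
    serialized ahead of benign padding."""
    return neighbors + malicious[:budget]

benign = build_node_sequence("v0", ["v1", "v2"])
attacked = build_node_sequence(
    "v0", inject_malicious_nodes(["v1", "v2"], ["adv1", "adv2"])
)
print(benign)    # padding slots remain [pad]
print(attacked)  # malicious nodes now occupy former padding slots
```

The sketch only shows why a fixed-length template creates room for injection; the paper's actual attack optimizes which nodes to inject so as to degrade classification.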


Key Contributions

  • First systematic study of adversarial vulnerabilities in graph-aware LLMs (LLAGA and GRAPHPROMPTER) under both poisoning and evasion attack settings
  • Discovery of a novel attack surface specific to LLAGA: injecting malicious nodes as placeholders into the node sequence template to degrade performance
  • GALGUARD: an end-to-end defense framework combining LLM-based feature correction and adapted GNN defenses against feature-level and structural graph attacks

🛡️ Threat Analysis

Input Manipulation Attack

Studies evasion (test-time) adversarial attacks on graph-aware LLMs, including imperceptible feature perturbations and structural attacks; GALGUARD's LLM-based feature correction module defends against these inference-time perturbations.
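An imperceptible feature perturbation of the kind studied here can be sketched with a one-step gradient-sign attack (in the spirit of FGSM) on a toy linear node scorer. The model, epsilon, and labels are illustrative assumptions; a real attack would perturb node features fed into the graph encoder.

```python
# Minimal sketch of an L-infinity-bounded feature perturbation on a toy
# linear node classifier: one gradient-sign step on a logistic loss.
import numpy as np

def fgsm_perturb(x, w, y, eps=0.05):
    """One gradient-sign step against true label y in {-1, +1},
    with each feature moved by at most eps (L-inf bound)."""
    margin = y * (w @ x)
    grad = -y * w / (1.0 + np.exp(margin))  # d/dx of log(1 + exp(-y * w.x))
    return x + eps * np.sign(grad), eps

w = np.array([1.0, -2.0, 0.5])            # toy linear scorer
x = np.array([0.2, -0.1, 0.4])            # clean node features
x_adv, eps = fgsm_perturb(x, w, y=+1)
print(np.max(np.abs(x_adv - x)))          # stays within the eps budget
print(w @ x, w @ x_adv)                   # score drops toward misclassification
```

The point of the sketch is the budget: each feature moves by at most `eps`, so the perturbation is small per coordinate, yet the classification score still degrades.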

Data Poisoning Attack

Studies poisoning (training-time) attacks on graph-aware LLMs adapted from graph adversarial methods; GALGUARD's adapted GNN defenses protect against these training-time corruptions.
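A structural poisoning attack of this flavor can be sketched as follows: before the victim model is trained, the attacker spends a small edge budget connecting a target node to nodes of a different class, which is exactly the kind of corruption adapted GNN defenses try to filter. The graph, labels, budget, and helper name are illustrative assumptions, not the paper's method.

```python
# Rough sketch of training-time structural poisoning: inject a small budget
# of edges linking the target node to differently-labeled nodes before the
# victim model trains on the graph. All parameters are illustrative.
import numpy as np

def poison_edges(adj, target, labels, budget=2):
    """Connect `target` to up to `budget` nodes with a different label,
    returning the poisoned (symmetric) adjacency matrix."""
    adj = adj.copy()
    hetero = [v for v in range(len(labels))
              if labels[v] != labels[target]
              and adj[target, v] == 0 and v != target]
    for v in hetero[:budget]:
        adj[target, v] = adj[v, target] = 1
    return adj

labels = np.array([0, 0, 1, 1, 1])
adj = np.eye(5, k=1, dtype=int)       # path graph 0-1-2-3-4 ...
adj = adj + adj.T                     # ... made symmetric
poisoned = poison_edges(adj, target=0, labels=labels)
print(int(poisoned.sum() - adj.sum()) // 2)  # edges injected within budget
```

Heterophilous edge injection is a common poisoning primitive because message passing then mixes the target's representation with wrong-class neighbors during training.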


Details

Domains
graph, nlp
Model Types
llm, gnn, transformer
Threat Tags
white_box, training_time, inference_time, digital, targeted, untargeted
Applications
node classification, graph-based reasoning