
SEA: Spectral Edge Attack on Graph Neural Networks

Yongyu Wang

0 citations · 18 references · arXiv


Published on arXiv

2512.08964

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

SEA degrades GNN performance by adjusting the weights of a small subset of spectrally vulnerable edges, making no topological changes and thereby evading defenses that detect structural perturbations, such as TSVD and Pro-GNN.

SEA (Spectral Edge Attack)

Novel technique introduced


Graph neural networks (GNNs) have been widely applied in a variety of domains. However, the very ability of graphs to represent complex data structures is both the key strength of GNNs and a major source of their vulnerability. Recent studies have shown that attacking GNNs by maliciously perturbing the underlying graph can severely degrade their performance. For attack methods, the central challenge is to maintain attack effectiveness while remaining difficult to detect. Most existing attacks require modifying the graph structure, such as adding or deleting edges, which is relatively easy to notice. To address this problem, this paper proposes a new attack model that employs spectral adversarial robustness evaluation to quantitatively analyze the vulnerability of each edge in a graph. By precisely targeting the weakest links, the method achieves effective attacks without changing the connectivity pattern of the graph: it subtly adjusts the weights of a small subset of the most vulnerable edges. We apply the proposed method to attack several classical graph neural network architectures, and experimental results show that the attack is highly effective.


Key Contributions

  • First algorithm to use spectral adversarial robustness evaluation to quantitatively rank edge vulnerability and select attack targets in a GNN graph.
  • Weight-only perturbation attack that achieves effective misclassification without adding, deleting, or rewiring edges, making it significantly harder to detect than topology-altering attacks.
  • Demonstrated attack effectiveness against classical GNN architectures (GCN, etc.) on real-world graph datasets.

🛡️ Threat Analysis

Input Manipulation Attack

Crafts adversarial perturbations on the graph input (edge weight adjustments) to cause misclassification/performance degradation in GNNs at inference time — a classic evasion/input manipulation attack, adapted for graph-structured inputs with a spectral vulnerability scoring method to select which edges to perturb.
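The attack pipeline described above can be sketched in two steps: score every edge by a spectral vulnerability measure, then rescale the weights of the highest-scoring edges while leaving the set of edges untouched. The snippet below is a minimal illustration of that idea, assuming a simple heuristic (distance between edge endpoints in a low-dimensional spectral embedding of the normalized Laplacian) as the vulnerability score; the paper's actual spectral adversarial robustness evaluation may differ, and the function names and parameters here are hypothetical.

```python
import numpy as np

def spectral_edge_scores(adj, k=4):
    """Score each edge by the distance between its endpoints in a k-dim
    spectral embedding of the normalized Laplacian. Hypothetical heuristic
    standing in for the paper's spectral robustness evaluation."""
    n = len(adj)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(lap)          # ascending eigenvalues
    emb = vecs[:, 1:k + 1]                 # skip the trivial eigenvector
    scores = {}
    for i, j in zip(*np.triu_indices(n, k=1)):
        if adj[i, j] > 0:
            scores[(int(i), int(j))] = float(np.linalg.norm(emb[i] - emb[j]))
    return scores

def perturb_top_edges(adj, scores, budget=2, delta=0.5):
    """Rescale the weights of the `budget` highest-scoring edges.
    The nonzero pattern (topology) of the adjacency is unchanged."""
    attacked = adj.astype(float).copy()
    for i, j in sorted(scores, key=scores.get, reverse=True)[:budget]:
        attacked[i, j] *= 1.0 + delta
        attacked[j, i] *= 1.0 + delta
    return attacked
```

Because only edge weights change, structure-based anomaly detectors that compare the nonzero pattern of the clean and perturbed adjacency matrices see no difference, which is the evasion property the paper emphasizes.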


Details

Domains
graph
Model Types
gnn
Threat Tags
white_box · inference_time · untargeted · digital
Datasets
Cora · Citeseer · PubMed
Applications
node classification · social network analysis · citation network classification