defense 2025

Fixed-point graph convolutional networks against adversarial attacks

Shakib Khan, A. Ben Hamza, Amr Youssef

0 citations · 42 references · Neural computing & application...


Published on arXiv · 2511.00083

Input Manipulation Attack

OWASP ML Top 10 — ML01

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Fix-GCN outperforms competitive baselines across various benchmark graph datasets against both targeted (Nettack) and non-targeted (Mettack) adversarial attacks on graph structure.

Fix-GCN

Novel technique introduced


Adversarial attacks present a significant risk to the integrity and performance of graph neural networks, particularly in tasks where graph structure and node features are vulnerable to manipulation. In this paper, we present a novel model, called fixed-point iterative graph convolutional network (Fix-GCN), which achieves robustness against adversarial perturbations by effectively capturing higher-order node neighborhood information in the graph without additional memory or computational complexity. Specifically, we introduce a versatile spectral modulation filter and derive the feature propagation rule of our model using fixed-point iteration. Unlike traditional defense mechanisms that rely on additional design elements to counteract attacks, the proposed graph filter provides a flexible-pass filtering approach, allowing it to selectively attenuate high-frequency components while preserving low-frequency structural information in the graph signal. By iteratively updating node representations, our model offers a flexible and efficient framework for preserving essential graph information while mitigating the impact of adversarial manipulation. We demonstrate the effectiveness of the proposed model through extensive experiments on various benchmark graph datasets, showcasing its resilience against adversarial attacks.
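The paper derives its feature propagation rule via fixed-point iteration, which aggregates higher-order neighborhood information without stacking layers. The exact Fix-GCN update is not reproduced in this summary; the sketch below uses a generic damped fixed-point update in the personalized-PageRank style (the damping factor `alpha`, the update form `H ← (1−α)ÂH + αX`, and the toy graph are assumptions for illustration, not the paper's rule):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def fixed_point_propagate(A, X, alpha=0.1, tol=1e-6, max_iter=500):
    """Iterate H <- (1 - alpha) * A_norm @ H + alpha * X until convergence.

    Because the spectral radius of (1 - alpha) * A_norm is below 1, the map is
    a contraction and converges to H* = alpha * (I - (1 - alpha) A_norm)^-1 X,
    which mixes in information from arbitrarily distant neighbors while using
    only one propagation matrix (no extra per-layer parameters or memory).
    """
    A_norm = normalized_adjacency(A)
    H = X.copy()
    for _ in range(max_iter):
        H_new = (1 - alpha) * A_norm @ H + alpha * X
        if np.linalg.norm(H_new - H) < tol:
            return H_new
        H = H_new
    return H

# Toy 4-node path graph with 2-dimensional node features (hypothetical example)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=float)
H = fixed_point_propagate(A, X)
```

The fixed point admits the closed form `H* = α(I − (1−α)Â)⁻¹X`, so the iteration trades a dense matrix inverse for cheap sparse matrix-vector products, which is what keeps the memory and compute cost flat as the effective receptive field grows.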


Key Contributions

  • A spectral modulation filter that selectively attenuates high-frequency adversarial perturbations while preserving low-frequency graph structural information
  • A fixed-point iterative aggregation mechanism that captures higher-order neighborhood information without additional memory or computational complexity
  • Empirical demonstration of robustness against diverse adversarial attacks (Nettack, Mettack) across benchmark graph datasets, outperforming competitive baselines

🛡️ Threat Analysis

Input Manipulation Attack

The paper defends against evasion attacks (test-time adversarial perturbations to graph structure/node features) that cause GNN misclassification — directly targeting the input manipulation threat. The spectral filter attenuates adversarially injected high-frequency perturbations.
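The defensive intuition is standard graph signal processing: adversarial edge edits tend to inject energy into high-frequency Laplacian components, so attenuating those components while preserving low frequencies suppresses the perturbation. The paper's specific "flexible-pass" modulation filter is not given here; this sketch uses a generic low-pass response `h(λ) = 1/(1 + βλ)` on the normalized Laplacian (the filter form, `β`, and the toy graph are illustrative assumptions):

```python
import numpy as np

def low_pass_filter(A, x, beta=1.0):
    """Attenuate high-frequency graph-signal components via h(lam) = 1/(1 + beta*lam).

    Eigenvectors of the normalized Laplacian with small eigenvalues encode
    smooth (low-frequency) structure; large eigenvalues encode oscillatory
    (high-frequency) components, where structural perturbations concentrate.
    """
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    lam, U = np.linalg.eigh(L)                            # graph Fourier basis
    h = 1.0 / (1.0 + beta * lam)                          # low-pass response
    return U @ (h * (U.T @ x))                            # filter in spectral domain

# Hypothetical 4-node graph and signal
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.ones(4)
y = low_pass_filter(A, x)
```

Since `h(0) = 1`, the zero-frequency (smoothest) component passes through unchanged, while every higher-frequency component is scaled by a factor strictly below 1.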

Data Poisoning Attack

The paper also defends against training-time poisoning attacks (Mettack for non-targeted, Nettack for targeted) that corrupt graph training data to degrade model performance — a data poisoning threat on GNNs.


Details

Domains
graph
Model Types
gnn
Threat Tags
white_box · training_time · inference_time · targeted · untargeted · digital
Applications
node classification · graph neural networks