defense 2025

Enhancing Robustness of Graph Neural Networks through p-Laplacian

Anuj Kumar Sirohi , Subhanu Halder , Kabir Kumar , Sandeep Kumar

0 citations · 20 references · arXiv


Published on arXiv · 2511.06143

Threat tags

  • Input Manipulation Attack (OWASP ML Top 10 — ML01)
  • Data Poisoning Attack (OWASP ML Top 10 — ML02)

Key Finding

pLAPGNN achieves competitive robustness against both poisoning and evasion attacks while being computationally more efficient than existing methods, particularly at high attack intensities.

pLAPGNN

Novel technique introduced


With the growth of data in day-to-day life, businesses and other stakeholders need to analyze it for better predictions. Traditionally, relational data has been a source of various insights, but with increased computational power and the need to understand deeper relationships between entities, new techniques have become necessary. Graph data analysis has therefore become an extraordinary tool for understanding data, enabling more realistic and flexible modelling of complex relationships. Recently, Graph Neural Networks (GNNs) have shown great promise in applications such as social network analysis, recommendation systems, and drug discovery. However, adversarial attacks can target the data either during training (poisoning attacks) or during testing (evasion attacks), adversely manipulating the outcome of the GNN model. It is therefore crucial to make GNNs robust to such attacks. Existing robustness methods are computationally demanding and perform poorly as attack intensity increases. This paper presents a computationally efficient framework, pLAPGNN, based on the weighted p-Laplacian, for making GNNs robust. Empirical evaluation on real datasets establishes the efficacy and efficiency of the proposed method.
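For reference, the standard weighted graph p-Laplacian that this line of work builds on (the paper's exact variant may differ) acts on a node function f as:

```latex
(\Delta_p f)(i) \;=\; \sum_{j \in \mathcal{N}(i)} w_{ij}\,
\lvert f(i) - f(j)\rvert^{\,p-2}\,\bigl(f(i) - f(j)\bigr)
```

Setting p = 2 recovers the ordinary graph Laplacian; choosing p < 2 penalizes large edge-wise differences sub-quadratically, which is what makes p-Laplacian smoothing less sensitive to adversarially inserted edges that connect dissimilar nodes.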


Key Contributions

  • pLAPGNN: a computationally efficient robustness framework for GNNs based on weighted p-Laplacian graph smoothing
  • Addresses both poisoning (training-time) and evasion (inference-time) adversarial attacks on graph-structured data
  • Empirically demonstrates better efficiency and sustained performance at higher attack intensities than existing robustness methods
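The paper's exact algorithm is not reproduced here; the sketch below only illustrates the general idea of weighted p-Laplacian graph smoothing, assuming a dense adjacency matrix and a simple damped iteration (the function name, parameters, and update scheme are illustrative choices, not the authors' implementation):

```python
import numpy as np

def p_laplacian_smooth(A, X, p=1.5, iters=10, lam=0.5, eps=1e-6):
    """Illustrative p-Laplacian smoothing of node features.

    Edges are iteratively reweighted by |h_i - h_j|^(p-2); for p < 2
    this down-weights edges joining dissimilar nodes (a common
    signature of adversarial edges), then features are averaged over
    the reweighted graph.

    A: (n, n) symmetric 0/1 adjacency matrix.
    X: (n, d) node feature matrix.
    """
    H = X.astype(float).copy()
    for _ in range(iters):
        # Pairwise feature distances |h_i - h_j|, shape (n, n)
        diff = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
        # p-Laplacian edge weights, masked by the adjacency
        W = A * np.power(diff + eps, p - 2)
        # Row-normalize and take a damped smoothing step that keeps
        # a (1 - lam) anchor on the original features
        D = W.sum(axis=1, keepdims=True) + eps
        H = (1 - lam) * X + lam * (W / D) @ H
    return H
```

The (1 − lam) anchor term keeps the smoothed features from collapsing to a constant, mirroring the usual trade-off in graph-smoothing defenses between denoising perturbed edges and preserving the clean signal.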

🛡️ Threat Analysis

Input Manipulation Attack

Explicitly defends against evasion attacks — adversarial graph structure/feature perturbations applied at inference time to manipulate GNN predictions.

Data Poisoning Attack

Explicitly defends against poisoning attacks — adversarial graph perturbations applied at training time to corrupt GNN learning.


Details

Domains
graph
Model Types
gnn
Threat Tags
training_time, inference_time, digital
Datasets
Cora, Citeseer, Polblogs
Applications
node classification, social network analysis, recommendation systems, drug discovery