defense 2025

CryptGNN: Enabling Secure Inference for Graph Neural Networks

Pritam Sen 1, Yao Ma 2, Cristian Borcea 1

0 citations


Published on arXiv: 2509.09107

Model Theft

OWASP ML Top 10 — ML05

Key Finding

CryptGNN provides provable security against up to P-1 colluding cloud parties while maintaining practical inference efficiency on GNN models.

CryptGNN

Novel technique introduced


We present CryptGNN, a secure and effective inference solution for third-party graph neural network (GNN) models in the cloud, which are accessed by clients as ML as a service (MLaaS). The main novelty of CryptGNN is its secure message passing and feature transformation layers using distributed secure multi-party computation (SMPC) techniques. CryptGNN protects the client's input data and graph structure from the cloud provider and the third-party model owner, and it protects the model parameters from the cloud provider and the clients. CryptGNN works with any number of SMPC parties, does not require a trusted server, and is provably secure even if P-1 out of P parties in the cloud collude. Theoretical analysis and empirical experiments demonstrate the security and efficiency of CryptGNN.
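The collusion guarantee in the abstract rests on additive secret sharing: a value split into P shares is uniformly random to any coalition holding fewer than all P of them. The sketch below illustrates that primitive only; the ring modulus and function names are illustrative assumptions, not CryptGNN's actual protocol.

```python
import secrets

MOD = 2**32  # ring Z_{2^32}; the exact modulus is an assumption


def share(x, p):
    """Split x into p additive shares mod 2^32.

    Any p-1 shares are independent uniform values, so even P-1
    colluding parties learn nothing about x.
    """
    shares = [secrets.randbelow(MOD) for _ in range(p - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares


def reconstruct(shares):
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % MOD


assert reconstruct(share(42, 3)) == 42
```

Because sharing is linear, parties can add shared values locally without communicating, which is what makes the aggregation steps of GNN inference cheap under this scheme.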


Key Contributions

  • Secure SMPC-based message passing and feature transformation layers for GNNs that leak no information to any coalition of up to P-1 parties
  • Provable security guarantee tolerating P-1 colluding parties out of P cloud parties without requiring a trusted server
  • Dual protection: shields client graph data/structure from cloud/model-owner and shields model parameters from clients
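To make the first contribution concrete: sum-style message passing is linear, so each cloud party can aggregate neighbor features on its own shares and the per-party results remain valid shares of the true aggregate. The sketch below is a minimal illustration under that linearity observation; for clarity it lets each party see the edge list, whereas CryptGNN's actual protocol also hides the graph structure from the cloud.

```python
import secrets

MOD = 2**32  # ring Z_{2^32}; the modulus is an illustrative assumption


def share(x, p):
    """Additively share x among p parties."""
    shares = [secrets.randbelow(MOD) for _ in range(p - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares


def reconstruct(shares):
    return sum(shares) % MOD


def local_aggregate(feat_share, edges, n):
    """One party's local sum-aggregation over its feature shares.

    Aggregation is linear, so each party runs this independently; the
    per-party outputs are shares of the aggregated features, and no
    plaintext feature is ever reconstructed during the step.
    """
    out = [0] * n
    for src, dst in edges:
        out[dst] = (out[dst] + feat_share[src]) % MOD
    return out


# Demo: 3 nodes, edges 0->2 and 1->2, scalar features [5, 7, 0], 3 parties.
feats = [5, 7, 0]
edges = [(0, 2), (1, 2)]
node_shares = [share(f, 3) for f in feats]              # [node][party]
per_party = [[node_shares[v][p] for v in range(3)] for p in range(3)]
agg = [local_aggregate(s, edges, 3) for s in per_party]
assert reconstruct([agg[p][2] for p in range(3)]) == 12  # 5 + 7
```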

🛡️ Threat Analysis

Model Theft

A primary security goal of CryptGNN is protecting GNN model parameters from adversarial clients who might extract intellectual property through repeated inference queries in MLaaS. It serves as a direct model theft defense: SMPC ensures that clients never observe plaintext model weights.
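A standard way to run a feature transformation without exposing weights is to keep both weights and inputs secret-shared and multiply shares with a Beaver triple. The sketch below uses a dealer-generated triple for brevity, which is a simplification: CryptGNN explicitly avoids any trusted server, and this is not the paper's protocol, only the generic building block.

```python
import secrets

MOD = 2**32  # ring choice is an assumption


def share2(x):
    """Two-party additive sharing of x mod 2^32."""
    r = secrets.randbelow(MOD)
    return [r, (x - r) % MOD]


def beaver_mul(x_sh, y_sh):
    """Multiply two secret-shared values using a Beaver triple.

    The triple (a, b, c = a*b) is produced by a dealer here; in a
    dealer-free setting (as CryptGNN requires) it would be generated
    interactively among the parties.
    """
    a = secrets.randbelow(MOD)
    b = secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share2(a), share2(b), share2((a * b) % MOD)
    # Parties open only the masked values d = x - a and e = y - b,
    # which are uniform and reveal nothing about x or y.
    d = (sum(x_sh) - a) % MOD
    e = (sum(y_sh) - b) % MOD
    z_sh = []
    for i in range(2):
        zi = (d * b_sh[i] + e * a_sh[i] + c_sh[i]) % MOD
        if i == 0:  # the public constant term d*e is added by one party only
            zi = (zi + d * e) % MOD
        z_sh.append(zi)
    return z_sh  # valid shares of x * y mod 2^32


# Model weight w and client input x stay shared end to end; neither the
# client nor any single party ever holds w in plaintext.
w_sh, x_sh = share2(6), share2(7)
assert sum(beaver_mul(w_sh, x_sh)) % MOD == 42
```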


Details

Domains
graph
Model Types
gnn
Threat Tags
white_box, inference_time
Applications
graph neural network inference, ml as a service (mlaas)