Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing
Chia-Hsuan Lu, Tony Tan, Michael Benedikt
Published on arXiv: 2510.18591
Input Manipulation Attack
OWASP ML Top 10: ML01
Key Finding
Replacing MIP solvers with polynomial-time partial satisfiability solvers improves on the state of the art in structural robustness verification for GNNs, across diverse architectures and datasets.
RobLight
Novel technique introduced
Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of adversarial attacks, where an adversary can change the output with a small perturbation of the input. Techniques for solving the adversarial robustness problem, i.e., determining whether such an attack exists, were originally developed for image classification, but there are variants for many other machine learning architectures. In the case of graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and the state-of-the-art techniques in the area proceed via reduction to constraint solving, working on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing the use of powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
Key Contributions
- Proposes replacing powerful MIP solvers with efficient polynomial-time partial satisfiability solvers for GNN structural robustness verification (see the sketch after this list)
- Builds RobLight, a practical robustness verification tool that improves on state-of-the-art for structural GNN robustness
- Evaluates RobLight across a diverse set of GNN variants and graph datasets
🛡️ Threat Analysis
The paper directly addresses adversarial input manipulation of GNNs, specifically structural perturbations that change model outputs. Robustness verification determines whether such evasion attacks exist, placing this work squarely in the adversarial examples / input manipulation threat space.