Lyapunov Stable Graph Neural Flow
Haoyu Chu, Xiaotong Chen, Wei Zhou, Wenjun Cui, Kai Zhao, Shikui Wei, Qiyu Kang
Published on arXiv (arXiv:2603.12557)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Substantially outperforms base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios
Graph Neural Networks (GNNs) are highly vulnerable to adversarial perturbations in both topology and features, making the learning of robust representations a critical challenge. In this work, we bridge GNNs with control theory to introduce a novel defense framework grounded in integer- and fractional-order Lyapunov stability. Unlike conventional strategies that rely on resource-heavy adversarial training or data purification, our approach fundamentally constrains the underlying feature-update dynamics of the GNN. We propose an adaptive, learnable Lyapunov function paired with a novel projection mechanism that maps the network's state into a stable space, thereby offering theoretically provable stability guarantees. Notably, this mechanism is orthogonal to existing defenses, allowing for seamless integration with techniques like adversarial training to achieve cumulative robustness. Extensive experiments demonstrate that our Lyapunov-stable graph neural flows substantially outperform base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios.
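The projection idea described in the abstract can be illustrated with a minimal sketch: given a candidate feature update f(x) and a Lyapunov function V, remove the component of f that would let V grow, so that the decrease condition ∇V(x)·f(x) ≤ −αV(x) holds along the flow. This construction, the function names, and the rate α are illustrative assumptions in the spirit of Lyapunov-stable neural dynamics, not the paper's exact mechanism.

```python
import numpy as np

def lyapunov_project(f, grad_V, V, alpha=0.1):
    """Project a candidate update f into the stable set where
    grad_V . f + alpha * V <= 0, i.e. V decreases along the flow.
    Illustrative sketch only; the paper's learnable Lyapunov
    function and projection are more elaborate."""
    violation = grad_V @ f + alpha * V
    if violation > 0:
        # Subtract the offending component along grad_V.
        f = f - violation / (grad_V @ grad_V + 1e-12) * grad_V
    return f

# Toy example with V(x) = ||x||^2 / 2, so grad_V(x) = x.
x = np.array([1.0, -2.0])
V = 0.5 * x @ x
f_unstable = x.copy()  # an update that pushes the state away from 0
f_stable = lyapunov_project(f_unstable, x, V)
# The projected update now satisfies the decrease condition.
assert x @ f_stable + 0.1 * V <= 1e-9
```

After projection, any trajectory integrated with the corrected update can only shrink V, which is what bounds the effect of input perturbations on the flow's end state.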
Key Contributions
- Novel defense framework grounded in integer- and fractional-order Lyapunov stability theory for GNNs
- Adaptive, learnable Lyapunov function paired with a projection mechanism that provides provable stability guarantees
- Orthogonal defense mechanism that integrates with existing techniques like adversarial training for cumulative robustness
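A Lyapunov function must be positive definite and vanish at the equilibrium, and a "learnable" one must keep those properties for any parameter values. One simple way to guarantee this is the quadratic form V(x) = ||Wx||² + ε||x||²; the class below is a hypothetical stand-in for the paper's adaptive Lyapunov function, shown only to make the positive-definiteness constraint concrete.

```python
import numpy as np

class LearnableLyapunov:
    """Toy learnable Lyapunov candidate V(x) = ||Wx||^2 + eps*||x||^2.

    Positive definite by construction for any weights W (the eps term
    rules out a zero V on W's null space), so every parameter setting
    yields a valid Lyapunov candidate. Hypothetical illustration, not
    the paper's parameterization."""

    def __init__(self, dim, eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, dim))  # trainable in practice
        self.eps = eps

    def value(self, x):
        return float(np.sum((self.W @ x) ** 2) + self.eps * (x @ x))

    def grad(self, x):
        return 2.0 * self.W.T @ (self.W @ x) + 2.0 * self.eps * x

V = LearnableLyapunov(dim=3)
x = np.array([0.5, -1.0, 2.0])
assert V.value(x) > 0.0               # positive away from the origin
assert V.value(np.zeros(3)) == 0.0    # V(0) = 0, as required
```

Constraining the architecture this way lets the Lyapunov function be trained jointly with the flow while the stability guarantee survives every gradient step.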
🛡️ Threat Analysis
The paper explicitly addresses adversarial perturbations to GNN inputs (both graph topology and node features) that cause misclassification at inference time. The defense constrains the feature-update dynamics to remain stable under such perturbations, and is evaluated across a range of adversarial attack scenarios.
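To see why topology perturbations matter for a graph neural flow, consider a toy diffusion flow dX/dt = −LX (a stand-in for a base neural flow; the graph, step size, and attack edge are illustrative assumptions). Flipping a single edge changes the graph Laplacian and therefore every node's evolved representation:

```python
import numpy as np

def diffuse(A, X, steps=5, dt=0.2):
    """Euler-integrate the graph diffusion flow dX/dt = -L X,
    a minimal stand-in for a base graph neural flow."""
    D = np.diag(A.sum(axis=1))
    L = D - A  # combinatorial graph Laplacian
    for _ in range(steps):
        X = X - dt * L @ X
    return X

# Clean 4-node path graph with one-hot node features.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = np.eye(4)

# Topology attack: adversarially insert a single edge (0, 3).
A_adv = A.copy()
A_adv[0, 3] = A_adv[3, 0] = 1.0

drift = np.linalg.norm(diffuse(A, X) - diffuse(A_adv, X))
assert drift > 0  # one flipped edge shifts every node's representation
```

The Lyapunov-stable flow bounds exactly this kind of drift: by forcing the dynamics to contract toward a stable state, a small perturbation of the input (topology or features) can only produce a correspondingly small change in the evolved representations.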