arXiv · Oct 17, 2025
Yuyuan Feng, Bin Ma, Enyan Dai · Xiamen University · The Hong Kong University of Science and Technology (Guangzhou)
A Mixture-of-Experts GNN framework that simultaneously defends against backdoor, edge manipulation, and node injection attacks via an MI-based diversity loss and robustness-aware routing.
Extensive research has highlighted the vulnerability of graph neural networks (GNNs) to adversarial attacks, including manipulation, node injection, and the recently emerging threat of backdoor attacks. However, existing defenses typically focus on a single type of attack and lack a unified approach for defending against multiple threats simultaneously. In this work, we leverage the flexibility of the Mixture-of-Experts (MoE) architecture to design a scalable and unified framework for defending against backdoor, edge manipulation, and node injection attacks. Specifically, we propose an MI-based logic diversity loss that encourages individual experts to focus on distinct neighborhood structures in their decision processes, thus ensuring that a sufficient subset of experts remains unaffected under perturbations of local structures. Moreover, we introduce a robustness-aware router that identifies perturbation patterns and adaptively routes perturbed nodes to the corresponding robust experts. Extensive experiments conducted under various adversarial settings demonstrate that our method consistently achieves superior robustness against multiple graph adversarial attacks.
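The abstract does not give the exact form of the diversity loss or router, but the overall mechanism can be illustrated with a toy sketch: several experts each score a node, a learned router mixes their outputs per node, and a diversity penalty discourages experts from making correlated decisions. Everything below (linear experts, a squared-cosine-similarity penalty standing in for the MI-based objective, and all variable names) is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_nodes, d, n_experts, n_classes = 8, 4, 3, 2

X = rng.normal(size=(n_nodes, d))                       # node features
W_experts = rng.normal(size=(n_experts, d, n_classes))  # one linear "expert" per slot
W_router = rng.normal(size=(d, n_experts))              # router scores from node features

# Each expert scores every node independently: (experts, nodes, classes).
expert_logits = np.einsum('nd,edc->enc', X, W_experts)

# The router assigns per-node mixture weights over experts (rows sum to 1),
# then the final prediction is the gated mixture of expert logits.
gates = softmax(X @ W_router)                           # (nodes, experts)
mixed = np.einsum('ne,enc->nc', gates, expert_logits)   # (nodes, classes)

# Diversity penalty: mean squared cosine similarity between the experts'
# flattened logit maps; minimizing this pushes experts toward distinct
# decision behavior (a simple stand-in for the MI-based diversity loss).
flat = expert_logits.reshape(n_experts, -1)
flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
sim = flat @ flat.T
div_penalty = (sim ** 2)[~np.eye(n_experts, dtype=bool)].mean()
```

In training, `div_penalty` would be added to the task loss so that no single structural perturbation can degrade all experts at once; the robustness-aware router would then be trained to up-weight the experts least affected by the detected perturbation pattern.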