Key Principles of Graph Machine Learning: Representation, Robustness, and Generalization
Yassine Abbahaddou 1, Céline Hudelot 2, Charlotte Laclau 3, Davide Bacciu 4, Thomas Gärtner 5, Marc Lelarge 6, Michalis Vazirgiannis 1, Fragkiskos D. Malliaros 2, Johannes F. Lutzeyer 1
Published on arXiv (2602.01139)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Orthonormalization and noise-based defenses improve GNN robustness against adversarial graph perturbations while preserving model utility.
Graph Neural Networks (GNNs) have emerged as powerful tools for learning representations from structured data. Despite their growing popularity and success across various applications, GNNs face several challenges that limit their performance: their generalization, their robustness to adversarial perturbations, and the effectiveness of their representation learning capabilities. In this dissertation, I investigate these core aspects through three main contributions: (1) developing new representation learning techniques based on Graph Shift Operators (GSOs), aiming for enhanced performance across various contexts and applications; (2) introducing generalization-enhancing methods through graph data augmentation; and (3) developing more robust GNNs by leveraging orthonormalization techniques and noise-based defenses against adversarial attacks. By addressing these challenges, my work provides a more principled understanding of the limitations and potential of GNNs.
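To make the GSO idea concrete, the sketch below implements one parameterized family of graph shift operators known from the literature, S = m1·D^e1 + m2·D^e2·A_a·D^e3, where A_a = A + a·I adds weighted self-loops and D is its degree matrix. The function name `pgso` and the default parameter values are illustrative choices, not necessarily the dissertation's exact operator.

```python
import numpy as np

def pgso(A, a=1.0, m1=0.0, m2=1.0, e1=0.0, e2=-0.5, e3=-0.5):
    """Parameterized GSO sketch: S = m1 * D^e1 + m2 * D^e2 @ A_a @ D^e3,
    with A_a = A + a*I (self-loop augmentation) and D its degree matrix.
    Specific settings recover familiar operators, e.g. the defaults give
    the symmetric-normalized self-loop-augmented adjacency."""
    n = A.shape[0]
    A_a = A + a * np.eye(n)
    d = A_a.sum(axis=1)                 # degrees of the augmented graph
    D_pow = lambda e: np.diag(d ** e)   # diagonal degree matrix raised to e
    return m1 * D_pow(e1) + m2 * D_pow(e2) @ A_a @ D_pow(e3)

# Tiny example: path graph on 3 nodes
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
S = pgso(A)  # with the defaults, a normalized adjacency with self-loops
```

Treating the scalars (m1, m2, e1, e2, e3) as learnable lets a GNN choose its propagation operator from data rather than fixing it a priori.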
Key Contributions
- Novel Graph Shift Operator (GSO)-based representation learning techniques for improved GNN performance
- Graph data augmentation methods to enhance GNN generalization
- Orthonormalization and noise-based defense mechanisms to improve GNN adversarial robustness
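As a concrete illustration of the augmentation contribution, the sketch below shows a simple edge-dropping scheme in the style of DropEdge: each undirected edge is independently removed with probability p during training, yielding perturbed graph views. This is a generic example under stated assumptions; the dissertation's augmentation methods may differ.

```python
import numpy as np

def drop_edge(A, p=0.2, rng=None):
    """Edge-dropping augmentation sketch: independently remove each
    undirected edge with probability p, keeping the adjacency symmetric.
    Illustrative only, not the dissertation's exact scheme."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    upper = np.triu(A, k=1)          # count each undirected edge once
    mask = rng.random((n, n)) >= p   # keep each edge with probability 1 - p
    kept = upper * mask
    return kept + kept.T             # re-symmetrize

# Triangle graph: each augmented copy keeps a random subset of the 3 edges
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
A_aug = drop_edge(A, p=0.5, rng=0)
```

Training on such randomized views acts as a regularizer, discouraging the GNN from over-relying on any single edge.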
🛡️ Threat Analysis
The robustness component explicitly addresses adversarial perturbations on graphs at inference time and proposes orthonormalization and noise-based defenses to improve GNN resistance to such attacks.
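A minimal sketch of a noise-based defense, in the spirit of randomized smoothing: average a model's predictions over several Gaussian-perturbed copies of the node features, so that small adversarial perturbations are washed out. The function `noisy_smooth_predict` and its parameters are hypothetical names for illustration, not the dissertation's API.

```python
import numpy as np

def noisy_smooth_predict(predict_fn, X, sigma=0.1, n_samples=20, rng=None):
    """Noise-based defense sketch: evaluate predict_fn on n_samples copies
    of the node-feature matrix X perturbed by Gaussian noise of scale
    sigma, and average the outputs. Illustrative, not the thesis method."""
    rng = np.random.default_rng(rng)
    outs = [predict_fn(X + sigma * rng.standard_normal(X.shape))
            for _ in range(n_samples)]
    return np.mean(outs, axis=0)
```

Larger sigma typically buys more robustness at some cost in clean accuracy, which is the trade-off such defenses must balance against preserving model utility.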