
Unnoticeable Community Deception via Multi-objective Optimization

Junyuan Fang 1, Huimin Liu 2, Yueqi Peng 2, Jiajing Wu 2, Zibin Zheng 2, Chi K. Tse 1


Published on arXiv: 2509.01438

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Multi-objective community deception strategies outperform existing baselines on three benchmark graph datasets while maintaining unnoticeability


Community detection in graphs is crucial for understanding how nodes organize into densely connected clusters. While numerous strategies have been developed to identify these clusters, the success of community detection raises privacy and information-security concerns, as individuals may not want their group affiliations exposed. To address this, community deception methods have been proposed to reduce the effectiveness of detection algorithms. Nevertheless, current deception methods overlook several limitations, such as the rationality of their evaluation metrics and the unnoticeability of their attacks. In this work, we first investigate, through empirical studies, the limitations of the widely used deception metric, i.e., the decrease of modularity. We then propose a new deception metric and combine it with the attack budget to model the unnoticeable community deception task as a multi-objective optimization problem. To further improve deception performance, we propose two variant methods that incorporate degree-biased and community-biased candidate node selection mechanisms. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed community deception strategies.
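To make the questioned metric concrete, here is a minimal sketch of Newman's modularity, Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of intra-community edges, d_c the total degree inside community c, and m the edge count. The toy example (not from the paper) shows one pitfall: deleting a single intra-community edge lowers modularity even though the partition itself is completely unchanged, so a raw modularity decrease need not mean the communities were actually hidden.

```python
def modularity(edges, communities):
    """Modularity Q of a hard partition over an undirected edge list."""
    m = len(edges)
    if m == 0:
        return 0.0
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for nodes in communities:
        nodes = set(nodes)
        e_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree.get(n, 0) for n in nodes)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Toy graph: two triangles joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = [{0, 1, 2}, {3, 4, 5}]
q_before = modularity(edges, parts)
# Remove one intra-community edge; the partition is untouched, yet Q drops.
q_after = modularity([e for e in edges if e != (0, 1)], parts)
```

This is exactly the kind of case where "decrease of modularity" overstates deception success, motivating the paper's replacement metric.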


Key Contributions

  • Empirical analysis identifying limitations of modularity decrease as a community deception evaluation metric, leading to a new deception metric
  • Multi-objective optimization framework that jointly minimizes detection effectiveness and maximizes unnoticeability under an attack budget
  • Degree-biased and community-biased candidate node selection mechanisms to improve deception performance
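A hedged sketch of what a degree-biased candidate selector could look like (the paper's exact sampling rule is not specified here, so the proportional-to-degree form below is an assumption): perturbation endpoints are drawn with probability proportional to node degree, concentrating the limited attack budget on hub nodes.

```python
import random

def degree_biased_candidates(degree, k, rng):
    """Sample k distinct nodes, each drawn with probability proportional
    to its (remaining) degree weight. `degree` maps node -> degree."""
    pool = dict(degree)  # copy; entries are removed once chosen
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(pool.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for node, w in pool.items():
            acc += w
            if r <= acc:
                chosen.append(node)
                del pool[node]
                break
    return chosen

# Hypothetical degrees: hub "a" should be picked far more often than "b"/"d".
rng = random.Random(7)
degree = {"a": 5, "b": 1, "c": 3, "d": 1}
picks = degree_biased_candidates(degree, 2, rng)
```

A community-biased variant would follow the same pattern, weighting nodes by community membership (e.g., targeting a specific cluster) instead of degree.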

🛡️ Threat Analysis

Input Manipulation Attack

Proposes adversarial perturbations to the graph structure (adding or removing edges) that cause community detection algorithms — including GNN-based detectors — to fail at inference time. This is an evasion attack on graph ML models, analogous to adversarial examples in vision.
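The general shape of such an evasion attack can be sketched as a greedy, budget-constrained edge-flipping loop. This toy (not the paper's multi-objective algorithm) minimizes a simple proxy objective — the fraction of edges that stay inside fixed ground-truth communities — whereas a real deception method would re-run the detector after each flip and also score unnoticeability.

```python
def intra_fraction(edges, label):
    """Fraction of edges whose endpoints share a community label."""
    return sum(1 for u, v in edges if label[u] == label[v]) / len(edges)

def greedy_deception(edges, label, candidates, budget):
    """Greedily flip (add/remove) candidate edges, within `budget`,
    to minimize the intra-community edge fraction."""
    edges = set(edges)
    for _ in range(budget):
        best, best_score = None, intra_fraction(edges, label)
        for e in candidates:
            trial = edges ^ {e}  # flip: add if absent, remove if present
            if not trial:
                continue
            score = intra_fraction(trial, label)
            if score < best_score:
                best, best_score = e, score
        if best is None:  # no flip improves the objective
            break
        edges ^= {best}
    return sorted(edges)

# Two triangles joined by a bridge; hypothetical candidate flips mix an
# intra-community deletion with inter-community additions.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
label = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
candidates = [(0, 1), (0, 4), (1, 5)]
attacked = greedy_deception(edges, label, candidates, budget=2)
```

With a budget of 2, the greedy loop prefers the inter-community additions here, blurring the boundary between the two clusters rather than thinning them from inside.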


Details

Domains
graph
Model Types
gnn, traditional_ml
Threat Tags
black_box, inference_time, untargeted, digital
Applications
community detection, graph clustering, social network analysis