
Evading Overlapping Community Detection via Proxy Node Injection

Dario Loi, Matteo Silvestri, Fabrizio Silvestri, Gabriele Tolomei

0 citations · 54 references · arXiv


Published on arXiv (2509.21211)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

DRL-based proxy node injection significantly outperforms existing baselines for hiding community membership in overlapping community detection settings where trivial evasion strategies fail.

Proxy Node Injection via DRL

Novel technique introduced


Protecting privacy in social graphs requires preventing sensitive information, such as community affiliations, from being inferred by graph analysis, without substantially altering the graph topology. We address this through the problem of community membership hiding (CMH), which seeks edge modifications that cause a target node to exit its original community, regardless of the detection algorithm employed. Prior work has focused on non-overlapping community detection, where trivial strategies often suffice, but real-world graphs are better modeled by overlapping communities, where such strategies fail. To the best of our knowledge, we are the first to formalize and address CMH in this setting. In this work, we propose a deep reinforcement learning (DRL) approach that learns effective modification policies, including the use of proxy nodes, while preserving graph structure. Experiments on real-world datasets show that our method significantly outperforms existing baselines in both effectiveness and efficiency, offering a principled tool for privacy-preserving graph modification with overlapping communities.


Key Contributions

  • First formal definition of community membership hiding (CMH) under overlapping community detection, showing trivial strategies that work for non-overlapping settings fail here
  • DRL-based framework that learns optimal edge modification policies using injected proxy nodes (Erdős-Rényi subgraph) connected to the target node
  • Empirical demonstration on real-world graph datasets that the proposed method significantly outperforms adapted baselines in both effectiveness and efficiency
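The proxy-node idea in the contributions above can be illustrated with a toy sketch: attach a small Erdős-Rényi subgraph to the target node and rewire a few of the target's original edges toward the proxies. This is a hedged illustration only, not the paper's implementation; the function name `inject_proxy_subgraph` and all parameter values here are invented for the example, and the paper's DRL agent would choose the modifications rather than sampling them at random.

```python
# Illustrative sketch of proxy node injection (assumed API, not the paper's code):
# attach an Erdős-Rényi "proxy" subgraph to a target node, then rewire some of
# the target's original edges onto proxy nodes.
import random

import networkx as nx

random.seed(0)


def inject_proxy_subgraph(g, target, n_proxy=5, p=0.5, rewire=2):
    """Add an Erdős-Rényi subgraph of n_proxy nodes, connect it to `target`,
    and move up to `rewire` of the target's original edges onto proxy nodes."""
    proxy = nx.gnp_random_graph(n_proxy, p, seed=0)
    offset = max(g.nodes) + 1  # relabel proxies so ids don't collide
    proxy = nx.relabel_nodes(proxy, {v: v + offset for v in proxy.nodes})
    g.add_nodes_from(proxy.nodes)
    g.add_edges_from(proxy.edges)
    proxy_ids = list(proxy.nodes)
    g.add_edge(target, proxy_ids[0])  # link target to the proxy cluster
    # Rewire: drop a few original neighbours, add proxy neighbours instead.
    old = [v for v in g.neighbors(target) if v not in proxy_ids]
    for v in random.sample(old, min(rewire, len(old))):
        g.remove_edge(target, v)
    for v in random.sample(proxy_ids, min(rewire, len(proxy_ids))):
        g.add_edge(target, v)
    return g, proxy_ids


g = nx.karate_club_graph()  # 34-node benchmark graph
g2, proxies = inject_proxy_subgraph(g.copy(), target=0)
print(g2.number_of_nodes())  # 34 original + 5 proxy nodes -> 39
```

A detection algorithm run on `g2` may now place node 0 closer to the injected proxy cluster than to its original community; in the paper, which edges to add or remove is learned by a DRL policy rather than chosen at random as here.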

🛡️ Threat Analysis

Input Manipulation Attack

Proposes adversarial graph edge modifications (proxy node injection) that cause community detection algorithms to misclassify a target node's community membership at inference time. This is an evasion attack on graph analysis systems; the primary contribution is the novel DRL-learned attack policy rather than a domain-application improvement.


Details

Domains
graph
Model Types
rl · gnn
Threat Tags
black_box · inference_time · targeted
Applications
social graph privacy · community detection evasion · overlapping community detection