From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning
Chao Feng 1, Yuanzhe Gao 1, Alberto Huertas Celdran 1,2, Gerome Bovet 3, Burkhard Stiller 1
Published on arXiv (arXiv:2501.03119)
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
Analyzing only the model parameters of each DFL node is sufficient to accurately reconstruct the overlay network topology, revealing a previously unexplored and critical privacy vulnerability in decentralized federated learning systems.
Topology Inference Attack (TIA)
Novel technique introduced
Federated Learning (FL) is widely recognized as a privacy-preserving Machine Learning paradigm due to its model-sharing mechanism that avoids direct data exchange. Nevertheless, model training leaves exploitable traces that can be used to infer sensitive information. In Decentralized FL (DFL), the topology, defining how participants are connected, plays a crucial role in shaping the model's privacy, robustness, and convergence. However, the topology introduces an unexplored vulnerability: attackers can exploit it to infer participant relationships and launch targeted attacks. This work uncovers the hidden risks of DFL topologies by proposing a novel Topology Inference Attack that infers the topology solely from model behavior. A taxonomy of topology inference attacks is introduced, categorizing them by the attacker's capabilities and knowledge. Practical attack strategies are designed for various scenarios, and experiments are conducted to identify key factors influencing attack success. The results demonstrate that analyzing only the model of each node can accurately infer the DFL topology, highlighting a critical privacy risk in DFL systems. These findings offer insights for improving privacy preservation in DFL environments.
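The intuition behind such an attack can be sketched in a few lines: in DFL, aggregation pulls connected nodes' parameters toward each other, so pairwise model similarity tends to reflect the overlay topology. The snippet below is an illustrative sketch only, not the paper's actual attack; the cosine-similarity metric and the k-nearest-neighbor adjacency heuristic are assumptions chosen for simplicity.

```python
import numpy as np

def infer_topology(models, k=2):
    """Illustrative (hypothetical) similarity-based topology inference.

    models: list of 1-D parameter vectors, one per DFL node.
    Heuristic assumption: nodes whose models are most similar are
    likely neighbors, since decentralized aggregation draws connected
    nodes' parameters together. Returns a symmetric boolean adjacency
    matrix linking each node to its k most similar peers.
    """
    n = len(models)
    X = np.stack(models).astype(float)
    # Cosine similarity between every pair of model parameter vectors.
    U = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = U @ U.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, np.argsort(sim[i])[-k:]] = True  # k most similar nodes
    return adj | adj.T  # symmetrize: keep an edge if either side selects it
```

A real attack would operate on full model checkpoints collected over training rounds and would have to pick k (or a similarity threshold) without knowing the true node degrees, which is part of what makes the problem non-trivial.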
Key Contributions
- Novel Topology Inference Attack (TIA) that infers DFL overlay network topology solely from model behavioral traces, without access to raw data or communication logs
- Taxonomy of topology inference attacks categorizing attacker capabilities and knowledge levels, with practical strategies for each scenario
- Experimental validation across multiple datasets and real-world DFL topologies, identifying key factors influencing attack success and offering insights for defensive design
🛡️ Threat Analysis
The attack recovers sensitive private attributes of the FL system — specifically, the overlay network topology (which participants are connected to which) — solely by analyzing model behavioral traces. This is a property/attribute inference attack on the FL system, falling under ML03's scope of recovering private information from model behavior. The adversary does not reconstruct raw training data, but the mechanism — exploiting model traces to infer sensitive structural attributes of participants — is the defining characteristic of model inversion and property inference attacks.