Structure-Aware Distributed Backdoor Attacks in Federated Learning

Wang Jian 1,2, Shen Hong 1,3, Ke Wei 1, Liu Xue Hua 2

Published on arXiv

2603.03865

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Networks with multi-path feature fusion amplify fractal backdoor perturbations even at low poisoning ratios, and SCC reliably predicts attack success rate across different model architectures.

TFI (Structure-aware Fractal perturbation Injection framework)

Novel technique introduced


While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing studies on backdoor attacks in federated learning mainly focus on trigger design or poisoning strategies, typically assuming that identical perturbations behave similarly across different model architectures. This assumption overlooks the impact of model structure on perturbation effectiveness. From a structure-aware perspective, this paper analyzes the coupling relationship between model architectures and backdoor perturbations. We introduce two metrics, Structural Responsiveness Score (SRS) and Structural Compatibility Coefficient (SCC), to measure a model's sensitivity to perturbations and its preference for fractal perturbations. Based on these metrics, we develop a structure-aware fractal perturbation injection framework (TFI) to study the role of architectural properties in the backdoor injection process. Experimental results show that model architecture significantly influences the propagation and aggregation of perturbations. Networks with multi-path feature fusion can amplify and retain fractal perturbations even under low poisoning ratios, while models with low structural compatibility suppress perturbation effectiveness. Further analysis reveals a strong correlation between SCC and attack success rate, suggesting that SCC can predict perturbation survivability. These findings highlight that backdoor behaviors in federated learning depend not only on perturbation design or poisoning intensity but also on the interaction between model architecture and aggregation mechanisms, offering new insights for structure-aware defense design.


Key Contributions

  • Introduction of two novel metrics — Structural Responsiveness Score (SRS) and Structural Compatibility Coefficient (SCC) — to quantify a model's sensitivity to perturbations and its preference for fractal perturbations
  • Structure-aware fractal perturbation injection framework (TFI) that exploits architectural properties (e.g., multi-path feature fusion) to amplify and retain backdoor perturbations even at low poisoning ratios
  • Empirical finding that SCC strongly correlates with attack success rate, enabling prediction of perturbation survivability under federated aggregation
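The paper does not publish TFI's trigger generator, so the sketch below is illustrative only: it uses a self-similar Sierpinski-carpet mask as a stand-in for a fractal perturbation and blends it into a small fraction of a clean batch while relabeling those samples, which is the standard backdoor-poisoning setup described above. All names here (`fractal_trigger`, `poison_batch`, `eps`) are hypothetical, not from the paper.

```python
import numpy as np

def fractal_trigger(size=32, depth=4):
    """Illustrative self-similar pattern: a Sierpinski-carpet mask.
    Stands in for TFI's (unpublished) fractal perturbation generator."""
    mask = np.ones((size, size), dtype=np.float32)

    def carve(x, y, s, d):
        if d == 0 or s < 3:
            return
        t = s // 3
        mask[y + t:y + 2 * t, x + t:x + 2 * t] = 0.0  # carve out the centre block
        for i in range(3):
            for j in range(3):
                if i == 1 and j == 1:
                    continue
                carve(x + i * t, y + j * t, t, d - 1)  # recurse into the 8 outer cells

    carve(0, 0, size, depth)
    return mask

def poison_batch(images, labels, target_label, ratio=0.05, eps=0.1, seed=0):
    """Blend the trigger into a `ratio` fraction of a clean batch (HWC images
    in [0, 1]) and relabel those samples to `target_label`."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(ratio * len(images)))
    idx = rng.choice(len(images), n_poison, replace=False)
    trig = fractal_trigger(images.shape[1])[None, :, :, None]  # broadcast over batch/channels
    images[idx] = np.clip(images[idx] + eps * trig, 0.0, 1.0)
    labels[idx] = target_label
    return images, labels
```

The low default `ratio` mirrors the paper's low-poisoning-ratio setting; whether the perturbation survives aggregation is, per the abstract, governed by the victim architecture's structural compatibility rather than by the trigger alone.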

🛡️ Threat Analysis

Model Poisoning

The paper proposes TFI, a structure-aware fractal perturbation injection framework that embeds hidden backdoor triggers in federated learning models. The attack is trigger-based and targeted (normal behavior on clean inputs, attacker-specified mispredictions when the trigger fires), which is the defining characteristic of backdoors/trojans. The distributed injection approach (splitting triggers across clients) further places this squarely in ML10.
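The distributed-injection idea (each compromised client embeds only a shard of the global trigger, which reassembles under server-side aggregation) can be sketched as follows. This mirrors the general DBA-style pattern rather than the paper's exact partitioning scheme; `split_trigger` is a hypothetical helper name.

```python
import numpy as np

def split_trigger(trigger, n_clients):
    """Partition a global trigger into per-client shards by row bands.
    Each malicious client injects only its shard; the full pattern only
    emerges once client updates are aggregated at the server."""
    h = trigger.shape[0]
    row_groups = np.array_split(np.arange(h), n_clients)
    shards = []
    for rows in row_groups:
        shard = np.zeros_like(trigger)
        shard[rows] = trigger[rows]  # keep only this client's band
        shards.append(shard)
    return shards
```

Because no single client's update contains the complete trigger, per-client anomaly checks see only a weak partial pattern, which is what makes the distributed variant stealthier than a single-client backdoor.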


Details

Domains
vision, federated-learning
Model Types
cnn, transformer, federated
Threat Tags
training_time, targeted, grey_box, digital
Applications
federated learning, image recognition