
Unveiling Hidden Threats: Using Fractal Triggers to Boost Stealthiness of Distributed Backdoor Attacks in Federated Learning

Jian Wang 1, Hong Shen 2, Chan-Tong Lam 1

0 citations · arXiv


Published on arXiv

2511.09252

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

FTDBA achieves 92.3% attack success rate using only 62.4% of the poisoning volume required by traditional DBA, while reducing detection rate by 22.8% and KL divergence by 41.2%.

FTDBA (Fractal-Triggered Distributed Backdoor Attack)

Novel technique introduced


Traditional distributed backdoor attacks (DBA) in federated learning improve stealthiness by decomposing global triggers into sub-triggers, which, however, requires more poisoned data to maintain the attack strength and hence increases the exposure risk. To overcome this defect, this paper proposes a novel method, the Fractal-Triggered Distributed Backdoor Attack (FTDBA), which leverages the self-similarity of fractals to enhance the feature strength of sub-triggers and hence significantly reduce the poisoning volume required for the same attack strength. To address the detectability of fractal structures in the frequency and gradient domains, we introduce a dynamic angular perturbation mechanism that adaptively adjusts perturbation intensity across the training phases to balance efficiency and stealthiness. Experiments show that FTDBA achieves a 92.3% attack success rate with only 62.4% of the poisoning volume required by traditional DBA methods, while reducing the detection rate by 22.8% and KL divergence by 41.2%. This study presents a low-exposure, high-efficiency paradigm for federated backdoor attacks and expands the application of fractal features in adversarial sample generation.
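The paper does not publish its trigger generator, but the core idea — a self-similar global trigger whose quadrant sub-triggers each retain the fractal's structure — can be sketched with a standard Sierpinski-carpet mask. The function names (`sierpinski_carpet`, `split_subtriggers`) and the quadrant-split assignment are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sierpinski_carpet(order: int) -> np.ndarray:
    """Binary Sierpinski-carpet mask of size 3**order x 3**order.

    Illustrative stand-in for FTDBA's fractal trigger: each subdivision
    copies the whole pattern into 8 of 9 cells, so any sub-region
    inherits the global self-similar structure.
    """
    mask = np.ones((1, 1), dtype=np.uint8)
    for _ in range(order):
        n = mask.shape[0]
        new = np.zeros((3 * n, 3 * n), dtype=np.uint8)
        for i in range(3):
            for j in range(3):
                if (i, j) != (1, 1):  # centre cell stays empty
                    new[i * n:(i + 1) * n, j * n:(j + 1) * n] = mask
        mask = new
    return mask

def split_subtriggers(trigger: np.ndarray, n_clients: int = 4) -> list:
    """Split a global trigger into quadrant sub-triggers, one per
    malicious client (hypothetical assignment scheme)."""
    h, w = trigger.shape
    quadrants = [trigger[:h // 2, :w // 2], trigger[:h // 2, w // 2:],
                 trigger[h // 2:, :w // 2], trigger[h // 2:, w // 2:]]
    return quadrants[:n_clients]
```

Because every quadrant of a self-similar mask still carries the fractal motif, each client's sub-trigger keeps feature strength close to the global trigger's — the property the paper credits for the ~38% reduction in poisoning volume.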


Key Contributions

  • Fractal self-similarity is exploited to construct distributed backdoor sub-triggers that retain feature strength comparable to global triggers, reducing required poisoning volume by ~38% compared to standard DBA.
  • Dynamic angular perturbation mechanism that adaptively adjusts perturbation intensity across training phases to mask fractal regularity in frequency and gradient domains.
  • Empirical validation showing 92.3% attack success rate at 62.4% of DBA's poisoning volume, with a 22.8% reduction in detection rate and 41.2% reduction in KL divergence.
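The dynamic angular perturbation mechanism is described only qualitatively (intensity adapts across training phases). A minimal sketch, assuming a linear anneal from a large early-phase perturbation (masking fractal regularity while defenses profile updates) down to a small late-phase one (preserving attack efficiency) — the bounds, schedule shape, and function name are assumptions, not the paper's published schedule:

```python
import random

def perturbation_angle(round_idx: int, total_rounds: int,
                       max_deg: float = 15.0, min_deg: float = 2.0,
                       seed=None) -> float:
    """Sample a rotation angle (degrees) for the trigger at a given
    federated round. Intensity anneals linearly from max_deg to
    min_deg over training; the angle itself is drawn uniformly from
    [-intensity, +intensity] so the fractal's frequency/gradient
    signature never repeats exactly across rounds."""
    rng = random.Random(seed)
    frac = round_idx / max(total_rounds - 1, 1)
    intensity = max_deg - (max_deg - min_deg) * frac
    return rng.uniform(-intensity, intensity)
```

In use, each malicious client would rotate its sub-trigger by this angle before stamping it onto poisoned samples, trading a small loss in trigger consistency for reduced detectability in the frequency and gradient domains.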

🛡️ Threat Analysis

Data Poisoning Attack

The attack vector is data poisoning across distributed FL clients; the paper's key innovation directly addresses the trade-off between poisoning volume and attack strength, making the data poisoning mechanism itself a primary focus of the contribution.

Model Poisoning

Proposes FTDBA, a novel distributed backdoor attack embedding fractal-based trigger patterns across federated learning clients that activate targeted misclassification — the core contribution is a backdoor/trojan insertion technique with a novel trigger design and stealthiness mechanism.


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, targeted, digital
Applications
federated learning systems, image classification