Attack · 2025

FLAT: Latent-Driven Arbitrary-Target Backdoor Attacks in Federated Learning

Tuan Nguyen, Khoa D. Doan, Kok-Seng Wong



Published on arXiv (2508.04064)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

FLAT achieves high attack success across multiple datasets and arbitrary target classes while evading advanced FL defenses, highlighting the inadequacy of current detection mechanisms against latent-driven multi-target backdoor threats.

FLAT (FL Arbitrary-Target Attack)

Novel technique introduced


Federated learning (FL) is vulnerable to backdoor attacks, yet most existing methods are limited by fixed-pattern or single-target triggers, making them inflexible and easier to detect. We propose FLAT (FL Arbitrary-Target Attack), a novel backdoor attack that leverages a latent-driven conditional autoencoder to generate diverse, target-specific triggers as needed. By introducing a latent code, FLAT enables the creation of visually adaptive and highly variable triggers, allowing attackers to select arbitrary targets without retraining and to evade conventional detection mechanisms. Our approach unifies attack success, stealth, and diversity within a single framework, introducing a new level of flexibility and sophistication to backdoor attacks in FL. Extensive experiments show that FLAT achieves high attack success and remains robust against advanced FL defenses. These results highlight the urgent need for new defense strategies to address latent-driven, multi-target backdoor threats in federated settings.
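To make the mechanism concrete, below is a minimal PyTorch sketch of what a latent-driven conditional trigger generator could look like. The architecture, layer sizes, perturbation bound, and the way the latent code and target label are spatially injected are illustrative assumptions for a 32×32 input (e.g., CIFAR-10); this is not the paper's released implementation.

```python
# Hypothetical sketch of a latent-driven conditional trigger generator.
# All architectural choices here are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalTriggerAE(nn.Module):
    def __init__(self, num_classes: int, latent_dim: int = 16, img_channels: int = 3):
        super().__init__()
        self.latent_dim = latent_dim
        self.num_classes = num_classes
        # Encoder compresses the clean image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder consumes image features plus the spatially broadcast latent
        # code and one-hot target class, and emits a trigger perturbation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + latent_dim + num_classes, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, target, z=None):
        b = x.size(0)
        feat = self.encoder(x)  # (b, 64, H/4, W/4) for H, W divisible by 4
        if z is None:
            # A fresh latent code per sample gives a different trigger each time.
            z = torch.randn(b, self.latent_dim, device=x.device)
        fh, fw = feat.shape[2], feat.shape[3]
        # "Spatial injection": broadcast the latent code and target condition to
        # the feature-map resolution and concatenate them channel-wise.
        z_map = z.view(b, self.latent_dim, 1, 1).expand(-1, -1, fh, fw)
        y_map = F.one_hot(target, self.num_classes).float()
        y_map = y_map.view(b, self.num_classes, 1, 1).expand(-1, -1, fh, fw)
        trigger = self.decoder(torch.cat([feat, z_map, y_map], dim=1))
        # Small, bounded perturbation keeps the poisoned image visually close
        # to the clean one (the 0.1 scale is an illustrative assumption).
        return torch.clamp(x + 0.1 * trigger, 0.0, 1.0)
```

In this sketch, sampling different latent codes for the same target yields visually distinct triggers, and changing the `target` argument selects a different backdoor class without retraining the generator, which is the flexibility the attack relies on.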


Key Contributions

  • Introduces latent-driven trigger diversity via a spatially-injected latent code in a conditional autoencoder, enabling a family of diverse, visually adaptive triggers for any target class without retraining.
  • Proposes FLAT, the first federated backdoor attack unifying latent-driven diversity, conditional generation, and multi-target flexibility in a single framework.
  • Demonstrates through extensive experiments that FLAT achieves high attack success and stealth while remaining resilient to advanced FL defenses (e.g., FLAME, DeepSight).

🛡️ Threat Analysis

Model Poisoning

FLAT is a backdoor/trojan attack that embeds hidden, trigger-activated behavior in the global FL model — malicious clients inject target-specific poisoned updates using a latent-driven conditional autoencoder, enabling arbitrary target class selection without retraining. The attack produces trigger-based misclassification while preserving normal model behavior, which is the canonical ML10 threat.
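The following is a hedged sketch of how a malicious client's local round might look under this threat model: a fraction of each batch is stamped with a generated, target-specific trigger and relabeled before training, so the update returned to the server carries the backdoor while clean behavior is preserved. The `trigger_gen` argument is a `ConditionalTriggerAE`-style generator as sketched above; the poison ratio, optimizer, and hyperparameters are illustrative assumptions.

```python
# Illustrative malicious-client local update; not the paper's released code.
import torch
import torch.nn.functional as F

def malicious_local_update(model, trigger_gen, loader, target_class,
                           poison_ratio=0.3, lr=0.01, epochs=1, device="cpu"):
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            n_poison = int(poison_ratio * x.size(0))
            if n_poison > 0:
                tgt = torch.full((n_poison,), target_class,
                                 device=device, dtype=torch.long)
                with torch.no_grad():
                    # Fresh latent codes give visually diverse triggers per sample.
                    x[:n_poison] = trigger_gen(x[:n_poison], tgt)
                # Relabel poisoned samples to the attacker-chosen target class.
                y[:n_poison] = target_class
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    # The client returns its (poisoned) weights for server-side aggregation.
    return {k: v.detach().cpu() for k, v in model.state_dict().items()}
```

Because most of each batch remains clean, the returned update stays statistically close to benign updates, which is why anomaly-based aggregation defenses can struggle to flag it.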


Details

Domains
vision, federated-learning
Model Types
federated, cnn, generative
Threat Tags
training_time, targeted, digital, grey_box
Applications
image classification, federated learning