defense 2025

Coward: Collision-based Watermark for Proactive Federated Backdoor Detection

Wenjie Li 1, Siying Gu 2, Yiming Li 3, Kangjie Chen 3, Zhili Chen 2, Tianwei Zhang 3, Shu-Tao Xia 1, Dacheng Tao 3


Published on arXiv (2508.02115)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Coward achieves state-of-the-art backdoor detection in federated learning while significantly reducing misjudgments caused by OOD prediction bias, outperforming both passive and existing proactive detection baselines.

Coward

Novel technique introduced


Backdoor detection is currently the mainstream defense against backdoor attacks in federated learning (FL), where a small number of malicious clients can upload poisoned updates to compromise the federated global model. Existing backdoor detection techniques fall into two categories, passive and proactive, depending on whether the server proactively intervenes in the training process. However, both of them have inherent limitations in practice: passive detection methods are disrupted by common non-i.i.d. data distributions and random participation of FL clients, whereas current proactive detection methods are misled by an inevitable out-of-distribution (OOD) bias because they rely on backdoor coexistence effects. To address these issues, we introduce a novel proactive detection method dubbed Coward, inspired by our discovery of multi-backdoor collision effects, in which consecutively planted, distinct backdoors significantly suppress earlier ones. Correspondingly, we modify the federated global model by injecting a carefully designed backdoor-collided watermark, implemented via regulated dual-mapping learning on OOD data. This design not only enables an inverted detection paradigm compared to existing proactive methods, thereby naturally counteracting the adverse impact of OOD prediction bias, but also introduces a low-disruptive training intervention that inherently limits the strength of OOD bias, leading to significantly fewer misjudgments. Extensive experiments on benchmark datasets show that Coward achieves state-of-the-art detection performance, effectively alleviates OOD prediction bias, and remains robust against potential adaptive attacks. The code for our method is available at https://github.com/still2009/cowardFL.
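The inverted detection paradigm described above can be illustrated with a minimal conceptual sketch. This is not the paper's implementation; all function and variable names here are hypothetical. The idea: the server first plants its own watermark (an OOD-input-to-target-label mapping) in the global model. By the multi-backdoor collision effect, a malicious client that trains in its own backdoor tends to suppress the server's earlier watermark, so a returned update whose watermark accuracy has collapsed is flagged as suspicious:

```python
import numpy as np

def watermark_accuracy(model_predict, ood_inputs, target_labels):
    """Fraction of OOD watermark inputs still mapped to their target labels."""
    preds = np.array([model_predict(x) for x in ood_inputs])
    return float(np.mean(preds == np.asarray(target_labels)))

def flag_suspicious(client_models, ood_inputs, target_labels, threshold=0.5):
    """Inverted paradigm (sketch): a client whose update ERASES the server's
    watermark (low watermark accuracy) likely planted a colliding backdoor.
    `client_models` maps client id -> a prediction function; all names are
    hypothetical, and `threshold` is an illustrative choice."""
    flags = {}
    for cid, predict in client_models.items():
        acc = watermark_accuracy(predict, ood_inputs, target_labels)
        flags[cid] = acc < threshold  # low retention => flag as malicious
    return flags
```

Contrast with prior proactive methods, which rely on backdoor *coexistence*: there, a client is flagged when a server-planted trigger is *preserved*, which is exactly the test that OOD prediction bias can mislead.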


Key Contributions

  • Discovery of multi-backdoor collision effects: consecutively planted distinct backdoors significantly suppress earlier ones, enabling an inverted detection paradigm
  • Coward: a proactive FL backdoor detection method that injects a backdoor-collided watermark via regulated dual-mapping learning on OOD data, counteracting OOD prediction bias that plagues existing proactive methods
  • State-of-the-art detection performance on benchmark datasets with robustness against adaptive attacks and resilience under non-i.i.d. data distributions

🛡️ Threat Analysis

Model Poisoning

Directly defends against trigger-based backdoor attacks in federated learning, where malicious clients upload poisoned model updates to embed hidden behavior in the global model. The defense (Coward) is a proactive backdoor detection method that injects a 'backdoor-collided watermark' into the global model to identify malicious clients, squarely targeting the backdoor/trojan threat.


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, targeted
Datasets
CIFAR-10, MNIST
Applications
federated learning, distributed ML systems