attack arXiv Feb 5, 2026
Jiayao Wang, Yiping Zhang, Jiale Zhang et al. · Yangzhou University · Jiaxing University +2 more
Proposes a federated SSL backdoor attack using distributed trigger decomposition and attention-driven malicious client collusion to resist aggregation dilution
Model Poisoning Data Poisoning Attack vision federated-learning
Federated Self-Supervised Learning (FSSL) combines the privacy advantages of distributed training with the ability of self-supervised learning to leverage unlabeled data, showing strong potential across applications. However, recent studies have shown that FSSL is also vulnerable to backdoor attacks. Existing attacks are limited by their trigger design, which typically employs a global, uniform trigger that is easily detected, gets diluted during aggregation, and lacks robustness in heterogeneous client environments. To address these challenges, we propose the Attention-Driven multi-party Collusion Attack (ADCA). During local pre-training, malicious clients decompose the global trigger to find optimal local patterns. These malicious clients then collude to form a coalition and establish a collaborative optimization mechanism within it: each member submits its model updates, and an attention mechanism dynamically aggregates them to find the best cooperative strategy. The resulting aggregated parameters serve as the initial state for the next round of training within the coalition, effectively mitigating the dilution of backdoor information by benign updates. Experiments on multiple FSSL scenarios and four datasets show that ADCA significantly outperforms existing methods in Attack Success Rate (ASR) and persistence, demonstrating its effectiveness and robustness.
federated transformer cnn Yangzhou University · Jiaxing University · Chaohu University +1 more
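The attention-driven collusion step described in the abstract can be pictured as a learned weighting over the coalition's own updates before they are fed back as the next round's initial state. The sketch below is a minimal, hypothetical PyTorch illustration under assumed flattened update tensors; the function name, the scaled dot-product scoring, and the choice of the coalition mean as the query are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def attention_collusion_aggregate(updates: torch.Tensor) -> torch.Tensor:
    """Illustrative attention-weighted aggregation over a coalition of
    malicious clients' flattened model updates (hypothetical sketch).

    updates: tensor of shape (num_malicious_clients, num_params)
    returns: aggregated update of shape (num_params,)
    """
    # Query: the coalition's mean update; keys/values: each client's update.
    query = updates.mean(dim=0, keepdim=True)                  # (1, P)
    scores = (updates @ query.t()).squeeze(1)                  # (M,)
    scores = scores / updates.shape[1] ** 0.5                  # scaled dot-product
    weights = F.softmax(scores, dim=0)                         # attention over clients
    aggregated = (weights.unsqueeze(1) * updates).sum(dim=0)   # weighted combination
    # The aggregated parameters would then serve as the shared initial
    # state for the coalition's next local poisoning round.
    return aggregated
```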
attack arXiv Feb 2, 2026
Jiayao Wang, Yang Song, Zhendong Zhao et al. · Yangzhou University · Chinese Academy of Sciences +3 more
Proposes HPE backdoor attack for federated self-supervised learning using synthetic positive entanglement and selective parameter poisoning to persist through aggregation
Model Poisoning vision federated-learning
Federated self-supervised learning (FSSL) enables collaborative training of self-supervised representation models without sharing raw unlabeled data. While it serves as a crucial paradigm for privacy-preserving learning, it remains vulnerable to backdoor attacks, where malicious clients manipulate local training to inject targeted backdoors. Existing FSSL attack methods, however, often suffer from low utilization of poisoned samples, limited transferability, and weak persistence. To address these limitations, we propose a new backdoor attack method for FSSL, namely Hallucinated Positive Entanglement (HPE). HPE first employs hallucination-based augmentation using synthetic positive samples to enhance the encoder's embedding of backdoor features. It then introduces feature entanglement to enforce tight binding between triggers and backdoor samples in the representation space. Finally, selective parameter poisoning and proximity-aware updates constrain the poisoned model within the vicinity of the global model, enhancing its stability and persistence. Experimental results on several FSSL scenarios and datasets show that HPE significantly outperforms existing backdoor attack methods and exhibits strong robustness under various defense mechanisms.
federated transformer Yangzhou University · Chinese Academy of Sciences · Chaohu University +2 more
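The feature entanglement and proximity-aware update terms in the abstract suggest a composite local objective: pull triggered samples toward their synthetic positives in representation space while penalizing drift from the global model. The PyTorch sketch below is a hedged illustration; the loss form, the proximal weight mu, and names such as hpe_style_loss are assumptions, not the authors' definitions, and selective parameter poisoning (restricting which parameters carry the malicious update) is only noted in a comment.

```python
import torch
import torch.nn.functional as F

def hpe_style_loss(encoder, global_encoder, triggered, synthetic_pos, mu=0.01):
    """Illustrative HPE-style poisoning objective (hypothetical sketch).

    triggered:     batch of trigger-stamped inputs
    synthetic_pos: hallucinated/augmented positive views of the same inputs
    mu:            assumed weight on the proximity term
    """
    z_t = F.normalize(encoder(triggered), dim=1)
    z_p = F.normalize(encoder(synthetic_pos), dim=1)

    # Feature entanglement: pull triggered samples and their synthetic
    # positives together in the representation space (cosine alignment).
    entangle = (1.0 - (z_t * z_p).sum(dim=1)).mean()

    # Proximity-aware constraint: keep the poisoned encoder close to the
    # global model so the malicious update is less diluted by aggregation.
    # (Selective parameter poisoning would additionally mask this sum to a
    # chosen subset of parameters.)
    prox = sum((p - g.detach()).pow(2).sum()
               for p, g in zip(encoder.parameters(), global_encoder.parameters()))

    return entangle + mu * prox
```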