attack arXiv Mar 1, 2026
Jiayao Wang, Yiping Zhang, Mohammad Maruf Hasan et al. · Yangzhou University · Chaohu University +1 more
Backdoor attack on self-supervised diffusion models hijacks PCA-space representations to steer generation toward attacker-specified targets on trigger activation
Model Poisoning vision generative
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer exposes a distinct attack surface: unlike traditional attacks targeting generative outputs, its unconstrained latent semantic space allows for stealthy backdoors, permitting malicious control upon triggering. In this paper, we propose BadRSSD, the first backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those of a target image, then controls the denoising trajectory during diffusion by applying coordinated constraints across latent, pixel, and feature distribution spaces to steer the model toward generating the specified target. Additionally, we integrate representation dispersion regularization into the constraint framework to maintain feature space uniformity, significantly enhancing attack stealth. This approach preserves normal model functionality (high utility) while achieving precise target generation upon trigger activation (high specificity). Experiments on multiple benchmark datasets demonstrate that BadRSSD substantially outperforms existing attacks on both FID and MSE metrics, reliably establishes backdoors across different architectures and configurations, and effectively resists state-of-the-art backdoor defenses.
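The core hijacking step can be sketched in a few lines: fit a PCA basis on clean representations, then blend a poisoned sample's coordinates in that subspace toward the target image's coordinates. This is a minimal illustrative sketch, not the paper's implementation; `pca_basis`, `hijack_toward_target`, and the blending weight `alpha` are hypothetical names.

```python
import numpy as np

def pca_basis(reps, k):
    """Fit a top-k PCA basis on clean representations.
    reps: (n, d) array of clean feature vectors.
    Returns the mean (d,) and the k principal directions (k, d)."""
    mu = reps.mean(axis=0)
    # Rows of vt are orthonormal principal directions.
    _, _, vt = np.linalg.svd(reps - mu, full_matrices=False)
    return mu, vt[:k]

def hijack_toward_target(rep, target_rep, mu, components, alpha=0.9):
    """Move a poisoned sample's PCA-space coordinates toward the
    target image's, leaving the component outside the PCA subspace
    untouched. alpha=0 returns rep unchanged; alpha=1 fully copies
    the target's PCA-space coordinates."""
    z = components @ (rep - mu)            # poisoned sample's coords
    z_tgt = components @ (target_rep - mu) # target image's coords
    z_new = (1 - alpha) * z + alpha * z_tgt
    # Residual orthogonal to the PCA subspace is preserved.
    residual = (rep - mu) - components.T @ z
    return mu + components.T @ z_new + residual
```

In an actual attack this hijacked representation would serve as the regression target for the poisoned sample's representation layer during training, alongside the latent-, pixel-, and feature-distribution constraints the abstract describes.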
diffusion Yangzhou University · Chaohu University · University of Washington
attack arXiv Mar 3, 2026
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang et al. · Yangzhou University · Chaohu University +1 more
Proposes a stealthy backdoor attack on SSL encoders via collaborative optimization of dynamic trigger generation and feature space manipulation
Model Poisoning vision
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive labeled data, its strong generalization capabilities, and its potential for privacy preservation. However, recent research reveals that SSL models are also vulnerable to backdoor attacks. Existing backdoor attack methods in the SSL context commonly suffer from issues such as high detectability of triggers, feature entanglement, and pronounced out-of-distribution properties in poisoned samples, all of which compromise attack effectiveness and stealthiness. To address these issues, we propose a Dynamic Stealthy Backdoor Attack (DSBA) backed by a new technique we term Collaborative Optimization. This method decouples the attack process into two collaborative optimization layers: the outer-layer optimization trains a backdoor encoder responsible for global feature space remodeling, aiming to achieve precise backdoor implantation while preserving core functionality; meanwhile, the inner-layer optimization employs a dynamically optimized generator to adaptively produce optimally concealed triggers for individual samples, achieving coordinated concealment across feature space and visual space. We also introduce multiple loss functions to dynamically balance attack performance and stealthiness, with an adaptive weight scheduling mechanism to enhance training stability. Extensive experiments on various mainstream SSL algorithms and five public datasets demonstrate that: (i) DSBA significantly enhances Attack Success Rate (ASR) and stealthiness while maintaining downstream task accuracy; and (ii) DSBA exhibits superior robustness against existing mainstream defense methods.
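The two-layer structure can be illustrated with a deliberately tiny toy: a linear "encoder" `W` plays the outer layer (backdoor implantation plus utility preservation), and a per-sample additive trigger `delta` plays the inner layer (backdoor strength plus a norm penalty standing in for visual stealth). Everything here (`collaborative_backdoor_train`, the loss weights `lam` and `beta`, the fixed learning rates) is a hypothetical sketch under these simplifying assumptions, not the DSBA method, which uses a neural encoder, a trigger generator network, and adaptive weight scheduling.

```python
import numpy as np

def collaborative_backdoor_train(x_clean, x_poison, f_target, steps=300,
                                 lr_w=0.01, lr_d=0.1, lam=0.1, beta=1.0,
                                 seed=0):
    """Toy bi-level loop. Inner step: update the additive trigger delta
    to push the poisoned feature toward f_target while keeping delta
    small (stealth surrogate). Outer step: update the linear encoder W
    to implant the backdoor while keeping the clean input's feature
    near its frozen reference (utility surrogate)."""
    rng = np.random.default_rng(seed)
    d, k = x_clean.shape[0], f_target.shape[0]
    W = rng.normal(scale=0.1, size=(k, d))
    f_ref = W @ x_clean            # frozen clean feature to preserve
    delta = np.zeros(d)            # trigger starts invisible
    for _ in range(steps):
        # Inner layer: gradient step on ||W(x+delta)-t||^2 + lam*||delta||^2.
        err = W @ (x_poison + delta) - f_target
        delta -= lr_d * (2.0 * W.T @ err + 2.0 * lam * delta)
        # Outer layer: gradient step on backdoor + utility losses w.r.t. W.
        err = W @ (x_poison + delta) - f_target
        util = W @ x_clean - f_ref
        W -= lr_w * (2.0 * np.outer(err, x_poison + delta)
                     + 2.0 * beta * np.outer(util, x_clean))
    return W, delta, f_ref
```

The alternating schedule is the point of the sketch: neither loss is minimized in isolation, so the trigger adapts to the evolving encoder and vice versa, which is the "collaborative" aspect the abstract describes.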
cnn transformer Yangzhou University · Chaohu University · University of Washington