BadRSSD: Backdoor Attacks on Regularized Self-Supervised Diffusion Models

Jiayao Wang 1, Yiping Zhang 1, Mohammad Maruf Hasan 1, Xiaoying Lei 1, Jiale Zhang 1, Junwu Zhu 1, Qilin Wu 2, Dongfang Zhao 3

Published on arXiv: 2603.01019

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BadRSSD substantially outperforms existing backdoor attacks in FID and MSE metrics across multiple architectures while successfully evading state-of-the-art backdoor defenses

BadRSSD

Novel technique introduced


Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a distinct threat: unlike traditional attacks targeting generative outputs, its unconstrained latent semantic space allows for stealthy backdoors, permitting malicious control upon triggering. In this paper, we propose BadRSSD, the first backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those of a target image, then controls the denoising trajectory during diffusion by applying coordinated constraints across latent, pixel, and feature distribution spaces to steer the model toward generating the specified target. Additionally, we integrate representation dispersion regularization into the constraint framework to maintain feature space uniformity, significantly enhancing attack stealth. This approach preserves normal model functionality (high utility) while achieving precise target generation upon trigger activation (high specificity). Experiments on multiple benchmark datasets demonstrate that BadRSSD substantially outperforms existing attacks in both FID and MSE metrics, reliably establishing backdoors across different architectures and configurations, and effectively resisting state-of-the-art backdoor defenses.
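The core of the representation hijack is a pull between poisoned-sample and target-image representations measured in a PCA subspace fitted on clean features. The sketch below illustrates that idea only; all names (`pca_basis`, `hijack_loss`, the component count) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a PCA-space representation hijack objective.
# Function names and shapes are assumptions, not BadRSSD's actual code.
import numpy as np

def pca_basis(clean_reps: np.ndarray, n_components: int):
    """Fit a PCA basis (mean + top principal directions) on clean representations."""
    mean = clean_reps.mean(axis=0)
    # SVD of the centered batch; rows of vt are principal directions.
    _, _, vt = np.linalg.svd(clean_reps - mean, full_matrices=False)
    return mean, vt[:n_components]  # shapes: (d,), (k, d)

def hijack_loss(poisoned_rep: np.ndarray, target_rep: np.ndarray,
                mean: np.ndarray, components: np.ndarray) -> float:
    """Squared distance between poisoned and target representations in PCA space.

    Minimizing this pulls the triggered sample's semantics toward the target image.
    """
    z_poisoned = components @ (poisoned_rep - mean)
    z_target = components @ (target_rep - mean)
    return float(np.sum((z_poisoned - z_target) ** 2))
```

Working in a low-rank PCA subspace (rather than the full feature space) restricts the pull to the dominant semantic directions, which is consistent with the paper's framing of hijacking semantics rather than raw features.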


Key Contributions

  • First backdoor attack targeting the representation layer of self-supervised diffusion models, manipulating PCA-space semantics of triggered samples toward a target image
  • Coordinated multi-space constraint framework (latent, pixel, and feature distribution) that steers denoising trajectories to generate attacker-specified outputs on trigger activation
  • Representation dispersion regularization integrated into the constraint framework to preserve feature space uniformity, significantly improving attack stealth while resisting SOTA defenses
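The contributions above combine several constraint terms (latent, pixel, feature distribution) with a dispersion regularizer that keeps the feature space uniform so the backdoor does not leave a detectable cluster. A minimal sketch, assuming a log-pairwise-Gaussian uniformity term (in the style of common alignment/uniformity losses) and hypothetical weighting coefficients; the paper's exact formulation may differ:

```python
# Hedged sketch: feature-space dispersion regularizer plus a weighted
# multi-space total loss. Weights and the exact uniformity form are assumptions.
import numpy as np

def dispersion_regularizer(reps: np.ndarray, t: float = 2.0) -> float:
    """Penalize representations collapsing together.

    Computes log of the mean pairwise Gaussian potential over the batch;
    lower values mean the features are more spread out (more uniform).
    """
    sq_dists = np.sum((reps[:, None, :] - reps[None, :, :]) ** 2, axis=-1)
    n = reps.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]  # exclude self-distances
    return float(np.log(np.mean(np.exp(-t * off_diag))))

def total_loss(latent_term: float, pixel_term: float, feature_term: float,
               dispersion_term: float,
               weights=(1.0, 1.0, 1.0, 0.1)) -> float:
    """Weighted sum of the coordinated constraints plus dispersion regularization."""
    w_lat, w_pix, w_feat, w_disp = weights
    return (w_lat * latent_term + w_pix * pixel_term
            + w_feat * feature_term + w_disp * dispersion_term)
```

Intuitively, the first three terms steer the denoising trajectory toward the attacker's target, while the dispersion term counteracts the feature clustering that defense methods typically probe for.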

🛡️ Threat Analysis

Model Poisoning

BadRSSD is a trigger-based backdoor attack: poisoned samples containing the trigger cause the diffusion model to generate a specific target image, while the model behaves normally otherwise — the defining characteristic of a neural trojan. Because the hidden behavior is embedded in the representation layer and activated only by a specific trigger, the attack maps directly to OWASP ML10 (Model Poisoning).


Details

Domains
vision, generative
Model Types
diffusion
Threat Tags
white_box, training_time, targeted, digital
Applications
image generation, self-supervised representation learning