
Data-Chain Backdoor: Do You Trust Diffusion Models as Generative Data Supplier?

Junchi Lu 1, Xinke Li 2, Yuheng Liu 1, Qi Alfred Chen 1

0 citations · 29 references · arXiv


Published on arXiv · 2512.15769

Model Poisoning (OWASP ML Top 10 — ML10)

AI Supply Chain Attacks (OWASP ML Top 10 — ML06)

Key Finding

Backdoor triggers embedded in compromised diffusion models are consistently reproduced in synthetic augmentation data and inherited by downstream classifiers with attack success rates comparable to conventional direct data-poisoning backdoor attacks, even under clean-label conditions.

Data-Chain Backdoor (DCB)

Novel technique introduced


The increasing use of generative models such as diffusion models for synthetic data augmentation has greatly reduced the cost of data collection and labeling in downstream perception tasks. However, this new data-sourcing paradigm may introduce serious security concerns. Publicly available generative models are often reused without verification, raising a fundamental question about their safety and trustworthiness. This work investigates backdoor propagation in this emerging generative data supply chain, a threat we term the Data-Chain Backdoor (DCB). Specifically, we find that open-source diffusion models can become hidden carriers of backdoors. Their strong distribution-fitting ability causes them to memorize and reproduce backdoor triggers during generation, which are subsequently inherited by downstream models, resulting in severe security risks. This threat is particularly concerning under clean-label attack scenarios, as it remains effective while having negligible impact on the utility of the synthetic data. We study two attacker choices for obtaining a backdoor-carrying generator: training from scratch and fine-tuning. While naive fine-tuning leads to weak inheritance of the backdoor, we find that novel designs in the loss objectives and trigger processing can substantially improve the generator's ability to preserve trigger patterns, making fine-tuning a low-cost attack path. We evaluate the effectiveness of DCB under the standard augmentation protocol and further assess data-scarce settings. Across multiple trigger types, we observe that the trigger pattern is consistently retained in the synthetic data, with attack efficacy comparable to conventional backdoor attacks.
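The clean-label setting the abstract describes, where triggers persist without any label flips, can be illustrated with a minimal sketch. The patch design, poisoning rate, and function names below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def stamp_trigger(image, patch):
    """Stamp a small trigger patch onto the bottom-right corner of an
    (H, W, C) image. An illustrative trigger, not the paper's design."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[-h:, -w:, :] = patch
    return out

def poison_clean_label(images, labels, target_class, patch, rate=0.1, seed=0):
    """Clean-label poisoning: stamp the trigger onto a fraction of
    TARGET-class images only; labels are left completely untouched."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * len(idx))), replace=False)
    for i in chosen:
        poisoned[i] = stamp_trigger(poisoned[i], patch)
    return poisoned, chosen
```

In the DCB setting, a generator trained (or fine-tuned) on such a set memorizes the patch and reproduces it in synthetic samples, so the downstream classifier is poisoned without the attacker ever touching its training pipeline.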


Key Contributions

  • First threat model exploiting generative data supply chains: a backdoored open-source diffusion model silently propagates triggers into synthetic training data, which downstream classifiers inherit without any direct manipulation of the victim's training pipeline.
  • Novel loss objectives and trigger processing techniques that substantially improve a fine-tuned diffusion model's ability to preserve and reproduce backdoor trigger patterns in generated images, enabling a low-cost attack path.
  • Empirical demonstration that DCB is effective under clean-label attack scenarios across multiple trigger types and data-scarce settings, with attack efficacy comparable to conventional direct data-poisoning backdoor attacks.
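The paper's exact loss objectives are not reproduced in this summary. As one plausible sketch of the idea, a region-weighted denoising loss would upweight reconstruction error inside the trigger region, so a fine-tuned diffusion model is penalized more heavily for blurring or dropping the trigger. The function name, mask convention, and weighting scheme below are all assumptions:

```python
import numpy as np

def trigger_weighted_mse(eps_pred, eps_true, trigger_mask, w=5.0):
    """Hypothetical DDPM-style noise-prediction loss with extra weight on
    pixels inside the trigger region (trigger_mask == 1 there).

    weights are `w` inside the trigger and 1 elsewhere, so errors on the
    trigger pattern dominate the gradient during fine-tuning.
    """
    weights = 1.0 + (w - 1.0) * trigger_mask
    sq_err = (eps_pred - eps_true) ** 2
    return float((weights * sq_err).mean())
```

The design intuition: with a uniform MSE, a small trigger patch contributes only a tiny fraction of the loss and naive fine-tuning can safely ignore it, which matches the weak inheritance the paper reports for the naive baseline.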

🛡️ Threat Analysis

AI Supply Chain Attacks

The explicit attack vector is the generative data supply chain: victims reuse publicly available open-source diffusion models without verification, so a compromised model constitutes a supply-chain attack. The paper explicitly frames the generative data pipeline as an attack surface and the diffusion-model hub ecosystem as the distribution channel.

Model Poisoning

The core contribution is embedding hidden backdoor triggers into diffusion models (via training from scratch or fine-tuning with novel loss objectives) such that generated synthetic data carries the triggers, which downstream classifiers then inherit: a classic trojan/backdoor attack with a novel propagation vector.
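Attack efficacy in backdoor studies like this is typically reported as the attack success rate (ASR): the fraction of off-target inputs that the victim classifier assigns to the attacker's target class once the trigger is applied. A minimal sketch of that metric, where the `predict` and `add_trigger` callables are hypothetical stand-ins for the victim model and trigger stamping:

```python
import numpy as np

def attack_success_rate(predict, x_clean, y_true, target_class, add_trigger):
    """ASR over inputs whose true label is NOT the target class:
    stamp the trigger on each, then count predictions of `target_class`.

    `predict` maps a batch of images to predicted class indices;
    `add_trigger` stamps the trigger onto a single image.
    """
    keep = y_true != target_class              # ASR is measured off-target
    x_trig = np.stack([add_trigger(x) for x in x_clean[keep]])
    preds = predict(x_trig)
    return float((preds == target_class).mean())
```

Under this metric, the paper's finding is that classifiers trained purely on the backdoored generator's synthetic data reach ASR comparable to direct data poisoning.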


Details

Domains
vision, generative
Model Types
diffusion, CNN, transformer
Threat Tags
training_time, targeted, digital, grey_box
Datasets
CIFAR-10, ImageNet
Applications
image classification, synthetic data augmentation, few-shot and low-shot learning, medical imaging classification, object detection