
Semantic-level Backdoor Attack against Text-to-Image Diffusion Models

Tianxin Chen 1, Wenbo Jiang 2, Hongqiao Chen 1, Zhirun Zheng 3, Cheng Huang 1

0 citations · 38 references · arXiv (Cornell University)


Published on arXiv

arXiv:2602.04898

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

SemBD achieves a 100% attack success rate while significantly reducing the detection success rates of three state-of-the-art input-level defenses (T2IShield, UFID, NaviT2I).

SemBD

Novel technique introduced


Text-to-image (T2I) diffusion models are widely adopted for their strong generative capabilities, yet remain vulnerable to backdoor attacks. Existing attacks typically rely on fixed textual triggers and single-entity backdoor targets, making them highly susceptible to enumeration-based input defenses and attention-consistency detection. In this work, we propose the Semantic-level Backdoor Attack (SemBD), which implants backdoors at the representation level by defining triggers as continuous semantic regions rather than discrete textual patterns. Concretely, SemBD injects semantic backdoors by distillation-based editing of the key and value projection matrices in cross-attention layers, enabling diverse prompts with identical semantic compositions to reliably activate the backdoor. To further enhance stealthiness, SemBD incorporates semantic regularization to prevent unintended activation under incomplete semantics, as well as multi-entity backdoor targets that avoid highly consistent cross-attention patterns. Extensive experiments demonstrate that SemBD achieves a 100% attack success rate while maintaining strong robustness against state-of-the-art input-level defenses.
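The "editing of key and value projection matrices" described in the abstract can be pictured as a closed-form least-squares update: make trigger-semantic embeddings map to the target concept's keys/values while pinning down the outputs for embeddings that must stay clean. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function name `edit_kv_projection`, the matrix shapes, and the ridge weight `lam` are all assumptions for the example.

```python
import numpy as np

def edit_kv_projection(W, E_trig, E_tgt, E_keep, lam=1.0):
    """Closed-form edit of a cross-attention K or V projection (illustrative).

    W      : (d_out, d_in) original projection matrix
    E_trig : (n, d_in) embeddings carrying the trigger semantics
    E_tgt  : (n, d_in) embeddings whose original outputs the trigger should adopt
    E_keep : (m, d_in) embeddings whose outputs must be preserved (clean behavior)
    lam    : weight on the preservation term
    """
    T = E_tgt @ W.T  # desired outputs for trigger embeddings, shape (n, d_out)
    # Normal equations for: min_W' ||E_trig W'^T - T||^2 + lam ||E_keep (W' - W)^T||^2
    A = E_trig.T @ E_trig + lam * (E_keep.T @ E_keep)          # (d_in, d_in)
    B = E_trig.T @ T + lam * (E_keep.T @ (E_keep @ W.T))       # (d_in, d_out)
    return np.linalg.solve(A, B).T                             # edited W', (d_out, d_in)
```

Because the original `W` is a feasible point of the objective (zero preservation error), the edited matrix always moves trigger outputs at least as close to the target outputs as `W` did, which is what makes semantically equivalent prompts, rather than one fixed token string, activate the backdoor.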


Key Contributions

  • SemBD: first semantic-level backdoor for T2I diffusion models, defining triggers as continuous semantic compositions (subject/action/object/scene) rather than fixed textual patterns, injected via distillation-based editing of cross-attention key/value matrices
  • Semantic regularization that constrains trigger boundaries to prevent unintended activation under incomplete semantic inputs, preserving normal generation fidelity
  • Multi-entity backdoor targets that diversify cross-attention distributions upon activation, defeating attention-consistency-based defenses (T2IShield, UFID, NaviT2I)
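The semantic-regularization contribution above can be illustrated with a toy penalty: enumerate every incomplete subset of the trigger's semantic slots (subject/action/object/scene) and penalize any drift of the edited projection on those partial embeddings, so the backdoor fires only on the full composition. The function names and the squared-drift form below are assumptions for illustration, not the paper's actual regularizer.

```python
import numpy as np
from itertools import combinations

def incomplete_subsets(components):
    """Every proper, non-empty subset of the trigger's semantic slots.

    The full composition is deliberately excluded: only partial
    semantics should be regularized toward clean behavior.
    """
    out = []
    for r in range(1, len(components)):
        out.extend(combinations(components, r))
    return out

def semantic_reg(W_edit, W_clean, E_partial):
    """Mean squared output drift on incomplete-semantics embeddings.

    W_edit, W_clean : (d_out, d_in) edited and original projections
    E_partial       : (p, d_in) embeddings of incomplete trigger subsets
    Driving this toward zero keeps partial triggers behaving like clean prompts.
    """
    return float(np.mean((E_partial @ W_edit.T - E_partial @ W_clean.T) ** 2))
```

For a four-slot trigger this yields 2^4 - 2 = 14 incomplete subsets to regularize, which is the mechanism preventing, say, the subject alone from activating the target generation.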

🛡️ Threat Analysis

Model Poisoning

SemBD injects hidden backdoor behavior into T2I diffusion models by modifying cross-attention key/value projection matrices; the model behaves normally on clean prompts and activates only when specific semantic compositions are present — a textbook backdoor/trojan attack.


Details

Domains
vision · nlp · generative · multimodal
Model Types
diffusion · transformer
Threat Tags
white_box · training_time · targeted · digital
Applications
text-to-image generation