defense · arXiv · Nov 20, 2025
Oscar Chew, Po-Yi Lu, Jayden Lin et al. · Texas A&M University · National Taiwan University · University of Michigan
Defends T2I diffusion models from backdoor triggers by rewriting prompts to be semantically distant yet visually similar, disrupting trigger tokens at inference time.
Model Poisoning · vision · nlp · generative
Recent studies show that text-to-image (T2I) diffusion models are vulnerable to backdoor attacks, where a trigger in the input prompt can steer generation toward harmful or unintended content. To address this, we introduce PEPPER (PErcePtion Guided PERturbation), a backdoor defense that rewrites the caption into a semantically distant yet visually similar one while adding unobtrusive elements. With this rewriting strategy, PEPPER disrupts the trigger embedded in the input prompt and dilutes the influence of trigger tokens, thereby achieving enhanced robustness. Experiments show that PEPPER is particularly effective against text-encoder-based attacks, substantially reducing attack success rates while preserving generation quality. Beyond this, PEPPER can be paired with any existing defense, yielding consistently stronger and more generalizable robustness than any standalone method. Our code will be released on GitHub.
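To make the dilution idea concrete, here is a minimal, self-contained sketch of an inference-time prompt-rewriting defense in the spirit described above. This is an illustration, not the authors' implementation: the filler descriptors, the `rewrite_prompt` helper, and the token-share proxy for "influence" are all assumptions introduced for demonstration.

```python
# Hypothetical sketch: dilute a (possibly poisoned) prompt by appending
# unobtrusive visual descriptors, so no single token dominates the text
# encoder's input. The filler phrases and helper names are illustrative,
# not taken from the PEPPER paper.

FILLERS = ["softly lit", "in fine detail", "natural colors"]  # assumed unobtrusive additions

def rewrite_prompt(prompt: str, fillers=FILLERS) -> str:
    """Append unobtrusive descriptors to lower each token's relative weight."""
    return prompt.rstrip(".") + ", " + ", ".join(fillers)

def token_share(prompt: str, token: str) -> float:
    """Crude proxy for a token's influence: its fraction of all tokens."""
    toks = prompt.lower().replace(",", " ").split()
    return toks.count(token.lower()) / len(toks)

original = "a photo of a cat <trigger>"   # "<trigger>" stands in for a backdoor token
rewritten = rewrite_prompt(original)

# The suspected trigger token now occupies a smaller fraction of the prompt,
# approximating the dilution effect the defense aims for.
print(token_share(original, "<trigger>"), token_share(rewritten, "<trigger>"))
```

A real defense would additionally check that the rewritten caption stays visually similar (e.g., via a perceptual or embedding-space metric) while moving semantically away from the trigger; that step is omitted here for brevity.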