PEPPER: Perception-Guided Perturbation for Robust Backdoor Defense in Text-to-Image Diffusion Models
Oscar Chew 1, Po-Yi Lu 2, Jayden Lin 3, Kuan-Hao Huang 1, Hsuan-Tien Lin 2
Published on arXiv
2511.16830
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
PEPPER substantially reduces attack success rates against text encoder-based backdoor attacks (Rickrolling, Textual Inversion) while preserving generation quality, and improves robustness when combined with T2IShield or UFID.
PEPPER
Novel technique introduced
Recent studies show that text-to-image (T2I) diffusion models are vulnerable to backdoor attacks, where a trigger in the input prompt can steer generation toward harmful or unintended content. To address this, we introduce PEPPER (PErcePtion-Guided PERturbation), a backdoor defense that rewrites the caption into a semantically distant yet visually similar one while adding unobtrusive elements. With this rewriting strategy, PEPPER disrupts the trigger embedded in the input prompt and dilutes the influence of trigger tokens, thereby achieving enhanced robustness. Experiments show that PEPPER is particularly effective against text encoder-based attacks, substantially reducing attack success while preserving generation quality. Beyond this, PEPPER can be paired with any existing defense, yielding consistently stronger and more generalizable robustness than any standalone method. Our code will be released on GitHub.
Key Contributions
- PEPPER: a training-free, inference-time backdoor defense that rewrites prompts to be semantically distant yet visually similar, disrupting embedded trigger tokens in T2I diffusion models.
- Plug-and-play design that can be paired with existing defenses (T2IShield, UFID) to yield consistently stronger and more generalizable robustness across diverse backdoor attack families.
- Demonstrates that intentional prompt lengthening with unobtrusive relevant details dilutes trigger influence, especially against text-encoder-based attacks (Rickrolling, Textual Inversion).
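The dilution idea in the last bullet can be sketched with a toy model (this is an illustration of the dilution intuition, not the paper's implementation): if tokens are embedded as near-orthogonal unit vectors and the prompt representation is a mean pool, appending benign "detail" tokens shrinks the pooled embedding's alignment with the trigger token's direction. The vector dimension, token counts, and pooling choice below are all illustrative assumptions.

```python
# Toy sketch of trigger dilution (NOT the paper's implementation):
# random unit vectors stand in for token embeddings, and mean pooling
# stands in for the prompt representation. Adding unobtrusive tokens
# lowers the pooled vector's cosine alignment with the trigger.
import math
import random

random.seed(0)
DIM = 512  # illustrative embedding dimension


def rand_unit(dim=DIM):
    """Random unit vector; high-dim draws are near-orthogonal."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]


def mean_pool(vectors):
    """Component-wise average of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


trigger = rand_unit()                              # the backdoor trigger token
benign_short = [rand_unit() for _ in range(5)]     # a short caption
benign_extra = [rand_unit() for _ in range(15)]    # unobtrusive added details

short_prompt = benign_short + [trigger]
long_prompt = benign_short + benign_extra + [trigger]

sim_short = cosine(mean_pool(short_prompt), trigger)
sim_long = cosine(mean_pool(long_prompt), trigger)

# More tokens -> the trigger's share of the pooled embedding shrinks.
print(f"trigger alignment: short={sim_short:.3f}, long={sim_long:.3f}")
```

In this toy setting the alignment falls roughly as 1/sqrt(n) with prompt length n, which matches the intuition that lengthening the prompt with relevant but unobtrusive details weakens the trigger's pull on generation.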
🛡️ Threat Analysis
The paper's primary contribution is a defense against backdoor (trojan) attacks on T2I diffusion models, where a trigger in the input prompt steers generation toward attacker-chosen content. PEPPER disrupts trigger tokens embedded in prompts without retraining the model.