Scaling Exposes the Trigger: Input-Level Backdoor Detection in Text-to-Image Diffusion Models via Cross-Attention Scaling
Zida Li, Jun Li, Yuzhe Sha, Ziqiang Li, Lizhi Xiong, Zhangjie Fu
Published on arXiv
2604.12446
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
Improves AUROC by 9.1% and accuracy by 6.5% over best baseline, with particularly strong gains under stealthy implicit-trigger scenarios
SET
Novel technique introduced
Text-to-image (T2I) diffusion models have achieved remarkable success in image synthesis, but their reliance on large-scale data and open ecosystems introduces serious backdoor security risks. Existing defenses, particularly input-level methods, are more practical for deployment but often rely on observable anomalies that become unreliable under stealthy, semantics-preserving trigger designs. As modern backdoor attacks increasingly embed triggers into natural inputs, these methods degrade substantially, raising a critical question: can more stable, implicit, and trigger-agnostic differences between benign and backdoor inputs be exploited for detection? In this work, we address this challenge from an active probing perspective. We introduce controlled scaling perturbations on cross-attention and uncover a novel phenomenon termed Cross-Attention Scaling Response Divergence (CSRD), where benign and backdoor inputs exhibit systematically different response evolution patterns across denoising steps. Building on this insight, we propose SET, an input-level backdoor detection framework that constructs response-offset features under multi-scale perturbations and learns a compact benign response space from a small set of clean samples. Detection is then performed by measuring deviations from this learned space, without requiring prior knowledge of the attack or access to model training. Extensive experiments demonstrate that SET consistently outperforms existing baselines across diverse attack methods, trigger types, and model settings, with particularly strong gains under stealthy implicit-trigger scenarios. Overall, SET improves AUROC by 9.1% and ACC by 6.5% over the best baseline, highlighting its effectiveness and robustness for practical deployment.
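The probing idea can be sketched in a toy form: multiply the cross-attention logits by a scaling factor and measure how far the output drifts from the unperturbed response. This is an illustrative NumPy sketch, not the paper's implementation; the function names, the choice of scaling the logits (rather than the attention weights), and the scale values are assumptions.

```python
import numpy as np

def cross_attention(query, key, value, scale=1.0):
    """Scaled dot-product cross-attention; `scale` multiplies the
    attention logits, modeling a controlled scaling perturbation."""
    logits = scale * (query @ key.T) / np.sqrt(query.shape[-1])
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ value

def response_offsets(query, key, value, scales=(0.5, 1.0, 2.0)):
    """L2 offset of the attention output under each scaling factor,
    relative to the unperturbed (scale=1.0) response. In SET these
    offsets would be collected across denoising steps to form the
    response-offset feature vector."""
    base = cross_attention(query, key, value, scale=1.0)
    return np.array([
        np.linalg.norm(cross_attention(query, key, value, s) - base)
        for s in scales
    ])

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
offsets = response_offsets(q, k, v)  # one offset per scaling factor
```

The CSRD claim is that these offset trajectories evolve differently across denoising steps for benign versus backdoor inputs, which is what makes the feature trigger-agnostic.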
Key Contributions
- Discovery of Cross-Attention Scaling Response Divergence (CSRD) phenomenon where benign and backdoor inputs exhibit systematically different response patterns under controlled cross-attention scaling perturbations
- SET framework for input-level backdoor detection that constructs response-offset features under multi-scale perturbations and learns a compact benign response space from clean samples
- Strong performance against stealthy implicit-trigger attacks where existing input-level defenses degrade substantially
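The detection side of the pipeline — fitting a compact benign response space from a few clean samples and flagging deviations — can be sketched with a simple Gaussian model scored by Mahalanobis distance. The paper's exact estimator is not specified in this summary, so the class name, the Gaussian assumption, and the regularization constant are illustrative choices.

```python
import numpy as np

class BenignResponseSpace:
    """Illustrative detector: fit a Gaussian over response-offset
    features from clean prompts, then score new inputs by their
    Mahalanobis distance from that benign space."""

    def fit(self, clean_offsets):
        # clean_offsets: (n_samples, n_scales) response-offset features
        self.mean = clean_offsets.mean(axis=0)
        cov = np.cov(clean_offsets, rowvar=False)
        # small ridge term keeps the covariance invertible when the
        # clean set is small, as assumed in the SET setting
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, offsets):
        d = offsets - self.mean
        return float(np.sqrt(d @ self.prec @ d))

rng = np.random.default_rng(1)
clean = rng.normal(1.0, 0.1, size=(64, 3))   # synthetic benign features
space = BenignResponseSpace().fit(clean)
benign_score = space.score(rng.normal(1.0, 0.1, size=3))
suspect_score = space.score(np.array([3.0, 3.0, 3.0]))  # far from benign space
```

An input is flagged as backdoored when its score exceeds a threshold calibrated on the clean set; no knowledge of the attack or access to model training is needed, matching the abstract's deployment assumptions.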
🛡️ Threat Analysis
The paper addresses backdoor detection in diffusion models. Backdoor attacks involve embedding hidden, targeted malicious behavior (generating attacker-specified harmful/NSFW content when triggered) that activates only with specific triggers while the model behaves normally otherwise. The paper proposes SET, a defense framework that detects backdoor triggers at the input level by exploiting Cross-Attention Scaling Response Divergence (CSRD) between benign and backdoor inputs.