BlackMirror: Black-Box Backdoor Detection for Text-to-Image Models via Instruction-Response Deviation
Feiran Li 1,2, Qianqian Xu 3, Shilong Bao 2, Zhiyong Yang 2, Xilin Zhao 4, Xiaochun Cao 5, Qingming Huang 2,6
1 Institute of Information Engineering, Chinese Academy of Sciences
2 University of Chinese Academy of Sciences
3 Institute of Computing Technology, Chinese Academy of Sciences
Published on arXiv
arXiv:2603.05921
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
BlackMirror achieves accurate backdoor detection across a wide range of text-to-image backdoor attacks under black-box settings, generalizing to visually diverse backdoored generations where image-similarity baselines fail.
BlackMirror
Novel technique introduced
This paper investigates the challenging task of detecting backdoored text-to-image models under black-box settings and introduces a novel detection framework, BlackMirror. Existing approaches typically rely on analyzing image-level similarity, under the assumption that backdoor-triggered generations exhibit strong consistency across samples. However, they struggle to generalize to recently emerging backdoor attacks, where backdoored generations can appear visually diverse. BlackMirror is motivated by a key observation: across backdoor attacks, only partial semantic patterns within the generated image are steadily manipulated, while the rest of the content remains diverse or benign. Accordingly, BlackMirror consists of two components: MirrorMatch, which aligns visual patterns with the corresponding instructions to detect semantic deviations; and MirrorVerify, which evaluates the stability of these deviations across varied prompts to distinguish true backdoor behavior from benign responses. BlackMirror is a general, training-free framework that can be deployed as a plug-and-play module in Model-as-a-Service (MaaS) applications. Comprehensive experiments demonstrate that BlackMirror achieves accurate detection across a wide range of attacks. Code is available at https://github.com/Ferry-Li/BlackMirror.
Key Contributions
- Observation that backdoor attacks on text-to-image models cause only partial, stable semantic deviations rather than full visual consistency, enabling more general detection
- MirrorMatch: aligns visual patterns with text instructions to detect semantic deviations introduced by backdoor triggers
- MirrorVerify: validates stability of detected deviations across varied prompts to distinguish true backdoor behavior from benign variation, forming a training-free plug-and-play detection framework
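The two-stage idea behind these contributions can be illustrated with a minimal sketch. Note that `mirror_match` and `mirror_verify` below are hypothetical simplifications written for this summary, not the paper's actual implementation: here "semantic patterns" are reduced to sets of concept labels, whereas the real MirrorMatch aligns visual patterns with instructions using learned models.

```python
# Hedged sketch of a BlackMirror-style pipeline (assumed simplification:
# prompts and generated images are represented as sets of concept labels).

def mirror_match(instruction_concepts, image_concepts):
    """MirrorMatch analogue: return semantic deviations between what the
    instruction asked for and what the image actually contains, i.e.
    instructed concepts that are missing plus unrequested ones that appear."""
    instructed, generated = set(instruction_concepts), set(image_concepts)
    missing = instructed - generated      # instructed but absent
    injected = generated - instructed     # present but never asked for
    return missing | injected

def mirror_verify(deviations_per_prompt, stability_threshold=0.8):
    """MirrorVerify analogue: flag a backdoor if some single deviation
    recurs stably across varied prompts; benign variation is diverse,
    so no one deviation should dominate."""
    if not deviations_per_prompt:
        return False
    counts = {}
    for devs in deviations_per_prompt:
        for d in devs:
            counts[d] = counts.get(d, 0) + 1
    n = len(deviations_per_prompt)
    return any(c / n >= stability_threshold for c in counts.values())

# Toy probe of a backdoored model: a trigger steadily injects "hat"
# while the rest of each generation follows the (varied) prompt.
prompts = [["dog"], ["car"], ["tree"], ["boat"]]
images  = [["dog", "hat"], ["car", "hat"], ["tree", "hat"], ["boat", "hat"]]
deviations = [mirror_match(p, i) for p, i in zip(prompts, images)]
print(mirror_verify(deviations))  # True: "hat" recurs in every generation
```

This captures why the approach generalizes where image-similarity baselines fail: the backdoored generations above are visually diverse (dog, car, tree, boat), yet the partial manipulation ("hat") is stable across prompts.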
🛡️ Threat Analysis
The paper directly defends against backdoor/trojan attacks on text-to-image models: it detects trigger-induced hidden malicious behavior by analyzing partial semantic manipulation in generated outputs. Backdoor detection falls explicitly within ML10 (Model Poisoning) scope.