Published on arXiv

2603.05921

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BlackMirror achieves accurate backdoor detection across a wide range of text-to-image backdoor attacks under black-box settings, generalizing to visually diverse backdoored generations where image-similarity baselines fail.

BlackMirror

Novel technique introduced


This paper investigates the challenging task of detecting backdoored text-to-image models under black-box settings and introduces a novel detection framework, BlackMirror. Existing approaches typically rely on analyzing image-level similarity, under the assumption that backdoor-triggered generations exhibit strong consistency across samples. However, they struggle to generalize to recently emerging backdoor attacks, where backdoored generations can appear visually diverse. BlackMirror is motivated by an observation: across backdoor attacks, only partial semantic patterns within the generated image are steadily manipulated, while the rest of the content remains diverse or benign. Accordingly, BlackMirror consists of two components: MirrorMatch, which aligns visual patterns with the corresponding instructions to detect semantic deviations; and MirrorVerify, which evaluates the stability of these deviations across varied prompts to distinguish true backdoor behavior from benign responses. BlackMirror is a general, training-free framework that can be deployed as a plug-and-play module in Model-as-a-Service (MaaS) applications. Comprehensive experiments demonstrate that BlackMirror achieves accurate detection across a wide range of attacks. Code is available at https://github.com/Ferry-Li/BlackMirror.


Key Contributions

  • Observation that backdoor attacks on text-to-image models cause only partial, stable semantic deviations rather than full visual consistency, enabling more general detection
  • MirrorMatch: aligns visual patterns with text instructions to detect semantic deviations introduced by backdoor triggers
  • MirrorVerify: validates stability of detected deviations across varied prompts to distinguish true backdoor behavior from benign variation, forming a training-free plug-and-play detection framework
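The two-stage pipeline above can be sketched in outline. This is a hypothetical illustration, not the authors' implementation: the functions `generate` and `semantic_alignment` (e.g. a CLIP-style text-image similarity in [0, 1]) are assumed to be supplied by the caller, and the thresholds are placeholders.

```python
def detect_backdoor(generate, prompts, semantic_alignment,
                    deviation_threshold=0.5, stability_threshold=0.8):
    """Flag a text-to-image model as suspicious if semantic deviations
    from the prompt recur stably across varied prompts.

    generate: callable prompt -> image (black-box model under test)
    semantic_alignment: callable (prompt, image) -> score in [0, 1]
    """
    # Stage 1 (MirrorMatch-like): for each prompt, check whether the
    # generated image semantically deviates from the instruction.
    deviations = []
    for prompt in prompts:
        image = generate(prompt)
        score = semantic_alignment(prompt, image)
        deviations.append(score < deviation_threshold)

    # Stage 2 (MirrorVerify-like): a backdoor manipulates outputs
    # consistently, while benign randomness does not. Require the
    # deviation to appear in a stable fraction of varied prompts.
    stability = sum(deviations) / len(deviations)
    return stability >= stability_threshold
```

The key design point mirrored here is that neither stage needs model internals or training: only generated images and an off-the-shelf text-image alignment score, which is what makes a plug-and-play MaaS deployment plausible.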

🛡️ Threat Analysis

Model Poisoning

The paper directly defends against backdoor/trojan attacks on text-to-image models, detecting trigger-induced hidden malicious behavior by analyzing partial semantic manipulation in generated outputs. Backdoor detection is explicitly within ML10 scope.


Details

Domains
vision, generative, multimodal
Model Types
diffusion, multimodal
Threat Tags
black_box, training_time, targeted
Applications
text-to-image generation, model-as-a-service (MaaS)