DTVI: Dual-Stage Textual and Visual Intervention for Safe Text-to-Image Generation
Binhong Tan, Zhaoxin Wang, Handing Wang
Published on arXiv
2603.22041
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves average Defense Success Rate of 94.43% across sexual-category benchmarks and 88.56% across seven unsafe categories while maintaining generation quality on benign prompts
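As a worked illustration of the headline metric, a minimal sketch of how a Defense Success Rate (DSR) is typically computed on a safety benchmark: the fraction of unsafe prompts for which the defended model does not produce unsafe output. The counts below are illustrative assumptions, not the paper's benchmark data.

```python
def defense_success_rate(unsafe_flags):
    """unsafe_flags[i] is True if the image generated for unsafe
    prompt i was still judged unsafe (i.e., the defense failed)."""
    total = len(unsafe_flags)
    blocked = sum(1 for flag in unsafe_flags if not flag)
    return 100.0 * blocked / total

# Toy example: 9443 of 10000 unsafe prompts blocked → DSR of 94.43%
print(defense_success_rate([False] * 9443 + [True] * 557))  # → 94.43
```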
DTVI
Novel technique introduced
Text-to-Image (T2I) diffusion models have demonstrated strong generation ability, but their potential to generate unsafe content raises significant safety concerns. Existing inference-time defense methods typically perform category-agnostic token-level intervention in the text embedding space, which fails to capture malicious semantics distributed across the full token sequence and remains vulnerable to adversarial prompts. In this paper, we propose DTVI, a dual-stage inference-time defense framework for safe T2I generation. Unlike existing methods that intervene on specific token embeddings, our method introduces category-aware sequence-level intervention on the full prompt embedding to better capture distributed malicious semantics, and further attenuates the remaining unsafe influences during the visual generation stage. Experimental results on real-world unsafe prompts, adversarial prompts, and multiple harmful categories show that our method achieves effective and robust defense, obtaining an average Defense Success Rate (DSR) of 94.43% across sexual-category benchmarks and 88.56% across seven unsafe categories, while preserving generation quality on benign prompts.
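One plausible reading of "category-aware sequence-level intervention" can be sketched as follows: instead of editing individual token embeddings, remove the component of the entire prompt-embedding sequence that aligns with a learned per-category unsafe direction. This is an assumption-laden sketch, not the authors' implementation; the direction `u_cat` and the `strength` parameter are hypothetical.

```python
import numpy as np

def sequence_level_intervention(prompt_emb, u_cat, strength=1.0):
    """prompt_emb: (seq_len, dim) text-encoder output for the full prompt.
    u_cat: (dim,) direction associated with the detected unsafe category.
    Projects out the unsafe direction from every token embedding, so
    malicious semantics spread across the sequence are attenuated."""
    u = u_cat / np.linalg.norm(u_cat)       # unit vector for the category
    coeffs = prompt_emb @ u                  # (seq_len,) alignment per token
    return prompt_emb - strength * np.outer(coeffs, u)

# Usage: at full strength, no token retains a component along u_cat.
rng = np.random.default_rng(0)
emb = rng.standard_normal((77, 768))         # CLIP-like sequence length/dim
u_cat = rng.standard_normal(768)
cleaned = sequence_level_intervention(emb, u_cat)
print(np.allclose(cleaned @ (u_cat / np.linalg.norm(u_cat)), 0.0))  # → True
```

Operating on the whole `(seq_len, dim)` matrix at once is what distinguishes this from token-level edits: a harmful concept carried jointly by several tokens is suppressed even when no single token embedding is strongly aligned with it.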
Key Contributions
- Category-aware sequence-level intervention on full prompt embeddings to capture distributed malicious semantics across token sequences
- Dual-stage defense framework combining textual intervention and visual-stage attenuation
- Achieves 94.43% defense success rate on sexual content and 88.56% across seven unsafe categories while maintaining benign prompt quality
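The visual-stage half of the dual-stage framework can be sketched, under assumptions, as safety-aware guidance during denoising: in addition to standard classifier-free guidance toward the prompt, the noise update is pushed away from an unsafe-concept prediction. This mirrors generic negative/safety-guidance schemes; the paper's exact attenuation rule is not reproduced here, and `g` / `g_safe` are hypothetical scales.

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, eps_unsafe, g=7.5, g_safe=4.0):
    """Combine per-step noise predictions: classifier-free guidance
    toward the prompt, minus a term that attenuates the component
    pulling the sample toward the unsafe concept."""
    cfg = eps_uncond + g * (eps_cond - eps_uncond)
    return cfg - g_safe * (eps_unsafe - eps_uncond)

# Toy check with scalar "noise predictions":
# 0 + 7.5 * (1 - 0) - 4.0 * (0.5 - 0) = 5.5
print(guided_noise(np.array(0.0), np.array(1.0), np.array(0.5)))  # → 5.5
```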
🛡️ Threat Analysis
The paper addresses adversarial prompts crafted to evade safety mechanisms and make the T2I model generate harmful content at inference time. The defense counters such inputs in two stages: it intervenes on the prompt embedding before generation begins, then attenuates remaining unsafe influences during the visual generation stage.