defense 2026

Inference-Only Prompt Projection for Safe Text-to-Image Generation with TV Guarantees

Minhyuk Lee , Hyekyung Yoon , Myungjoo Kang

0 citations · 42 references · arXiv


Published on arXiv · 2602.00616

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves 16.7–60.0% relative reduction in inappropriate content percentage versus strong model-level alignment baselines while preserving benign prompt-image alignment on COCO near the unaligned reference

Prompt Projection Framework for the Safety-Prompt Alignment Trade-off (SPAT)

Novel technique introduced


Text-to-Image (T2I) diffusion models enable high-quality open-ended synthesis, but their real-world deployment demands safeguards that suppress unsafe generations without degrading benign prompt-image alignment. We formalize this tension through a total variation (TV) lens: once the reference conditional distribution is fixed, any nontrivial reduction in unsafe generations necessarily incurs TV deviation from the reference, yielding a principled Safety-Prompt Alignment Trade-off (SPAT). Guided by this view, we propose an inference-only prompt projection framework that selectively intervenes on high-risk prompts via a surrogate objective with verification, mapping them into a tolerance-controlled safe set while leaving benign prompts effectively unchanged, without retraining or fine-tuning the generator. Across four datasets and three diffusion backbones, our approach achieves 16.7–60.0% relative reductions in inappropriate percentage (IP) versus strong model-level alignment baselines, while preserving benign prompt-image alignment on COCO near the unaligned reference.
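The TV claim in the abstract follows from the standard characterization of total variation distance as the largest discrepancy over events. A minimal sketch of the bound, in our own notation (the paper's exact symbols may differ): let $U$ be the set of unsafe images, $p_{\mathrm{ref}}$ the reference conditional, and $p_\theta$ the safeguarded conditional.

```latex
% TV is the supremum of event-probability gaps, so taking A = U gives:
\mathrm{TV}\bigl(p_\theta(\cdot\mid c),\, p_{\mathrm{ref}}(\cdot\mid c)\bigr)
  = \sup_{A} \bigl|\, p_\theta(A\mid c) - p_{\mathrm{ref}}(A\mid c) \,\bigr|
  \;\ge\; p_{\mathrm{ref}}(U\mid c) - p_\theta(U\mid c).
```

Hence any reduction of $\delta$ in unsafe probability mass forces a TV deviation of at least $\delta$ from the reference, which is the trade-off the authors name SPAT.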


Key Contributions

  • Formalizes the Safety-Prompt Alignment Trade-off (SPAT) via total variation theory, proving that any nontrivial reduction in unsafe generations necessarily incurs distributional deviation from the reference conditional distribution
  • Proposes an inference-only prompt projection framework that rewrites high-risk prompts via an LLM surrogate and verifies output safety with a VLM, without modifying or retraining the T2I generator
  • Demonstrates 16.7–60.0% relative reduction in inappropriate image generation across four datasets and three diffusion backbones while preserving benign prompt-image alignment near the unaligned reference
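The projection loop described above can be sketched in a few lines. The risk scorer, LLM rewriter, and VLM verifier below are toy stand-in stubs (the names `risk_score`, `llm_rewrite`, and `vlm_verify` are ours, not the paper's); a real deployment would call a trained surrogate model and a vision-language safety checker.

```python
# Hypothetical sketch of inference-only prompt projection:
# benign prompts pass through untouched; high-risk prompts are
# rewritten by a surrogate and re-verified before generation.

RISK_THRESHOLD = 0.5
MAX_ATTEMPTS = 3

def risk_score(prompt: str) -> float:
    """Stub surrogate scorer: flags a toy blocklist term."""
    return 1.0 if "violent" in prompt.lower() else 0.0

def llm_rewrite(prompt: str) -> str:
    """Stub LLM rewriter: projects the prompt toward a safe paraphrase."""
    return prompt.lower().replace("violent", "dramatic")

def vlm_verify(prompt: str) -> bool:
    """Stub verifier: accepts prompts the scorer now rates low-risk."""
    return risk_score(prompt) < RISK_THRESHOLD

def project_prompt(prompt: str) -> str:
    """Selective intervention with verification, per the paper's recipe."""
    if risk_score(prompt) < RISK_THRESHOLD:
        return prompt                  # benign path: no deviation incurred
    candidate = prompt
    for _ in range(MAX_ATTEMPTS):
        candidate = llm_rewrite(candidate)
        if vlm_verify(candidate):
            return candidate           # projected into the safe set
    return ""                          # refuse if verification keeps failing

print(project_prompt("a cat on a sofa"))   # benign: returned unchanged
print(project_prompt("a violent scene"))   # high-risk: rewritten
```

Because the generator itself is never touched, this fits any T2I backbone; only the prompt distribution is modified, which is what keeps benign COCO alignment near the unaligned reference.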

🛡️ Threat Analysis


Details

Domains
vision, generative, nlp
Model Types
diffusion, vlm, llm
Threat Tags
inference_time
Datasets
COCO
Applications
text-to-image generation, content moderation, safe image synthesis