defense 2025

Value-Aligned Prompt Moderation via Zero-Shot Agentic Rewriting for Safe Image Generation

Xin Zhao 1,2,3, Xiaojun Chen 1,2,3, Bingshan Liu 1,2,3, Zeyao Liu 1,2,3, Zhendong Zhao 1,2,3, Xiaoyan Gu 1,2,3

1 citation · 34 references · arXiv

Published on arXiv: 2511.11693

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

VALOR reduces unsafe outputs by up to 100% across adversarial, ambiguous, and value-sensitive prompt categories while preserving prompt usefulness and image quality

VALOR (Value-Aligned LLM-Overseen Rewriter)

Novel technique introduced


Generative vision-language models like Stable Diffusion demonstrate remarkable capabilities in creative media synthesis, but they also pose substantial risks of producing unsafe, offensive, or culturally inappropriate content when prompted adversarially. Current defenses struggle to align outputs with human values without sacrificing generation quality or incurring high costs. To address these challenges, we introduce VALOR (Value-Aligned LLM-Overseen Rewriter), a modular, zero-shot agentic framework for safer and more helpful text-to-image generation. VALOR integrates layered prompt analysis with human-aligned value reasoning: a multi-level NSFW detector filters lexical and semantic risks; a cultural value alignment module identifies violations of social norms, legality, and representational ethics; and an intention disambiguator detects subtle or indirect unsafe implications. When unsafe content is detected, prompts are selectively rewritten by a large language model under dynamic, role-specific instructions designed to preserve user intent while enforcing alignment. If the generated image still fails a safety check, VALOR optionally performs a stylistic regeneration to steer the output toward a safer visual domain without altering core semantics. Experiments across adversarial, ambiguous, and value-sensitive prompts show that VALOR reduces unsafe outputs by up to 100% while preserving prompt usefulness and creativity. These results highlight VALOR as a scalable and effective approach for deploying safe, aligned, and helpful image generation systems in open-world settings.
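The layered prompt analysis described above can be sketched as a cascade of detector layers whose findings are aggregated into a single verdict. This is a minimal illustrative sketch, not the paper's implementation: all function names, the blocklist, and the stub heuristics standing in for the semantic, value-alignment, and intent modules are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical blocklist standing in for the lexical NSFW filter layer.
LEXICAL_BLOCKLIST = {"gore", "nudity"}

@dataclass
class Verdict:
    unsafe: bool
    reasons: list

def lexical_check(prompt: str) -> list:
    """Surface-level keyword screen (first detector layer)."""
    tokens = prompt.lower().split()
    return [f"lexical:{t}" for t in tokens if t in LEXICAL_BLOCKLIST]

def semantic_check(prompt: str) -> list:
    """Stub for an embedding-based semantic risk scorer."""
    # A real system would call an NSFW classifier here; we match a phrase.
    return ["semantic:violence"] if "blood-soaked" in prompt.lower() else []

def value_alignment_check(prompt: str) -> list:
    """Stub for the cultural value alignment module (norms, legality)."""
    return ["values:legality"] if "counterfeit" in prompt.lower() else []

def intent_check(prompt: str) -> list:
    """Stub for the intention disambiguator (indirect unsafe asks)."""
    return ["intent:euphemism"] if "you know what i mean" in prompt.lower() else []

def moderate_prompt(prompt: str) -> Verdict:
    """Run every detector layer and aggregate their findings."""
    reasons = (lexical_check(prompt) + semantic_check(prompt)
               + value_alignment_check(prompt) + intent_check(prompt))
    return Verdict(unsafe=bool(reasons), reasons=reasons)

print(moderate_prompt("a blood-soaked battlefield at dawn"))
```

A prompt flagged by any layer would then be routed to the LLM rewriter rather than rejected outright, which is what lets the framework preserve user intent.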


Key Contributions

  • Multi-level NSFW detection pipeline combining lexical filtering, semantic risk analysis, cultural value alignment, and intent disambiguation
  • Zero-shot agentic LLM-based prompt rewriter guided by dynamic role-specific system prompts that preserve user intent while enforcing safety alignment
  • Two-stage safety pipeline: prompt-level moderation followed by image-level safety verification with optional stylistic regeneration
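The two-stage pipeline in the last contribution reduces to a short control flow: moderate the prompt, rewrite it if flagged, generate, then verify the image and optionally regenerate in a safer style. The sketch below only illustrates that control flow; all five callables are hypothetical stand-ins for VALOR's components, not the paper's API.

```python
def safe_generate(prompt, moderate, rewrite, generate, image_safe, restyle):
    """Two-stage VALOR-style pipeline sketch (hypothetical components):
    prompt-level moderation first, image-level verification second."""
    if moderate(prompt):                    # stage 1: unsafe prompt detected
        prompt = rewrite(prompt)            # LLM rewrite preserving intent
    image = generate(prompt)
    if not image_safe(image):               # stage 2: image safety check fails
        image = generate(restyle(prompt))   # optional stylistic regeneration
    return prompt, image

# Toy components that exercise the control flow end to end.
final_prompt, final_image = safe_generate(
    "gory scene",
    moderate=lambda p: "gory" in p,
    rewrite=lambda p: p.replace("gory", "dramatic"),
    generate=lambda p: f"image({p})",
    image_safe=lambda img: "dramatic" in img,
    restyle=lambda p: p + ", watercolor style",
)
print(final_prompt, "->", final_image)
```

Keeping the image-level check after generation is what makes the design robust to prompts that pass text moderation but still yield unsafe visuals.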

🛡️ Threat Analysis


Details

Domains
multimodal · generative · nlp
Model Types
diffusion · llm · vlm
Threat Tags
inference_time · black_box
Applications
text-to-image generation · content moderation · safe image synthesis