Lingzhi Yuan

Papers in Database (1)

defense · arXiv · Jan 7, 2025

PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models

Lingzhi Yuan, Xinfeng Li, Chejian Xu et al. · University of Maryland · Nanyang Technological University +2 more

Defends text-to-image models against NSFW prompt misuse via optimized safety soft prompts mimicking LLM system prompts

Prompt Injection · vision · generative