Wei Dong

Papers in Database (2)

defense · arXiv · Jan 7, 2025

PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models

Lingzhi Yuan, Xinfeng Li, Chejian Xu et al. · University of Maryland · Nanyang Technological University +2 more

Defends text-to-image models against NSFW prompt misuse via optimized safety soft prompts mimicking LLM system prompts

Prompt Injection · vision · generative
defense · arXiv · Aug 28, 2025

Token Buncher: Shielding LLMs from Harmful Reinforcement Learning Fine-Tuning

Weitao Feng, Lixu Wang, Tianyi Wei et al. · Nanyang Technological University · A*STAR +1 more

Defends LLM safety alignment against harmful RL fine-tuning attacks with TokenBuncher, which suppresses response entropy

Transfer Learning Attack · Prompt Injection · nlp · reinforcement-learning