Jiacheng Hou

Papers in Database (1)

attack · arXiv · Feb 10, 2026

When the Prompt Becomes Visual: Vision-Centric Jailbreak Attacks for Large Image Editing Models

Jiacheng Hou, Yining Sun, Ruochong Jin et al. · Tsinghua University · Peng Cheng Laboratory +1 more

A visual-only jailbreak attack on image-editing VLMs encodes malicious instructions as visual marks and arrows, achieving an 80.9% attack success rate on commercial models.

Prompt Injection · vision · multimodal · generative