Chen Ling

h-index: 0 · 0 citations · 3 papers (total)

Papers in Database (1)

attack · arXiv · Jan 24, 2026

Physical Prompt Injection Attacks on Large Vision-Language Models

Chen Ling, Kai Hu, Hangcheng Liu et al. · Wuhan University · Nanyang Technological University +1 more

Embeds malicious typographic instructions in physical objects to inject prompts into VLMs, achieving up to 98% attack success across 10 models.

Input Manipulation Attack · Prompt Injection · vision · multimodal