attack 2026

Physical Prompt Injection Attacks on Large Vision-Language Models

Chen Ling 1, Kai Hu 1, Hangcheng Liu 2, Xingshuo Han 3, Tianwei Zhang 2, Changhai Ou 1

0 citations · 50 references · arXiv

α

Published on arXiv

2601.17383

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PPIA achieves up to 98% attack success rate across 10 state-of-the-art LVLMs using physical typographic prompts, remaining robust under real-world variation in distance, viewpoint, and illumination.

PPIA (Physical Prompt Injection Attack)

Novel technique introduced


Large Vision-Language Models (LVLMs) are increasingly deployed in real-world intelligent systems for perception and reasoning in open physical environments. While LVLMs are known to be vulnerable to prompt injection attacks, existing methods either require access to input channels or depend on knowledge of user queries, assumptions that rarely hold in practical deployments. We propose the first Physical Prompt Injection Attack (PPIA), a black-box, query-agnostic attack that embeds malicious typographic instructions into physical objects perceivable by the LVLM. PPIA requires no access to the model, its inputs, or internal pipeline, and operates solely through visual observation. It combines offline selection of highly recognizable and semantically effective visual prompts with strategic environment-aware placement guided by spatiotemporal attention, ensuring that the injected prompts are both perceivable and influential on model behavior. We evaluate PPIA across 10 state-of-the-art LVLMs in both simulated and real-world settings on tasks including visual question answering, planning, and navigation, PPIA achieves attack success rates up to 98%, with strong robustness under varying physical conditions such as distance, viewpoint, and illumination. Our code is publicly available at https://github.com/2023cghacker/Physical-Prompt-Injection-Attack.


Key Contributions

  • First black-box, query-agnostic Physical Prompt Injection Attack (PPIA) on LVLMs that requires no model access and operates solely through visual observation of the physical environment
  • Offline selection methodology for highly recognizable and semantically effective visual prompts combined with spatiotemporal-attention-guided environment-aware placement strategy
  • Comprehensive evaluation across 10 state-of-the-art LVLMs in simulated and real-world settings covering VQA, planning, and navigation tasks, demonstrating robustness under varying distance, viewpoint, and illumination

🛡️ Threat Analysis

Input Manipulation Attack

The attack operates through the VISUAL input channel of VLMs — strategically crafted physical objects (typographic content placed in the environment) manipulate VLM outputs at inference time, analogous to adversarial content manipulation of LLM-integrated systems via external data injection. The attack vector is visual input crafting designed to influence model behavior.


Details

Domains
visionmultimodal
Model Types
vlm
Threat Tags
black_boxinference_timetargetedphysical
Applications
visual question answeringautonomous navigationembodied agent planning