
Extended to Reality: Prompt Injection in 3D Environments

Zhuoheng Li , Ying Chen

0 citations · 52 references · arXiv (Cornell University)


Published on arXiv · 2602.07104

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

PI3D successfully causes multiple MLLMs to follow injected instructions from physically placed text-bearing objects across diverse camera trajectories, and existing prompt injection defenses are insufficient to mitigate the attack.

PI3D

Novel technique introduced


Multimodal large language models (MLLMs) have advanced the ability to interpret and act on visual input in 3D environments, empowering diverse applications such as robotics and situated conversational agents. When MLLMs reason over camera-captured views of the physical world, a new attack surface emerges: an attacker can place text-bearing physical objects in the environment to override the MLLM's intended task. While prior work has studied prompt injection in the text domain and through digitally edited 2D images, it remains unclear how these attacks function in 3D physical environments. To bridge this gap, we introduce PI3D, a prompt injection attack against MLLMs in 3D environments, realized through text-bearing physical object placement rather than digital image edits. We formulate and solve the problem of identifying an effective 3D object pose (position and orientation) for the injected text, where the attacker's goal is to induce the MLLM to perform the injected task while ensuring that the object placement remains physically plausible. Experiments demonstrate that PI3D is an effective attack against multiple MLLMs under diverse camera trajectories. We further evaluate existing defenses and show that they are insufficient to defend against PI3D.
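The pose-selection problem the abstract formulates can be sketched as a constrained search: maximize how often the injected instruction is followed across a camera trajectory, subject to a physical-plausibility constraint on the placement. The sketch below uses hypothetical names (`query_mllm`, the table-surface constraint, random search) that are illustrative assumptions, not the paper's actual method or API.

```python
import random
from dataclasses import dataclass

@dataclass
class Pose:
    """Candidate placement: position (x, y, z) and yaw orientation in degrees."""
    x: float
    y: float
    z: float
    yaw: float

def is_physically_plausible(pose):
    # Hypothetical constraint: the text-bearing object must rest on a table
    # surface spanning x, y in [0, 1] at height z = 0.75 (it cannot float).
    return 0.0 <= pose.x <= 1.0 and 0.0 <= pose.y <= 1.0 and abs(pose.z - 0.75) < 1e-6

def attack_success_rate(pose, camera_trajectory, query_mllm):
    """Fraction of camera views from which the injected instruction is followed.

    `query_mllm(pose, view)` is an assumed oracle returning True when the MLLM,
    shown the view rendered with the object at `pose`, performs the injected task.
    """
    hits = sum(1 for view in camera_trajectory if query_mllm(pose, view))
    return hits / len(camera_trajectory)

def search_pose(camera_trajectory, query_mllm, n_candidates=50, seed=0):
    """Random search over plausible poses, keeping the highest-scoring one."""
    rng = random.Random(seed)
    best_pose, best_score = None, -1.0
    for _ in range(n_candidates):
        pose = Pose(rng.random(), rng.random(), 0.75, rng.uniform(0, 360))
        if not is_physically_plausible(pose):
            continue
        score = attack_success_rate(pose, camera_trajectory, query_mllm)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

Random search here is a stand-in; the point is the objective shape (success rate over views) and the hard plausibility constraint, both of which the paper's planner would handle more efficiently.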


Key Contributions

  • PI3D: a prompt injection attack realized through physical text-bearing object placement in 3D environments rather than digital image editing
  • Experience-guided planner that leverages a memory of past placements to efficiently identify effective 3D object poses (position and orientation) for injected text
  • Evaluation in virtual and real-world 3D environments across diverse camera trajectories, demonstrating insufficiency of existing defenses
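The experience-guided planner in the second contribution can be sketched as a propose-score-remember loop that biases new candidates toward past placements that scored well. Everything here (the class name, the `(x, y, yaw)` pose tuple, Gaussian perturbation, the `score_fn` callback) is an illustrative assumption, not the paper's implementation.

```python
import random

class ExperienceGuidedPlanner:
    """Sketch of a memory-guided pose planner.

    score_fn(pose) is assumed to return the fraction of camera views from
    which the injected text succeeded; poses are (x, y, yaw) tuples.
    """

    def __init__(self, score_fn, sigma=0.1, seed=0):
        self.score_fn = score_fn
        self.sigma = sigma    # perturbation scale around remembered poses
        self.memory = []      # (score, pose) records of past placements
        self.rng = random.Random(seed)

    def propose(self):
        if not self.memory:
            # No experience yet: sample a pose uniformly at random.
            return (self.rng.random(), self.rng.random(), self.rng.uniform(0, 360))
        # Exploit experience: perturb the best placement seen so far.
        _, (x, y, yaw) = max(self.memory, key=lambda e: e[0])
        g = self.rng.gauss
        return (x + g(0, self.sigma), y + g(0, self.sigma), yaw + g(0, 15))

    def step(self):
        """Propose one placement, score it, and record it in memory."""
        pose = self.propose()
        score = self.score_fn(pose)
        self.memory.append((score, pose))
        return pose, score

    def best(self):
        return max(self.memory, key=lambda e: e[0])
```

The design choice worth noting is the memory: instead of treating every trial independently, each new proposal reuses what earlier placements revealed about the scene, which is what makes the search sample-efficient when each score requires querying an MLLM.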

🛡️ Threat Analysis

Input Manipulation Attack

The attack delivers adversarial content via physical objects captured through camera views: strategically crafted visual inputs that manipulate model outputs. An experience-guided planner optimizes the 3D object pose so the adversarial text remains effective across viewpoints, constituting adversarial visual-input design targeting MLLM inference.


Details

Domains
vision, multimodal
Model Types
vlm, multimodal
Threat Tags
black_box, physical, digital, inference_time, targeted
Datasets
custom virtual 3D environments, real-world 3D scenes
Applications
robotics, situated conversational agents, extended reality (XR) interfaces, 3D scene understanding