Adversarial Prompt Injection Attack on Multimodal Large Language Models

Meiwen Ding, Song Xia, Chenqi Kong, Xudong Jiang


Published on arXiv (arXiv:2603.29418)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Demonstrates higher attack success rates against multiple closed-source MLLMs than existing prompt injection methods

Adversarial Visual Prompt Injection

Novel technique introduced


Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing prompt injection methods predominantly rely on textual prompts or perceptible visual prompts that are observable by human users. In this work, we study imperceptible visual prompt injection against powerful closed-source MLLMs, where adversarial instructions are embedded in the visual modality. Our method adaptively embeds the malicious prompt into the input image via a bounded text overlay to provide semantic guidance. Meanwhile, the imperceptible visual perturbation is iteratively optimized to align the feature representations of the attacked image with those of the malicious visual and textual targets at both coarse- and fine-grained levels. Specifically, the visual target is instantiated as a text-rendered image and progressively refined during optimization to more faithfully represent the desired semantics and improve transferability. Extensive experiments on two multimodal understanding tasks across multiple closed-source MLLMs demonstrate the superior performance of our approach compared to existing methods.
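The iterative optimization the abstract describes can be sketched as a PGD-style search for an L∞-bounded pixel perturbation that pushes the image's features toward both a malicious visual target and a malicious textual target. The toy linear "encoder", feature dimensions, step size, and perturbation budget below are all illustrative stand-ins, not the paper's actual models or hyperparameters, and only the coarse (global) alignment level is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an MLLM vision encoder: a fixed random linear map.
# (The paper attacks real encoders; this is purely illustrative.)
D_PIX, D_FEAT = 64, 16
W = rng.standard_normal((D_FEAT, D_PIX)) / np.sqrt(D_PIX)

def encode(x):
    return W @ x

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cos_grad_wrt_x(x, t):
    """Analytic gradient of cos(encode(x), t) with respect to the pixels x."""
    f = encode(x)
    nf, nt = np.linalg.norm(f), np.linalg.norm(t)
    g_f = t / (nf * nt) - (f @ t) * f / (nf**3 * nt)
    return W.T @ g_f

x_clean = rng.uniform(0.0, 1.0, D_PIX)          # benign image, flattened to [0,1] pixels
t_vis = encode(rng.uniform(0.0, 1.0, D_PIX))    # feature of a text-rendered target image
t_txt = rng.standard_normal(D_FEAT)             # malicious textual target feature

EPS, STEP, ITERS = 8 / 255, 1 / 255, 200        # illustrative L_inf budget and step size
delta = np.zeros(D_PIX)
for _ in range(ITERS):
    x_adv = np.clip(x_clean + delta, 0.0, 1.0)
    # Ascend on alignment with both targets (coarse level only in this toy).
    g = cos_grad_wrt_x(x_adv, t_vis) + cos_grad_wrt_x(x_adv, t_txt)
    delta = np.clip(delta + STEP * np.sign(g), -EPS, EPS)  # project back into the L_inf ball

x_adv = np.clip(x_clean + delta, 0.0, 1.0)
```

In the paper's full method the visual target itself is also refined during optimization and alignment is enforced at both coarse and fine granularity; the sketch keeps the targets fixed for brevity.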


Key Contributions

  • Novel imperceptible visual prompt injection method using adaptive text overlay and iterative perturbation optimization
  • Dual-level feature alignment (coarse and fine-grained) between attacked image and malicious visual/textual targets
  • Progressive refinement of text-rendered visual targets during optimization to improve semantic fidelity and transferability

🛡️ Threat Analysis

Input Manipulation Attack

Uses gradient-based visual perturbations to craft adversarial images that manipulate MLLM behavior at inference time; this is adversarial example generation via imperceptible image perturbations.


Details

Domains
multimodal, vision, nlp
Model Types
vlm, multimodal, transformer
Threat Tags
black_box, inference_time, targeted, digital
Applications
multimodal understanding, vision-language models