Attack · 2026

Image-based Prompt Injection: Hijacking Multimodal LLMs through Visually Embedded Adversarial Instructions

Neha Nagaraja¹, Lan Zhang¹, Zhilong Wang², Bo Zhang², Pawan Patil²

0 citations · FLLM


Published on arXiv · 2603.03637

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Most effective IPI configuration achieves 64% attack success rate against GPT-4-turbo under stealth constraints in a black-box setting.

IPI (Image-based Prompt Injection)

Novel technique introduced


Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We study Image-based Prompt Injection (IPI), a black-box attack in which adversarial instructions are embedded into natural images to override model behavior. Our end-to-end IPI pipeline incorporates segmentation-based region selection, adaptive font scaling, and background-aware rendering to conceal prompts from human perception while preserving model interpretability. Using the COCO dataset and GPT-4-turbo, we evaluate 12 adversarial prompt strategies and multiple embedding configurations. The results show that IPI can reliably manipulate the output of the model, with the most effective configuration achieving up to 64% attack success under stealth constraints. These findings highlight IPI as a practical threat in black-box settings and underscore the need for defenses against multimodal prompt injection.
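The evaluation metric in the abstract, attack success rate (ASR), is the fraction of model responses that follow the injected instruction rather than the user's request. A minimal sketch, assuming a simple marker-string check as the success criterion (the paper's actual judging procedure is not described here):

```python
def attack_success_rate(responses, success_marker):
    """ASR = fraction of model responses that follow the injected
    instruction, detected here by a case-insensitive marker-string
    check (an illustrative stand-in for a real success judge)."""
    hits = sum(1 for r in responses if success_marker.lower() in r.lower())
    return hits / len(responses)

# Hypothetical usage: three model outputs, two of which obeyed the
# injected "say PWNED" instruction -> ASR of 2/3.
asr = attack_success_rate(
    ["PWNED ok", "a normal caption of the image", "pwned!"],
    "PWNED",
)
```

In practice, a keyword check only works for injected instructions with a fixed target string; open-ended behavioral hijacks would need an LLM- or human-based judge.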


Key Contributions

  • End-to-end IPI pipeline combining segmentation-based region selection, adaptive font scaling, and background-aware rendering to conceal injected instructions from human observers while preserving model readability
  • Systematic evaluation of 12 adversarial prompt strategies and multiple embedding configurations against GPT-4-turbo on the COCO dataset
  • Demonstration of up to 64% attack success rate under stealth constraints in a black-box setting, establishing IPI as a practical threat
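The first pipeline stage, region selection, can be sketched in miniature. The paper uses segmentation; the version below substitutes a variance proxy (pick the visually "quietest" patch of a grayscale grid, where injected text is least conspicuous), so it is an assumption-laden illustration rather than the authors' method:

```python
def region_variance(img, x, y, w, h):
    """Pixel variance of the w*h patch at (x, y) in a 2-D grayscale grid."""
    patch = [img[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def select_region(img, w, h, stride=1):
    """Proxy for segmentation-based region selection: slide a w*h
    window over the image and return the top-left (x, y) of the
    lowest-variance (most uniform) patch."""
    rows, cols = len(img), len(img[0])
    best_xy, best_var = None, float("inf")
    for y in range(0, rows - h + 1, stride):
        for x in range(0, cols - w + 1, stride):
            v = region_variance(img, x, y, w, h)
            if v < best_var:
                best_xy, best_var = (x, y), v
    return best_xy
```

A real implementation would run a segmentation model to find semantically unimportant regions; low variance is just a cheap stand-in that captures the same intuition.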

🛡️ Threat Analysis

Input Manipulation Attack

Images are strategically crafted at inference time (via segmentation-based region selection, adaptive font scaling, and background-aware rendering) to manipulate VLM outputs. This fits the adversarial content-manipulation pattern for LLM-integrated systems, in which inputs are engineered to override model behavior.
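The background-aware rendering step can also be sketched: choose a text color close to the local background so the prompt is low-contrast for humans while (ideally) still resolvable by the model's vision encoder. The `delta` offset below is an illustrative stealth knob, not a value from the paper:

```python
def stealth_fill(patch, delta=12):
    """Background-aware rendering: pick a grayscale text color only
    `delta` levels away from the local background mean, stepping down
    instead of up when the background is too bright to go higher.
    (`delta` is an assumed illustrative parameter, not a paper value.)"""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    fill = mean + delta if mean <= 255 - delta else mean - delta
    return int(round(fill))
```

In the full attack this fill color would be paired with adaptive font scaling, shrinking or growing the rendered instruction to fit the selected region while staying legible to the model.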


Details

Domains
vision · nlp · multimodal
Model Types
vlm · llm · multimodal
Threat Tags
black_box · inference_time · targeted · digital
Datasets
COCO
Applications
multimodal AI assistants · vision-language models