
Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection

Zedian Shao¹, Hongbin Liu², Yuepeng Hu², Neil Zhenqiang Gong²

0 citations · The 64th Annual Meeting of the...


Published on arXiv

2604.09024

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ImageProtector consistently induces refusal responses across 6 MLLMs when they analyze protected images; the evaluated countermeasures partially mitigate it but degrade model accuracy and/or efficiency.

ImageProtector

Novel technique introduced


Multi-modal large language models (MLLMs) have emerged as powerful tools for analyzing Internet-scale image data, offering significant benefits but also raising critical safety and societal concerns. In particular, open-weight MLLMs may be misused to extract sensitive information from personal images at scale, such as identities, locations, or other private details. In this work, we propose ImageProtector, a user-side method that proactively protects images before sharing by embedding a carefully crafted, nearly imperceptible perturbation that acts as a visual prompt injection attack on MLLMs. As a result, when an adversary analyzes a protected image with an MLLM, the MLLM is consistently induced to generate a refusal response such as "I'm sorry, I can't help with that request." We empirically demonstrate the effectiveness of ImageProtector across six MLLMs and four datasets. Additionally, we evaluate three potential countermeasures, Gaussian noise, DiffPure, and adversarial training, and show that while they partially mitigate the impact of ImageProtector, they simultaneously degrade model accuracy and/or efficiency. Our study focuses on the practically important setting of open-weight MLLMs and large-scale automated image analysis, and highlights both the promise and the limitations of perturbation-based privacy protection.
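The perturbation described in the abstract can be sketched as projected-gradient ascent on a refusal objective under an L∞ imperceptibility budget. The snippet below is a minimal, hedged sketch, not the paper's implementation: the linear probe `w` stands in for the (non-public) gradient of the MLLM's log-likelihood of the refusal string, and the names `pgd_perturb`, `grad_fn`, `eps`, and `alpha` are illustrative.

```python
import numpy as np

def pgd_perturb(image, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected-gradient ascent on a refusal score within an L-inf ball.

    image:   pixel array in [0, 1]
    grad_fn: gradient of the refusal objective w.r.t. the pixels
             (hypothetical stand-in for backprop through an MLLM)
    """
    x = image.copy()
    for _ in range(steps):
        g = grad_fn(x)                             # ascend toward refusal
        x = x + alpha * np.sign(g)
        x = np.clip(x, image - eps, image + eps)   # keep change imperceptible
        x = np.clip(x, 0.0, 1.0)                   # stay a valid image
    return x

# Toy surrogate: a fixed linear probe w . x scores "refusal-ness".
rng = np.random.default_rng(0)
w = rng.standard_normal(3 * 8 * 8)
image = rng.random(3 * 8 * 8)
protected = pgd_perturb(image, grad_fn=lambda x: w)
```

Against a real open-weight MLLM the gradient would come from backpropagating the loss of a target refusal string through the vision encoder, but the projection and clipping steps are the same.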


Key Contributions

  • ImageProtector method that embeds imperceptible adversarial perturbations in images to induce MLLM refusal responses
  • Evaluation across 6 MLLMs and 4 datasets showing consistent privacy protection
  • Analysis of three countermeasures (Gaussian noise, DiffPure, adversarial training) showing trade-offs between mitigation and model performance
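The simplest of the three countermeasures above, Gaussian-noise purification, can be sketched in a few lines; `sigma` and the clipping range here are illustrative choices, not the paper's settings.

```python
import numpy as np

def gaussian_purify(image, sigma=0.05, seed=None):
    """Countermeasure sketch: add i.i.d. Gaussian noise to wash out a
    crafted perturbation, then clip back to the valid pixel range.
    A larger sigma disrupts the injection more aggressively but also
    degrades the image the MLLM sees -- the accuracy trade-off the
    paper reports for all three countermeasures."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)
```

DiffPure replaces the additive noise with a diffusion-model denoising pass, and adversarial training moves the cost from inference time to training time; both follow the same pattern of trading clean-input performance for robustness.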

🛡️ Threat Analysis

Input Manipulation Attack

Creates adversarial visual perturbations that manipulate MLLM behavior at inference time, causing misclassification/refusal responses.


Details

Domains
vision, multimodal
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, digital, black_box
Datasets
Four datasets (specific names not in abstract)
Applications
privacy protection, personal image analysis, vision-language models