
VII: Visual Instruction Injection for Jailbreaking Image-to-Video Generation Models

Bowen Zheng 1, Yongli Xiang 2, Ziming Hong 2, Zerong Lin 1, Chaojian Yu 1, Tongliang Liu 2, Xinge You 1

3 citations · 62 references · arXiv (Cornell University)

Published on arXiv

2602.20999

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

Achieves up to 83.5% attack success rate with near-zero refusal rates on four state-of-the-art commercial I2V models, outperforming all existing jailbreak baselines.

VII (Visual Instruction Injection)

Novel technique introduced


Image-to-Video (I2V) generation models, which condition video generation on reference images, have shown emerging visual instruction-following capability, allowing certain visual cues in reference images to act as implicit control signals for video generation. However, this capability also introduces a previously overlooked risk: adversaries may exploit visual instructions to inject malicious intent through the image modality. In this work, we uncover this risk by proposing Visual Instruction Injection (VII), a training-free and transferable jailbreaking framework that intentionally disguises the malicious intent of unsafe text prompts as benign visual instructions in the safe reference image. Specifically, VII coordinates a Malicious Intent Reprogramming module to distill malicious intent from unsafe text prompts while minimizing their static harmfulness, and a Visual Instruction Grounding module to ground the distilled intent onto a safe input image by rendering visual instructions that preserve semantic consistency with the original unsafe text prompt, thereby inducing harmful content during I2V generation. Empirically, our extensive experiments on four state-of-the-art commercial I2V models (Kling-v2.5-turbo, Gemini Veo-3.1, Seedance-1.5-pro, and PixVerse-V5) demonstrate that VII achieves Attack Success Rates of up to 83.5% while reducing Refusal Rates to near zero, significantly outperforming existing baselines.


Key Contributions

  • Identifies and exploits a novel attack surface in I2V models: the visual instruction-following capability that allows reference images to serve as implicit control signals for video generation.
  • Proposes VII, a training-free jailbreak framework combining Malicious Intent Reprogramming (MIR, which distills malicious intent from unsafe text into benign synonyms) and Visual Instruction Grounding (VIG, which renders the distilled intent as typographic/symbolic overlays on safe images).
  • Achieves up to 83.5% attack success rate and near-zero refusal rates across four commercial I2V models (Kling-v2.5-turbo, Gemini Veo-3.1, Seedance-1.5-pro, PixVerse-V5), significantly outperforming baselines.
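The core mechanism behind the VIG module is simple in principle: a textual instruction is typographically rendered onto an otherwise benign reference image so that it acts as an implicit control signal during I2V generation. The sketch below illustrates only that rendering step with a harmless placeholder instruction; the paper's actual font, placement, and symbolic-overlay strategy, and the `ground_visual_instruction` helper name, are assumptions, not the authors' implementation.

```python
# Minimal sketch of typographic instruction grounding, assuming Pillow is
# available. Overlays an instruction string on a copy of a reference image.
from PIL import Image, ImageDraw

def ground_visual_instruction(image: Image.Image, instruction: str) -> Image.Image:
    """Render an instruction as a typographic cue on a copy of the image."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    # White caption box near the top-left; a real attack would choose
    # placement and styling that the I2V model treats as a control signal
    # while the image still appears benign to input filters.
    draw.rectangle([8, 8, 8 + 8 * len(instruction), 28], fill="white")
    draw.text((12, 10), instruction, fill="black")  # default bitmap font
    return annotated

# Benign demonstration on a blank canvas (placeholder instruction).
base = Image.new("RGB", (256, 256), "skyblue")
out = ground_visual_instruction(base, "make the kite fly higher")
```

Because the overlay is plain rendered text rather than an adversarial perturbation, the resulting attack is training-free and transfers across models, which matches the black-box setting reported in the paper.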

🛡️ Threat Analysis

Input Manipulation Attack

Crafts adversarial visual inputs (images with typographically rendered malicious instructions and symbols) to manipulate I2V model outputs at inference time. The dual ML01/LLM01 tagging applies because adversarial visual inputs to VLM-based models can both jailbreak the model and act as injected prompts that steer its outputs.


Details

Domains
multimodal · generative · vision
Model Types
vlm · diffusion · multimodal
Threat Tags
black_box · inference_time · targeted · digital
Datasets
COCO-I2V · SafetyBench · ConceptRisk
Applications
image-to-video generation · video generation · commercial generative AI APIs