attack · 2025

VisualDAN: Exposing Vulnerabilities in VLMs with Visual-Driven DAN Commands

Aofan Liu 1,2, Lulu Tang 1

0 citations · 48 references · arXiv


Published on arXiv (2510.09699)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A single adversarially optimized image with embedded DAN commands successfully jailbreaks safety-aligned VLMs (MiniGPT-4, MiniGPT-v2, InstructBLIP, LLaVA), forcing them to produce harmful outputs across a broad range of malicious instructions.

VisualDAN

Novel technique introduced


Vision-Language Models (VLMs) have garnered significant attention for their remarkable ability to interpret and generate multimodal content. However, securing these models against jailbreak attacks continues to be a substantial challenge. Unlike text-only models, VLMs integrate additional modalities, introducing novel vulnerabilities such as image hijacking, which can manipulate the model into producing inappropriate or harmful responses. Drawing inspiration from text-based jailbreaks like the "Do Anything Now" (DAN) command, this work introduces VisualDAN, a single adversarial image embedded with DAN-style commands. Specifically, we prepend harmful corpora with affirmative prefixes (e.g., "Sure, I can provide the guidance you need") to trick the model into responding positively to malicious queries. The adversarial image is then trained on these DAN-inspired harmful texts and transformed into the text domain to elicit malicious outputs. Extensive experiments on models such as MiniGPT-4, MiniGPT-v2, InstructBLIP, and LLaVA reveal that VisualDAN effectively bypasses the safeguards of aligned VLMs, forcing them to execute a broad range of harmful instructions that severely violate ethical standards. Our results further demonstrate that even a small amount of toxic content can significantly amplify harmful outputs once the model's defenses are compromised. These findings highlight the urgent need for robust defenses against image-based attacks and offer critical insights for future research into the alignment and security of VLMs.


Key Contributions

  • VisualDAN: a single adversarial image optimized with DAN-style affirmative prefixes (e.g., "Sure, I can provide…") that induces jailbreak behavior across diverse VLMs
  • Demonstrates that image hijacking via visual adversarial inputs can broadly bypass ethical safeguards in aligned VLMs including MiniGPT-4, MiniGPT-v2, InstructBLIP, and LLaVA
  • Shows that even minimal toxic content injected via the visual channel can substantially amplify harmful outputs once a VLM's defenses are compromised

🛡️ Threat Analysis

Input Manipulation Attack

VisualDAN crafts a single adversarial image via gradient-based optimization (trained on harmful texts) to manipulate VLM outputs — a direct adversarial visual input attack at inference time.
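The loop below is a minimal, runnable sketch of this kind of gradient-based image optimization (a PGD-style signed-gradient attack). It is not the paper's code: `ToyVLM` is a hypothetical stand-in for a frozen VLM's image-to-logits pathway, and a single target token stands in for the affirmative-prefix target text whose likelihood the attack maximizes.

```python
import torch
import torch.nn.functional as F

class ToyVLM(torch.nn.Module):
    """Illustrative stand-in for a frozen VLM: maps an image tensor to
    next-token logits. A real attack would use the actual model's image
    encoder and language head."""
    def __init__(self, vocab_size=32, img_dim=3 * 8 * 8):
        super().__init__()
        self.proj = torch.nn.Linear(img_dim, vocab_size)

    def forward(self, image):
        return self.proj(image.flatten(1))  # (batch, vocab_size)

def optimize_adversarial_image(model, target_token, steps=200, lr=0.05, eps=0.5):
    """Optimize an image so the frozen model assigns high probability to
    `target_token` (standing in for the affirmative-prefix target text)."""
    for p in model.parameters():          # freeze the model; only the image is updated
        p.requires_grad_(False)
    image = torch.zeros(1, 3, 8, 8, requires_grad=True)
    base = image.detach().clone()
    for _ in range(steps):
        loss = F.cross_entropy(model(image), torch.tensor([target_token]))
        loss.backward()
        with torch.no_grad():
            image -= lr * image.grad.sign()       # signed-gradient step toward the target
            image.clamp_(base - eps, base + eps)  # keep the perturbation L-infinity bounded
            image.grad.zero_()
    return image.detach()

torch.manual_seed(0)
model = ToyVLM()
adv = optimize_adversarial_image(model, target_token=7)
print(model(adv).argmax(dim=-1).item())  # token the optimized image now elicits
```

The same structure scales to the paper's setting by replacing the toy model with a frozen VLM and the single-token loss with the log-likelihood of a full DAN-prefixed target text.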


Details

Domains
vision, nlp, multimodal
Model Types
vlm, multimodal
Threat Tags
white_box, inference_time, targeted, digital
Applications
vision-language models, multimodal ai assistants