Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization
Jiwei Guan 1, Haibo Jin 2, Haohan Wang 2
Published on arXiv
2601.01747
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ZO-SPSA achieves 83.0% jailbreak success rate on InstructBLIP and 64.18% adversarial transferability from MiniGPT-4 under fully black-box conditions with no gradient access.
ZO-SPSA
Novel technique introduced
Recent advancements in Large Vision-Language Models (LVLMs) have shown groundbreaking capabilities across diverse multimodal tasks. However, these models remain vulnerable to adversarial jailbreak attacks, where adversaries craft subtle perturbations to bypass safety mechanisms and trigger harmful outputs. Existing white-box attack methods require full model access, incur high computational costs, and exhibit insufficient adversarial transferability, making them impractical for real-world, black-box settings. To address these limitations, we propose a black-box jailbreak attack on LVLMs via Zeroth-Order optimization using Simultaneous Perturbation Stochastic Approximation (ZO-SPSA). ZO-SPSA provides three key advantages: (i) gradient-free approximation through input-output interactions without requiring model knowledge, (ii) model-agnostic optimization without a surrogate model, and (iii) lower resource requirements with reduced GPU memory consumption. We evaluate ZO-SPSA on three LVLMs, including InstructBLIP, LLaVA and MiniGPT-4, achieving the highest jailbreak success rate of 83.0% on InstructBLIP, while maintaining imperceptible perturbations comparable to white-box methods. Moreover, adversarial examples generated from MiniGPT-4 exhibit strong transferability to other LVLMs, with ASR reaching 64.18%. These findings underscore the real-world feasibility of black-box jailbreaks and expose critical weaknesses in the safety mechanisms of current LVLMs.
Key Contributions
- ZO-SPSA: a gradient-free, black-box jailbreak attack for LVLMs using Simultaneous Perturbation Stochastic Approximation that requires no model internals or surrogate model
- Achieves 83.0% ASR on InstructBLIP with imperceptible perturbations comparable to white-box methods, while consuming significantly less GPU memory
- Demonstrates strong adversarial transferability (64.18% ASR) from MiniGPT-4 to other LVLMs, unlike prior white-box approaches that fail to transfer
🛡️ Threat Analysis
ZO-SPSA crafts adversarial visual perturbations on images using zeroth-order optimization to cause LVLMs to produce harmful outputs at inference time — a direct input manipulation attack on the visual modality.
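The core idea behind this attack class can be sketched with a generic SPSA gradient estimator: query the target model's loss at two symmetrically perturbed inputs and use the difference to approximate the gradient, with no access to model internals. The sketch below is illustrative, not the paper's implementation; `loss_fn`, the sample count, and the L∞ step-size/budget values are all assumptions chosen for clarity.

```python
import numpy as np

def spsa_gradient(loss_fn, x, c=0.01, num_samples=8):
    """Zeroth-order gradient estimate of loss_fn at x via SPSA.

    Each sample draws a Rademacher (+/-1) direction delta and uses the
    two-sided finite difference along it; only 2*num_samples queries
    to the black-box loss_fn are needed, regardless of x's dimension.
    """
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        delta = np.random.choice([-1.0, 1.0], size=x.shape)
        diff = loss_fn(x + c * delta) - loss_fn(x - c * delta)
        # For Rademacher perturbations, 1/delta == delta elementwise.
        grad += (diff / (2.0 * c)) * delta
    return grad / num_samples

def attack_step(loss_fn, x_adv, x_orig, lr=1.0 / 255, eps=8.0 / 255):
    """One projected ascent step on the adversarial image (hypothetical
    hyperparameters): maximize loss_fn while keeping the perturbation
    within an imperceptible L-infinity ball of radius eps."""
    g = spsa_gradient(loss_fn, x_adv)
    x_adv = x_adv + lr * np.sign(g)                      # ascent step
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)   # project to eps-ball
    return np.clip(x_adv, 0.0, 1.0)                      # valid pixel range
```

In an actual attack, `loss_fn` would wrap a query to the target LVLM (e.g., the likelihood of a harmful target response given the perturbed image), which is the only interface a fully black-box adversary has.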