Defense-to-Attack: Bypassing Weak Defenses Enables Stronger Jailbreaks in Vision-Language Models
Yunhan Zhao, Xiang Zheng, Xingjun Ma
Published on arXiv (arXiv:2509.12724)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Defense2Attack achieves approximately 80% attack success rate on open-source VLMs and 50% on commercial VLMs in a single attempt, outperforming state-of-the-art methods that require multiple tries
Defense2Attack
Novel technique introduced
Despite their superb capabilities, Vision-Language Models (VLMs) have been shown to be vulnerable to jailbreak attacks. While recent jailbreaks have achieved notable progress, their effectiveness and efficiency can still be improved. In this work, we reveal an interesting phenomenon: incorporating a weak defense into the attack pipeline can significantly enhance both the effectiveness and the efficiency of jailbreaks on VLMs. Building on this insight, we propose Defense2Attack, a novel jailbreak method that bypasses the safety guardrails of VLMs by leveraging defensive patterns to guide jailbreak prompt design. Specifically, Defense2Attack consists of three key components: (1) a visual optimizer that embeds universal adversarial perturbations with affirmative and encouraging semantics; (2) a textual optimizer that refines the input using a defense-styled prompt; and (3) a red-team suffix generator that enhances the jailbreak through reinforcement fine-tuning. We empirically evaluate our method on four VLMs and four safety benchmarks. The results demonstrate that Defense2Attack achieves superior jailbreak performance in a single attempt, outperforming state-of-the-art attack methods that often require multiple tries. Our work offers a new perspective on jailbreaking VLMs.
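The three-component structure described in the abstract can be sketched as a simple composition. This is a minimal illustration of the pipeline's data flow only, not the authors' implementation: `visual_opt`, `textual_opt`, and `suffix_gen` are hypothetical callables standing in for the visual optimizer, the defense-styled textual optimizer, and the RL fine-tuned suffix generator, respectively.

```python
def defense2attack(image, harmful_query, visual_opt, textual_opt, suffix_gen):
    """Hypothetical sketch of the Defense2Attack composition.

    All three callables are assumed interfaces for illustration;
    the paper does not specify these signatures.
    """
    # (1) Visual optimizer: embed a universal adversarial perturbation
    #     carrying affirmative/encouraging semantics into the image.
    adv_image = visual_opt(image)
    # (2) Textual optimizer: rewrite the query in a defense-styled frame.
    defended_prompt = textual_opt(harmful_query)
    # (3) Red-team suffix generator (reinforcement fine-tuned in the paper)
    #     appends a jailbreak-enhancing suffix.
    suffix = suffix_gen(defended_prompt)
    return adv_image, f"{defended_prompt} {suffix}"
```

The key design point the paper highlights is stage (2): wrapping the harmful query in defense-styled language, which the authors find paradoxically strengthens the attack.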
Key Contributions
- Discovers that integrating weak defenses into adversarial prompt pipelines significantly boosts jailbreak effectiveness and efficiency on VLMs
- Proposes Defense2Attack, a bimodal jailbreak combining a visual adversarial perturbation optimizer, a defense-styled textual optimizer, and an RL fine-tuned red-team suffix generator
- Achieves ~80% attack success rate on open-source VLMs and ~50% on commercial VLMs in a single attempt, outperforming prior multi-query methods
🛡️ Threat Analysis
Visual optimizer embeds universal adversarial perturbations (gradient-based) into images specifically to manipulate VLM outputs — classic adversarial input manipulation at inference time targeting a multimodal model.
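A universal adversarial perturbation of this kind is typically found by signed-gradient ascent over a batch of images, with the perturbation clipped to a small L-infinity ball. The sketch below shows that generic PGD-style loop under stated assumptions: `grad_fn` is a hypothetical callable returning the gradient of the attacker's objective with respect to the input image, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative, not the paper's values.

```python
import numpy as np

def universal_perturbation(images, grad_fn, eps=8 / 255, alpha=1 / 255, steps=10):
    """Generic sketch of a universal adversarial perturbation search.

    Accumulates a single perturbation `delta` shared across all images
    by signed-gradient ascent, projecting back into the L-infinity ball
    of radius `eps` after each update. `grad_fn` is an assumed interface:
    it maps a perturbed image (values in [0, 1]) to the gradient of the
    attack objective w.r.t. that image.
    """
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        for img in images:
            # Evaluate the gradient at the current perturbed image.
            g = grad_fn(np.clip(img + delta, 0.0, 1.0))
            # Signed-gradient step, then project onto the eps-ball.
            delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return delta
```

Because the same `delta` is updated across every image, the result is image-agnostic: one perturbation that can be pasted onto arbitrary inputs at inference time, which is what makes this an input-manipulation threat rather than a per-sample attack.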