
Metaphor-based Jailbreaking Attacks on Text-to-Image Models

Chenyu Zhang 1, Yiwen Ma 1, Lanjun Wang 1, Wenhui Li 1, Yi Tu 1,2, An-An Liu 1

1 citation · 68 references · arXiv


Published on arXiv: 2512.10766

Prompt Injection

OWASP LLM Top 10: LLM01

Key Finding

MJA outperforms six baseline jailbreaking methods against diverse T2I defense mechanisms (both external and internal) while using fewer queries.

MJA (Metaphor-based Jailbreaking Attack)

Novel technique introduced


Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attacks have shown that adversarial prompts can effectively bypass these mechanisms and induce T2I models to produce sensitive content, revealing critical safety vulnerabilities. However, existing attack methods implicitly assume that the attacker knows the type of deployed defenses, which limits their effectiveness against unknown or diverse defense mechanisms. In this work, we introduce MJA, a metaphor-based jailbreaking attack method inspired by the Taboo game, which aims to attack diverse defense mechanisms effectively and efficiently, without prior knowledge of their type, by generating metaphor-based adversarial prompts. Specifically, MJA consists of two modules: an LLM-based multi-agent generation module (MLAG) and an adversarial prompt optimization module (APO). MLAG decomposes the generation of metaphor-based adversarial prompts into three subtasks: metaphor retrieval, context matching, and adversarial prompt generation. MLAG then coordinates three LLM-based agents to generate diverse adversarial prompts by exploring various metaphors and contexts. To enhance attack efficiency, APO first trains a surrogate model to predict the attack results of adversarial prompts and then applies an acquisition strategy to adaptively identify optimal adversarial prompts. Extensive experiments on T2I models with various external and internal defense mechanisms demonstrate that MJA outperforms six baseline methods, achieving stronger attack performance while using fewer queries. Code is available at https://github.com/datar001/metaphor-based-jailbreaking-attack.
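The three-subtask decomposition described above can be sketched as a simple agent pipeline. This is an illustrative sketch only: the agent instructions are invented, and the `llm` function is a stand-in stub, not the paper's actual prompts or backend.

```python
# Hypothetical sketch of MLAG's three-stage decomposition.
# The agent instructions and the LLM backend are placeholders.

def llm(instruction: str, payload: str) -> str:
    """Stub standing in for a real LLM call; echoes its inputs."""
    return f"[{instruction}] {payload}"

def metaphor_retrieval(sensitive_concept: str) -> str:
    # Agent 1: retrieve a benign metaphor for the sensitive concept.
    return llm("retrieve a metaphor for", sensitive_concept)

def context_matching(metaphor: str) -> str:
    # Agent 2: match a scene/context in which the metaphor reads naturally.
    return llm("match a context to", metaphor)

def adversarial_prompt_generation(metaphor: str, context: str) -> str:
    # Agent 3: compose metaphor and context into a T2I prompt.
    return llm("compose a prompt from", f"{metaphor} | {context}")

def mlag(sensitive_concept: str) -> str:
    # Coordinate the three agents in sequence.
    m = metaphor_retrieval(sensitive_concept)
    c = context_matching(m)
    return adversarial_prompt_generation(m, c)
```

Running the pipeline over many sampled metaphors and contexts would yield the diverse candidate prompt pool that APO then searches.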


Key Contributions

  • MJA: a metaphor-based jailbreaking attack for T2I models that requires no prior knowledge of the deployed defense mechanism type
  • MLAG: an LLM multi-agent module that decomposes adversarial prompt generation into metaphor retrieval, context matching, and prompt generation subtasks
  • APO: an adversarial prompt optimization module using a surrogate model and acquisition strategy to reduce query count while maximizing attack success

🛡️ Threat Analysis


Details

Domains
vision, nlp, multimodal, generative
Model Types
diffusion, llm, multimodal
Threat Tags
black_box, inference_time
Applications
text-to-image generation, content safety bypass, generative model safety evaluation