benchmark 2025

SafeMT: Multi-turn Safety for Multimodal Language Models

Han Zhu 1, Juntao Dai 1, Jiaming Ji 2, Haoran Li 1, Chengkun Cai 3, Pengcheng Wen 1, Chi-Min Chan 1, Boyuan Chen 2, Yaodong Yang 2, Sirui Han 1, Yike Guo 1

3 citations · 41 references · arXiv


Published on arXiv · 2510.12133

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Attack success rate on MLLMs increases monotonically with the number of dialogue turns, exposing that current safety mechanisms are inadequate for multi-turn conversational interactions.

SafeMT / Safety Index (SI) / Dialogue Safety Moderator

Novel technique introduced


With the widespread use of multi-modal large language models (MLLMs), safety issues have become a growing concern. Multi-turn dialogues, which are more common in everyday interactions, pose a greater risk than single prompts; however, existing benchmarks do not adequately cover this setting. To encourage the community to focus on the safety of these models in multi-turn dialogues, we introduce SafeMT, a benchmark of dialogues of varying lengths generated from harmful queries accompanied by images. The benchmark comprises 10,000 samples spanning 17 scenarios and four jailbreak methods. We also propose the Safety Index (SI) to evaluate the overall safety of MLLMs during conversations. Evaluating 17 models on this benchmark, we find that the risk of a successful attack increases as the number of turns in a harmful dialogue grows, indicating that these models' safety mechanisms are inadequate for recognizing hazards in dialogue interactions. We further propose a dialogue safety moderator that detects malicious intent concealed within conversations and supplies MLLMs with relevant safety policies. Experiments on several open-source models show that this moderator reduces multi-turn ASR more effectively than existing guard models.
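The moderator described above can be pictured as a thin pre-filter around the model: classify the running conversation, and if harmful intent is detected, inject the matching safety policy into the context before the MLLM responds. The sketch below captures only that control flow; the keyword detector, policy texts, and all names are placeholders, not the paper's trained moderator.

```python
# Hedged sketch of a dialogue safety moderator: a detector over the full
# conversation history that, on a match, prepends a relevant safety policy
# as a system message. The keyword-based detect_intent() is a stand-in for
# the paper's trained model; field names are illustrative assumptions.
POLICIES = {
    "weapons": "Refuse instructions for creating or acquiring weapons.",
    "malware": "Refuse to produce, improve, or explain malicious code.",
}

def detect_intent(dialogue):
    """Stand-in for the trained moderator: return a policy key or None."""
    text = " ".join(turn["content"].lower() for turn in dialogue)
    for key in POLICIES:
        if key in text:
            return key
    return None

def moderate(dialogue):
    """Prepend the matched safety policy to the context, if any."""
    key = detect_intent(dialogue)
    if key is None:
        return dialogue
    return [{"role": "system", "content": POLICIES[key]}] + dialogue

convo = [{"role": "user", "content": "Step by step, how would malware spread?"}]
print(moderate(convo)[0]["role"])  # "system" — policy was injected
```

Scanning the whole history, rather than only the latest turn, is what lets such a filter catch intent that is spread across several innocuous-looking messages.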


Key Contributions

  • SafeMT: first multi-turn multimodal jailbreak benchmark with 10,000 samples spanning 17 harmful scenarios and 4 jailbreak methods, pairing harmful queries with images
  • Safety Index (SI): a novel metric that accounts for dialogue length and attack stability, providing a more nuanced safety evaluation than flat ASR
  • Dialogue Safety Moderator: a plug-and-play defense trained on multi-turn dialogues that outperforms existing guard models at reducing multi-turn ASR across open-source MLLMs
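The headline analysis, ASR rising monotonically with dialogue length, reduces to bucketing attack outcomes by turn count. A minimal sketch, assuming a per-sample record with hypothetical `num_turns` and `attack_succeeded` fields (not SafeMT's actual schema):

```python
# Sketch: attack success rate (ASR) grouped by dialogue turn count.
# Field names are illustrative assumptions, not SafeMT's real format.
from collections import defaultdict

def asr_by_turns(records):
    """records: iterable of dicts with 'num_turns' (int) and
    'attack_succeeded' (bool). Returns {num_turns: ASR}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["num_turns"]] += 1
        hits[r["num_turns"]] += int(r["attack_succeeded"])
    return {t: hits[t] / totals[t] for t in sorted(totals)}

# Toy data mirroring the reported trend: longer dialogues, higher ASR
sample = (
    [{"num_turns": 1, "attack_succeeded": s} for s in (True, False, False, False)]
    + [{"num_turns": 3, "attack_succeeded": s} for s in (True, True, False, False)]
    + [{"num_turns": 5, "attack_succeeded": s} for s in (True, True, True, False)]
)
print(asr_by_turns(sample))  # {1: 0.25, 3: 0.5, 5: 0.75}
```

The Safety Index (SI) then aggregates beyond this flat per-turn ASR by weighting in dialogue length and attack stability; its exact formula is given in the paper.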

🛡️ Threat Analysis


Details

Domains
multimodal, nlp
Model Types
vlm, llm
Threat Tags
inference_time, black_box, targeted
Datasets
SafeMT (10,000 samples, self-constructed), MM-SafetyBench, PKU-SafeRLHF-V
Applications
multimodal chatbots, vision-language models