MTMCS-Bench: Evaluating Contextual Safety of Multimodal Large Language Models in Multi-Turn Dialogues

Zheyuan Liu 1, Dongwhi Kim 1, Yixin Wan 2, Xiangchi Yuan 3, Zhaoxuan Tan 1, Fengran Mo 4, Meng Jiang 1

0 citations · 67 references · arXiv

Published on arXiv

2601.06757

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Across 15 MLLMs, models persistently trade contextual safety for utility — missing gradual harmful intent or over-refusing benign queries — while five evaluated guardrails only partially mitigate multi-turn contextual risks.

MTMCS-Bench

Novel technique introduced


Multimodal large language models (MLLMs) are increasingly deployed as assistants that interact through text and images, making it crucial to evaluate contextual safety when risk depends on both the visual scene and the evolving dialogue. Existing contextual safety benchmarks are mostly single-turn and often miss how malicious intent can emerge gradually or how the same scene can support both benign and exploitative goals. We introduce the Multi-Turn Multimodal Contextual Safety Benchmark (MTMCS-Bench), a benchmark of realistic images and multi-turn conversations that evaluates contextual safety in MLLMs under two complementary settings, escalation-based risk and context-switch risk. MTMCS-Bench offers paired safe and unsafe dialogues with structured evaluation. It contains over 30 thousand multimodal (image+text) and unimodal (text-only) samples, with metrics that separately measure contextual intent recognition, safety-awareness on unsafe cases, and helpfulness on benign ones. Across eight open-source and seven proprietary MLLMs, we observe persistent trade-offs between contextual safety and utility, with models tending to either miss gradual risks or over-refuse benign dialogues. Finally, we evaluate five current guardrails and find that they mitigate some failures but do not fully resolve multi-turn contextual risks.


Key Contributions

  • MTMCS-Bench: a 30k+ sample multi-turn multimodal contextual safety benchmark covering escalation-based and context-switch risk scenarios across 10 safety categories
  • Paired safe/unsafe dialogue design with multimodal and unimodal variants, enabling isolated measurement of intent recognition, safety-awareness, and benign helpfulness
  • Comprehensive evaluation of 15 open-source and proprietary MLLMs plus 5 guardrail methods, revealing persistent safety-utility trade-offs unresolved by current defenses
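The paired safe/unsafe design implies a simple scoring scheme: for each image and context, a judge labels the model's final response to the unsafe variant and the benign variant as refusal or compliance, and aggregate rates separate safety-awareness from over-refusal. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PairResult:
    """Judge labels for one paired dialogue (hypothetical schema)."""
    refused_unsafe: bool   # judge: model refused the unsafe variant
    refused_benign: bool   # judge: model refused the benign variant

def score(pairs: list[PairResult]) -> dict[str, float]:
    """Aggregate paired judgments into the three rates the benchmark separates."""
    n = len(pairs)
    safety_awareness = sum(p.refused_unsafe for p in pairs) / n
    over_refusal = sum(p.refused_benign for p in pairs) / n
    return {
        "safety_awareness": safety_awareness,   # refusal rate on unsafe dialogues
        "helpfulness": 1.0 - over_refusal,      # compliance rate on benign dialogues
        "over_refusal": over_refusal,           # the utility-side failure mode
    }

# Toy data: 4 paired dialogues, one missed risk, one over-refusal.
pairs = [PairResult(True, False), PairResult(False, False),
         PairResult(True, True), PairResult(True, False)]
print(score(pairs))
# {'safety_awareness': 0.75, 'helpfulness': 0.75, 'over_refusal': 0.25}
```

Because each unsafe dialogue has a benign twin over the same scene, a model cannot score well by blanket-refusing: over-refusal on the benign half directly lowers helpfulness.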

🛡️ Threat Analysis


Details

Domains
multimodal, nlp, vision
Model Types
vlm, llm, multimodal
Threat Tags
black_box, inference_time
Datasets
MTMCS-Bench
Applications
multimodal assistants, image-grounded dialogue systems