Defense · 2026

AM$^3$Safety: Towards Data Efficient Alignment of Multi-modal Multi-turn Safety for MLLMs

Han Zhu 1, Jiale Chen 2, Chengkun Cai 3, Shengjie Sun 4, Haoran Li 1, Yujin Zhou 1, Chi-Min Chan 1, Pengcheng Wen 1, Lei Li 5, Sirui Han 1, Yike Guo 1

0 citations · 38 references · arXiv


Published on arXiv · 2601.04736

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

AM³Safety achieves a >10% decrease in Attack Success Rate (ASR) and a >13% improvement in the helpful dimension on multi-turn multi-modal safety benchmarks, while preserving general model abilities.

AM³Safety

Novel technique introduced


Multi-modal Large Language Models (MLLMs) are increasingly deployed in interactive applications. However, their safety vulnerabilities become pronounced in multi-turn multi-modal scenarios, where harmful intent can be gradually reconstructed across turns and safety constraints are progressively forgotten as the conversation progresses. Existing Reinforcement Learning from Human Feedback (RLHF) alignment methods are largely developed for single-turn visual question-answering (VQA) tasks and often require costly manual preference annotations, limiting their effectiveness and scalability in dialogues. To address this challenge, we present InterSafe-V, an open-source multi-modal dialogue dataset containing 11,270 dialogues and 500 specially designed refusal VQA samples. Constructed through interactions among several models, the dataset is designed to more accurately reflect real-world scenarios and includes specialized VQA pairs tailored to specific domains. Building on this dataset, we propose AM$^3$Safety, a framework that combines a cold-start refusal phase with Group Relative Policy Optimization (GRPO) fine-tuning using turn-aware dual-objective rewards across entire dialogues. Experiments on Qwen2.5-VL-7B-Instruct and LLaVA-NeXT-7B show a decrease of more than 10\% in Attack Success Rate (ASR), together with improvements of at least 8\% in the harmless dimension and over 13\% in the helpful dimension on multi-modal multi-turn safety benchmarks, while preserving the models' general abilities.
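The GRPO fine-tuning mentioned in the abstract scores a group of sampled responses and normalizes each reward against its group rather than training a separate value model. A minimal sketch of that group-relative advantage computation is below; the function name and the epsilon smoothing are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of GRPO-style group-relative advantages (illustrative, not the
# paper's implementation). Each sampled response's reward is normalized
# against the statistics of its own sampling group:
#   A_i = (r_i - mean(r)) / (std(r) + eps)
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """Map a group of scalar rewards to zero-centered advantages."""
    mu = mean(rewards)
    # Sample std is undefined for a single response; fall back to 0.
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Responses that beat their group's average get positive advantages and are reinforced; below-average ones are discouraged, which is what lets GRPO dispense with a learned critic.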


Key Contributions

  • InterSafe-V: open-source multi-modal dialogue safety dataset with 11,270 dialogues and 500 refusal VQA pairs constructed via model-to-model interaction, eliminating manual annotation
  • AM³Safety framework combining cold-start refusal learning with GRPO fine-tuning and a turn-aware dual-objective reward that balances safety and helpfulness across full dialogue histories
  • Demonstrated a >10% ASR reduction and >8% / >13% improvements in the harmless / helpful dimensions for Qwen2.5-VL-7B-Instruct and LLaVA-NeXT-7B on multi-turn multi-modal safety benchmarks
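The turn-aware dual-objective reward described above balances safety against helpfulness at every turn of a dialogue. One plausible shape for such a reward is sketched below, with the safety weight ramping up in later turns (where the paper observes safety tends to erode); the weighting schedule and scorer interface are assumptions for illustration, not the paper's exact design.

```python
# Hedged sketch of a turn-aware dual-objective reward. Per-turn safety
# and helpfulness scores (each in [0, 1]) are blended into one
# dialogue-level reward, with the safety weight growing by `ramp` per
# turn, capped at 1. All parameter names are hypothetical.

def turn_aware_reward(safety_scores, helpful_scores, base_w=0.5, ramp=0.1):
    """Average the per-turn weighted blend of safety and helpfulness."""
    assert len(safety_scores) == len(helpful_scores) and safety_scores
    total = 0.0
    for t, (s, h) in enumerate(zip(safety_scores, helpful_scores)):
        w = min(1.0, base_w + ramp * t)  # safety matters more later on
        total += w * s + (1.0 - w) * h
    return total / len(safety_scores)
```

Under this schedule, a safety failure in a late turn costs more than the same failure in turn one, mirroring the multi-turn threat model where harmful intent is reconstructed gradually.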

🛡️ Threat Analysis


Details

Domains
multimodal, nlp
Model Types
vlm, llm
Threat Tags
inference_time, black_box
Datasets
InterSafe-V, Qwen2.5-VL-7B-Instruct benchmarks, LLaVA-NeXT-7B benchmarks
Applications
multi-turn multi-modal dialogue systems, visual question answering