Defense · 2025

COSMO-RL: Towards Trustworthy LMRMs via Joint Safety and Stability

Yizhuo Ding 1,2, Mingkang Chen 2,3, Qiuhua Liu 2,4, Fenghua Weng 2,5, Wanying Qu 1,2, Yue Yang 2, Yugang Jiang 1, Zuxuan Wu 1, Yanwei Fu 1, Wenqi Shao 2

0 citations · 27 references · arXiv


Published on arXiv: 2510.04196

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

COSMO-R1 improves safety and multimodal jailbreak robustness while maintaining or improving reasoning and instruction following, with consistent gains across different backbone architectures.

COSMO-RL

Novel technique introduced


Large Multimodal Reasoning Models (LMRMs) are moving into real applications, where they must be both useful and safe. Safety is especially challenging in multimodal settings: images and text can be combined to bypass guardrails, and single-objective training can cause policy drift that yields over-refusal on benign inputs or unsafe compliance on risky ones. We present COSMO-RL, a mixed reinforcement learning framework that trains reasoning-oriented LMRMs under multimodal, multitask, and multiobjective signals, and we release the resulting model, COSMO-R1. Our approach aims to let safety and capability grow together in one stable pipeline rather than competing during alignment. In experiments, COSMO-R1 improves safety while maintaining, and often improving, multimodal reasoning and instruction following; it shows stronger robustness to multimodal jailbreaks and reduces unnecessary refusals. The framework also transfers across backbones with consistent gains. Ablations support the design choices, indicating a simple path to advancing safety and general capability together in LMRMs.
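The mixed multiobjective signal described in the abstract can be sketched as a simple per-sample reward that is positive for correct answers and safe refusals, and negative for over-refusal and unsafe compliance. This is a hedged illustration of the general idea, not the paper's actual reward: the function name, task labels, and weights below are all assumptions.

```python
# Hypothetical sketch of a mixed multiobjective RL reward combining safety
# and capability signals. All names and weights are illustrative assumptions,
# not COSMO-RL's actual implementation.

def mixed_reward(task_type, correct, refused, unsafe,
                 w_capability=1.0, w_safety=1.0):
    """Combine capability and safety signals into one scalar reward.

    task_type: "capability" (reasoning / instruction task) or "safety"
               (harmful or jailbreak prompt).
    correct:   the answer was correct (capability tasks only).
    refused:   the model refused to answer.
    unsafe:    the model complied with a harmful request.
    """
    if task_type == "capability":
        # Reward correct answers; penalize over-refusal on benign inputs.
        if refused:
            return -w_capability
        return w_capability if correct else 0.0
    else:  # safety task
        # Reward safe refusals; penalize unsafe compliance.
        if unsafe:
            return -w_safety
        return w_safety if refused else 0.0


# Training batches mix both task types, so a single policy update sees
# safety and capability rewards together instead of optimizing one at a time.
batch = [
    ("capability", True,  False, False),  # solved a reasoning task -> +1.0
    ("capability", False, True,  False),  # refused a benign task   -> -1.0
    ("safety",     False, True,  False),  # refused a jailbreak     -> +1.0
    ("safety",     False, False, True),   # unsafe compliance       -> -1.0
]
rewards = [mixed_reward(*example) for example in batch]
```

Penalizing over-refusal and unsafe compliance in the same scalar is one plausible way to discourage the policy drift the abstract describes, since the policy cannot improve one term by sacrificing the other.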


Key Contributions

  • COSMO-RL: a mixed RL framework training LMRMs under multimodal, multitask, and multiobjective reward signals to jointly optimize safety and capability
  • Demonstrates that safety and reasoning capability can improve together without policy drift causing over-refusal or unsafe compliance
  • COSMO-R1 model achieving stronger robustness to multimodal jailbreaks with gains transferring across model backbones

🛡️ Threat Analysis


Details

Domains
multimodal, nlp, vision, reinforcement-learning
Model Types
llm, vlm, multimodal
Threat Tags
inference_time, training_time
Applications
large multimodal reasoning models, safety alignment, multimodal jailbreak defense