Towards Reasoning-Preserving Unlearning in Multimodal Large Language Models
Hongji Li 1, Junchi Yao 1, Manjiang Yu 2, Priyanka Singh 2, Xue Li 2, Di Wang 3, Lijie Hu 1
Published on arXiv (arXiv:2512.17911)
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
R-MUSE achieves substantially better balance between suppressing reasoning-level sensitive-information leakage and preserving general reasoning competence compared to existing MLLM and LRM unlearning methods on RMLLMU-Bench.
R-MUSE
Novel technique introduced
Machine unlearning aims to erase requested data from trained models without full retraining. For Reasoning Multimodal Large Language Models (RMLLMs), this is uniquely challenging: intermediate chain-of-thought steps can still leak sensitive information even when final answers are forgotten, and overly aggressive interventions easily damage general reasoning ability. Yet no benchmark jointly evaluates how well unlearning methods suppress reasoning-level leakage while preserving reasoning competence. We address this gap with RMLLMU-Bench, the first benchmark for RMLLM unlearning that extends standard forgetting metrics with dedicated measures of reasoning leakage and reasoning retention. A systematic evaluation on RMLLMU-Bench reveals that existing unlearning methods for MLLMs and Large Reasoning Models (LRMs) either leave substantial leakage in the reasoning process or severely degrade reasoning performance. Motivated by these findings, we propose R-MUSE (Reasoning-preserving MLLM Unlearning via Subspace guidance and Adaptive Steering), a training-free, inference-time intervention framework that steers internal representations to forget both answers and reasoning traces while explicitly preserving general reasoning. Experiments on RMLLMU-Bench demonstrate that R-MUSE achieves a substantially better balance between effective forgetting and reasoning retention.
Key Contributions
- RMLLMU-Bench: the first benchmark for reasoning MLLM unlearning, extending standard forgetting metrics with reasoning-leakage and reasoning-retention measures
- R-MUSE: a training-free, inference-time activation-steering framework that suppresses sensitive information in both final answers and chain-of-thought steps while preserving general reasoning via subspace projection and adaptive steering strength
- Empirical demonstration that existing MLLM unlearning methods leave substantial reasoning leakage, while LRM-style methods severely degrade reasoning ability when applied to multimodal models
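The core idea of subspace-guided, adaptive activation steering can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: we assume a "forget" direction computed as the mean activation on sensitive examples, project out its components lying in a retain subspace (so steering leaves general-reasoning features untouched), and scale steering strength by how strongly each hidden state aligns with the forget direction. Function names, the sigmoid gate, and all shapes are illustrative assumptions.

```python
import numpy as np

def build_steering_direction(forget_acts, retain_basis):
    """Unit 'forget' direction with retain-subspace components removed.

    forget_acts:  (n, dim) activations on sensitive (forget-set) inputs.
    retain_basis: (k, dim) directions spanning reasoning features to keep.
    (Hypothetical construction; the paper's subspace guidance may differ.)
    """
    d = forget_acts.mean(axis=0)
    # Orthonormalize the retain directions, then remove the part of d
    # inside span(retain_basis): d <- d - B (B^T d)
    B, _ = np.linalg.qr(retain_basis.T)
    d = d - B @ (B.T @ d)
    return d / (np.linalg.norm(d) + 1e-8)

def adaptive_steer(h, d, lam_max=1.0):
    """Steer hidden state h away from forget direction d at inference time.

    The steering coefficient grows with h's alignment to d (a sigmoid gate
    here, as an illustrative choice), so inputs unrelated to the sensitive
    concept are left nearly unchanged -- this is the 'adaptive' part.
    """
    proj = float(h @ d)                       # alignment with forget direction
    lam = lam_max / (1.0 + np.exp(-proj))     # adaptive steering strength
    return h - lam * proj * d                 # shrink the sensitive component
```

After steering, the component of `h` along `d` is scaled by `(1 - lam)`, while `d` itself is orthogonal to the retain subspace by construction, so activations spanned by `retain_basis` pass through untouched.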