benchmark · arXiv · Dec 2, 2025
Ziyi Tong, Feifei Sun, Le Minh Nguyen · Japan Advanced Institute of Science and Technology
Evaluates text-based membership inference attacks on multimodal LLMs, finding visual inputs mask MIA signals in out-of-distribution settings
Membership Inference Attack · multimodal · vision · nlp
Multimodal Large Language Models (MLLMs) are emerging as foundational tools in an expanding range of applications. Consequently, understanding training-data leakage in these systems is increasingly critical. Log-probability-based membership inference attacks (MIAs) have become a widely adopted approach for assessing data exposure in large language models (LLMs), yet their effectiveness on MLLMs remains unclear. We present the first comprehensive evaluation of extending these text-based MIA methods to multimodal settings. Our experiments under vision-and-text (V+T) and text-only (T-only) conditions across the DeepSeek-VL and InternVL model families show that in in-distribution settings, logit-based MIAs perform comparably across configurations, with a slight V+T advantage. Conversely, in out-of-distribution settings, visual inputs act as regularizers, effectively masking membership signals.
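To illustrate the class of attacks evaluated, here is a minimal sketch of one common log-probability-based membership score, the Min-K% Prob attack (a representative variant; the abstract does not name the specific attacks used, so this is an assumption for illustration). The score averages the lowest k fraction of per-token log-probabilities; higher scores suggest the sequence was more likely seen during training.

```python
import numpy as np

def min_k_prob_score(token_logprobs, k=0.2):
    """Min-K% Prob membership score: the mean of the lowest k
    fraction of per-token log-probabilities. Training members
    tend to have few very-low-probability tokens, so they score
    higher than non-members."""
    lp = np.sort(np.asarray(token_logprobs, dtype=float))
    n = max(1, int(len(lp) * k))  # number of lowest-probability tokens kept
    return float(lp[:n].mean())

# Hypothetical per-token log-probs from a model's forward pass
member_lp = [-0.1, -0.3, -0.2, -0.5, -0.4]      # likely a training member
nonmember_lp = [-2.1, -3.5, -0.9, -4.2, -1.8]   # likely unseen text
assert min_k_prob_score(member_lp) > min_k_prob_score(nonmember_lp)
```

In a V+T versus T-only comparison, the same scoring function would be applied to the text tokens' log-probabilities, with the image either included in or withheld from the model's input context.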
vlm · llm · transformer