
Do Vision-Language Models Leak What They Learn? Adaptive Token-Weighted Model Inversion Attacks

Ngoc-Bao Nguyen 1, Sy-Tuyen Ho 1,2, Koh Jun Hao 1, Ngai-Man Cheung 1


Published on arXiv (arXiv:2508.04097)

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Human evaluation of images reconstructed from VLMs yields 61.21% attack accuracy, demonstrating severe privacy risks from training data leakage in publicly released VLMs.

SMI-AW (Sequence-based Model Inversion with Adaptive Token Weighting)

Novel technique introduced


Model inversion (MI) attacks pose significant privacy risks by reconstructing private training data from trained neural networks. While prior studies have primarily examined unimodal deep networks, the vulnerability of vision-language models (VLMs) remains largely unexplored. In this work, we present the first systematic study of MI attacks on VLMs to understand their susceptibility to leaking private visual training data. Our work makes two main contributions. First, tailored to the token-generative nature of VLMs, we introduce a suite of token-based and sequence-based model inversion strategies, providing a comprehensive analysis of VLMs' vulnerability under different attack formulations. Second, based on the observation that tokens vary in their visual grounding, and hence their gradients differ in informativeness for image reconstruction, we propose Sequence-based Model Inversion with Adaptive Token Weighting (SMI-AW) as a novel MI attack for VLMs. SMI-AW dynamically reweights each token's loss gradient according to its visual grounding, enabling the optimization to focus on visually informative tokens and more effectively guide the reconstruction of private images. Through extensive experiments and human evaluations on a range of state-of-the-art VLMs across multiple datasets, we show that VLMs are susceptible to training data leakage. Human evaluation of the reconstructed images yields an attack accuracy of 61.21%, underscoring the severity of these privacy risks. Notably, we demonstrate that publicly released VLMs are vulnerable to such attacks. Our study highlights the urgent need for privacy safeguards as VLMs become increasingly deployed in sensitive domains such as healthcare and finance. Additional experiments are provided in the Supplementary Material.


Key Contributions

  • First systematic study of model inversion attacks on VLMs, introducing a suite of token-based and sequence-based MI strategies tailored to the token-generative nature of VLMs.
  • SMI-AW: Sequence-based Model Inversion with Adaptive Token Weighting, which dynamically reweights each token's loss gradient according to its visual grounding to focus optimization on visually informative tokens.
  • Empirical demonstration that publicly released VLMs are vulnerable to training data reconstruction, achieving 61.21% human-evaluated attack accuracy across multiple state-of-the-art VLMs.
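The core idea behind SMI-AW, reweighting each token's contribution to the inversion loss by how visually grounded it is, can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it uses per-token gradient norms as a stand-in proxy for visual grounding and a softmax to turn them into weights (`adaptive_token_weights` and the `temperature` parameter are hypothetical names, not from the paper).

```python
import numpy as np

def adaptive_token_weights(grad_norms, temperature=1.0):
    """Softmax over per-token gradient norms: tokens whose loss
    gradients w.r.t. the image are larger (assumed here to be more
    visually grounded) receive higher weight. Illustrative proxy;
    the paper's exact grounding measure may differ."""
    scaled = np.asarray(grad_norms, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()

def weighted_sequence_loss(token_losses, grad_norms):
    """Sequence-level MI objective as a token-weighted sum, rather
    than the uniform average of plain sequence-based inversion."""
    w = adaptive_token_weights(grad_norms)
    return float(np.dot(w, np.asarray(token_losses, dtype=float)))
```

In a full attack, the weighted loss would be backpropagated through the frozen VLM to update the candidate image, so optimization effort concentrates on tokens that actually describe visual content rather than function words.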

🛡️ Threat Analysis

Model Inversion Attack

The paper's primary contribution is a novel model inversion attack (SMI-AW) that reconstructs private visual training data from VLMs by exploiting gradient information, with an explicit adversary threat model targeting data reconstruction.
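The white-box reconstruction loop this threat model implies can be illustrated with a toy example. The sketch below replaces the VLM with a simple quadratic loss so it stays self-contained; a real attack would instead backpropagate the (token-weighted) captioning loss for a target identity through the VLM's vision encoder to the image pixels. All names here (`x_private`, `loss_and_grad`) are hypothetical.

```python
import numpy as np

# Toy stand-in: the "VLM loss" is the squared distance between the
# attacker's candidate image x and a hidden training image x_private.
rng = np.random.default_rng(0)
x_private = rng.random(16)      # hidden training "image" (flattened)
x = np.zeros(16)                # attacker's initial guess

def loss_and_grad(x):
    """Return the toy inversion loss and its gradient w.r.t. x."""
    diff = x - x_private
    return float(diff @ diff), 2.0 * diff

lr = 0.1
for _ in range(200):            # gradient-descent inversion loop
    loss, g = loss_and_grad(x)
    x -= lr * g                 # update the candidate image
```

After a few hundred steps the candidate converges to the hidden image, which is exactly why white-box gradient access to a model trained on private data is treated as a reconstruction threat.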


Details

Domains
vision, multimodal
Model Types
vlm, multimodal
Threat Tags
white_box, training_time, targeted, digital
Applications
vision-language models, image classification, healthcare imaging, financial data