
LeakyCLIP: Extracting Training Data from CLIP

Yunhao Chen , Shujie Wang , Xin Wang , Xingjun Ma



Published on arXiv: 2508.00756

Model Inversion Attack

OWASP ML Top 10 — ML03

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

LeakyCLIP achieves over 258% improvement in SSIM over baselines for ViT-B-16 on LAION-2B, and membership inference succeeds even from low-fidelity reconstructions.

LeakyCLIP

Novel technique introduced


Understanding the memorization and privacy leakage risks in Contrastive Language-Image Pretraining (CLIP) is critical for ensuring the security of multimodal models. Recent studies have demonstrated the feasibility of extracting sensitive training examples from diffusion models, with conditional diffusion models exhibiting a stronger tendency to memorize and leak information. In this work, we investigate data memorization and extraction risks in CLIP through the lens of CLIP inversion, a process that aims to reconstruct training images from text prompts. To this end, we introduce LeakyCLIP, a novel attack framework designed to achieve high-quality, semantically accurate image reconstruction from CLIP embeddings. We identify three key challenges in CLIP inversion: 1) non-robust features, 2) limited visual semantics in text embeddings, and 3) low reconstruction fidelity. To address these challenges, LeakyCLIP employs 1) adversarial fine-tuning to enhance optimization smoothness, 2) linear transformation-based embedding alignment, and 3) Stable Diffusion-based refinement to improve fidelity. Empirical results demonstrate the superiority of LeakyCLIP, achieving over 258% improvement in Structural Similarity Index Measure (SSIM) for ViT-B-16 compared to baseline methods on a LAION-2B subset. Furthermore, we uncover a pervasive leakage risk, showing that training data membership can even be successfully inferred from the metrics of low-fidelity reconstructions. Our work introduces a practical method for CLIP inversion while offering novel insights into the nature and scope of privacy risks in multimodal models.


Key Contributions

  • LeakyCLIP attack framework for high-fidelity training image reconstruction from CLIP embeddings via adversarial fine-tuning, linear embedding alignment, and Stable Diffusion refinement
  • Demonstrates 258%+ SSIM improvement over baseline CLIP inversion methods on LAION-2B
  • Shows membership inference is achievable even from low-fidelity reconstruction metrics, revealing a broader privacy leakage surface in CLIP
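The "linear embedding alignment" step above maps text embeddings into the image-embedding space before inversion. A minimal NumPy sketch of that idea, using synthetic stand-ins for paired CLIP text/image embeddings (the dimensions, noise level, and ground-truth map are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired embeddings: real ones would come from CLIP's text and
# image encoders on captioned images. Dimensions are illustrative.
N, D = 500, 32
A_true = rng.normal(size=(D, D))                  # unknown text->image relation
T = rng.normal(size=(N, D))                       # "text" embeddings
I = T @ A_true + 0.01 * rng.normal(size=(N, D))   # "image" embeddings (+ noise)

# Fit the alignment map W minimizing ||T W - I||_F by ordinary least squares.
W, *_ = np.linalg.lstsq(T, I, rcond=None)

# A new text embedding can now be mapped into image-embedding space,
# where the pixel-space inversion is then run.
t_new = rng.normal(size=D)
aligned = t_new @ W

residual = float(np.linalg.norm(T @ W - I) / np.linalg.norm(I))
print(round(residual, 4))  # small relative fitting error
```

Least squares is the simplest way to fit such a linear map; the point is that even a closed-form linear bridge between the two embedding spaces can inject visual semantics that raw text embeddings lack.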

🛡️ Threat Analysis

Model Inversion Attack

LeakyCLIP is a model inversion attack: an adversary reconstructs private training images from CLIP embeddings using text prompts, adversarial fine-tuning, and Stable Diffusion refinement, directly recovering data the model was trained on.
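The core of any such inversion is gradient-based optimization of an input to match a target embedding. A toy NumPy sketch of that loop, using a fixed linear "encoder" in place of a real CLIP vision tower (the encoder, dimensions, learning rate, and iteration count are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed linear map. Real CLIP uses a
# ViT/ResNet, but the embedding-matching objective has the same shape.
D_PIX, D_EMB = 64, 16
W_enc = rng.normal(size=(D_PIX, D_EMB)) / np.sqrt(D_PIX)

def encode(x):
    """L2-normalized embedding, mimicking CLIP's normalized feature space."""
    z = x @ W_enc
    return z / np.linalg.norm(z)

# Target embedding of a "private training image" the attacker wants back.
x_true = rng.normal(size=D_PIX)
z_target = encode(x_true)

# Invert by gradient ascent on cosine similarity between encode(x) and target.
x = rng.normal(size=D_PIX)
lr = 0.5
for _ in range(2000):
    z = x @ W_enc
    n = np.linalg.norm(z)
    cos = (z / n) @ z_target
    # Gradient of cosine similarity w.r.t. z (chain rule through the norm),
    # then pulled back to x through the linear encoder.
    grad_z = (z_target - cos * z / n) / n
    x += lr * (W_enc @ grad_z)

final_cos = float(encode(x) @ z_target)
print(round(final_cos, 3))  # approaches 1.0 when the inversion succeeds
```

In the real attack the encoder is non-differentiably messier in exactly the ways the paper targets: non-robust features make this loss landscape jagged, which is why LeakyCLIP adds adversarial fine-tuning to smooth it before optimizing.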

Membership Inference Attack

The paper explicitly demonstrates that training data membership can be inferred from low-fidelity reconstruction metrics, making membership inference a direct secondary contribution alongside the inversion attack.
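The simplest form of such an attack is a threshold test on a reconstruction-similarity score. A hedged sketch with synthetic score distributions (the score values, separation, and threshold are invented for illustration; the paper derives its scores from actual reconstruction metrics such as SSIM):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reconstruction-quality scores: stand-ins for e.g. SSIM between a
# prompt's reconstruction and a candidate image. The premise is that members
# of the training set tend to score slightly higher, even at low fidelity.
members = rng.normal(loc=0.35, scale=0.08, size=1000)      # in training set
non_members = rng.normal(loc=0.25, scale=0.08, size=1000)  # not in training set

scores = np.concatenate([members, non_members])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# Threshold attack: predict "member" whenever similarity exceeds tau.
tau = 0.30  # illustrative; in practice calibrated on held-out data
preds = (scores > tau).astype(float)
accuracy = float((preds == labels).mean())
print(round(accuracy, 3))  # well above the 0.5 chance level
```

The takeaway mirrors the paper's: the reconstructions need not be visually faithful for the attack to work; a consistent statistical gap in the metric between members and non-members is enough to leak membership.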


Details

Domains
vision, multimodal
Model Types
multimodal, diffusion, transformer
Threat Tags
white_box, inference_time, targeted, digital
Datasets
LAION-2B
Applications
multimodal contrastive learning, image-text representation models, CLIP-based systems