
Published on arXiv

2510.24233

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

PRIVET reliably detects memorization and privacy leakage at the sample level across diverse modalities, outperforming existing global metrics by providing both interpretable dataset-level and sample-level assessments.

PRIVET

Novel technique introduced


Deep generative models are often trained on sensitive data, such as genetic sequences, health data, or, more broadly, any copyrighted, licensed, or protected content. This raises critical concerns around privacy-preserving synthetic data, and more specifically around privacy leakage, an issue closely tied to overfitting. Existing methods rely almost exclusively on global criteria to estimate the risk of privacy failure associated with a model, offering only quantitative, non-interpretable insights. The absence of rigorous evaluation methods for data privacy at the sample level may hinder the practical deployment of synthetic data in real-world applications. Using extreme value statistics on nearest-neighbor distances, we propose PRIVET, a generic, sample-based, modality-agnostic algorithm that assigns an individual privacy leak score to each synthetic sample. We empirically demonstrate that PRIVET reliably detects instances of memorization and privacy leakage across diverse data modalities, including settings with very high dimensionality and limited sample sizes, such as genetic data, and even under underfitting regimes. We compare our method to existing approaches under controlled settings and show its advantage in providing both dataset-level and sample-level assessments through qualitative and quantitative outputs. Additionally, our analysis reveals limitations of existing computer vision embeddings in yielding perceptually meaningful distances when identifying near-duplicate samples.


Key Contributions

  • PRIVET: a sample-level, modality-agnostic privacy leak score using extreme value statistics on nearest-neighbor distances in embedding space
  • Empirical validation across diverse modalities (images, genetic data) including high-dimensionality and limited-sample regimes
  • Analysis revealing limitations of existing computer vision embeddings for detecting near-duplicate training samples
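The core idea behind the contributions above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual PRIVET algorithm: each synthetic sample is scored by how extreme (small) its nearest-neighbor distance to the training set is in embedding space, using a generalized Pareto fit on the lower tail (a standard peaks-over-threshold construction from extreme value statistics). The function name `leak_scores`, the `tail_frac` parameter, and the choice of Euclidean distance and GPD tail model are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import genpareto

def leak_scores(train_emb, synth_emb, tail_frac=0.1):
    """Assign each synthetic sample a nearest-neighbor distance and a
    tail probability; low probability = suspiciously close to training data.

    Hypothetical sketch only -- not the published PRIVET algorithm.
    """
    # Nearest-neighbor distance from each synthetic sample to the training set
    diffs = synth_emb[:, None, :] - train_emb[None, :, :]
    nn = np.linalg.norm(diffs, axis=-1).min(axis=1)

    # Peaks-over-threshold on the *lower* tail of distances: negate so the
    # smallest distances become the largest exceedances
    neg = -nn
    thresh = np.quantile(neg, 1.0 - tail_frac)
    exceed = neg[neg > thresh] - thresh

    # Fit a generalized Pareto distribution to the exceedances (loc fixed at 0)
    c, _, scale = genpareto.fit(exceed, floc=0.0)

    # Survival probability under the fitted tail: small p flags a potential leak
    p = np.ones_like(nn)
    mask = neg > thresh
    p[mask] = genpareto.sf(neg[mask] - thresh, c, loc=0.0, scale=scale)
    return nn, p
```

In this toy setup, synthetic samples that are near-duplicates of training points receive near-zero nearest-neighbor distances and land in the fitted tail with low survival probability, while ordinary samples keep a score of 1.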

🛡️ Threat Analysis

Model Inversion Attack

PRIVET measures the extent to which generative model outputs are near-duplicates of private training samples — directly quantifying training data leakage/memorization. The paper's adversarial framing is that an observer of generated data could identify training samples; PRIVET detects this leakage at the sample level, covering the same threat as model inversion (training data reconstruction from model outputs).


Details

Domains
vision, generative
Model Types
GAN, diffusion
Threat Tags
training_time, inference_time
Datasets
genetic/genomic sequences, image datasets (unspecified, used for CV embedding analysis)
Applications
synthetic data generation, generative model privacy evaluation, genomic data generation