
On the MIA Vulnerability Gap Between Private GANs and Diffusion Models

Ilana Sebag 1,2, Jean-Yves Franceschi 1, Alain Rakotomamonjy 1, Alexandre Allauzen 2,3, Jamal Atif 2


Published on arXiv: 2509.03341

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

DP-GANs consistently exhibit significantly lower MIA vulnerability than DP-diffusion models under the same privacy budget ε, revealing a fundamental architecture-driven privacy robustness gap


Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. While both can be trained under differential privacy (DP) to protect sensitive data, their sensitivity to membership inference attacks (MIAs), a key threat to data confidentiality, remains poorly understood. In this work, we present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models. We begin by showing, through a stability-based analysis, that GANs exhibit fundamentally lower sensitivity to data perturbations than diffusion models, suggesting a structural advantage in resisting MIAs. We then validate this insight with a comprehensive empirical study using a standardized MIA pipeline to evaluate privacy leakage across datasets and privacy budgets. Our results consistently reveal a marked privacy robustness gap in favor of GANs, even in strong DP regimes, highlighting that model type alone can critically shape privacy leakage.


Key Contributions

  • Stability-based theoretical analysis proving DP-GANs have lower sensitivity to data perturbations than DP-diffusion models, formally bounding adversarial MIA advantage
  • First systematic empirical comparison of MIA leakage across DP-GANs and DP-diffusion models under identical privacy budgets using a standardized shadow-model pipeline
  • Demonstrates that the DP budget ε alone does not characterize membership leakage risk — model architecture is a critical and previously overlooked factor
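The standardized shadow-model pipeline referenced above can be sketched as follows. This is an illustrative simplification, not the authors' code: the adversary trains auxiliary (shadow) models on data it controls, records per-sample scores for known members and non-members, and calibrates an attack from those scores. Here the "attack" is reduced to a single loss threshold, and the loss values are synthetic.

```python
import numpy as np

def shadow_attack_threshold(shadow_member, shadow_nonmember):
    """Calibrate a loss-threshold attack from shadow-model scores.

    Picks the threshold that maximizes balanced accuracy on the
    shadow data: samples with loss <= tau are predicted as members.
    """
    candidates = np.concatenate([shadow_member, shadow_nonmember])
    accs = [(np.mean(shadow_member <= t) + np.mean(shadow_nonmember > t)) / 2
            for t in candidates]
    return candidates[int(np.argmax(accs))]

rng = np.random.default_rng(1)
# Hypothetical per-sample losses from a shadow generative model:
# training samples (members) tend to have lower loss.
shadow_in = rng.normal(0.7, 0.15, 500)    # member losses
shadow_out = rng.normal(1.0, 0.15, 500)   # non-member losses
tau = shadow_attack_threshold(shadow_in, shadow_out)

# The calibrated threshold is then applied to the target model's scores.
target_scores = rng.normal(0.85, 0.2, 10)
preds = target_scores <= tau  # True = predicted member
```

The same calibrated attack is applied identically to DP-GANs and DP-diffusion models, which is what makes the cross-architecture leakage comparison under a fixed ε meaningful.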

🛡️ Threat Analysis

Membership Inference Attack

The paper's entire focus is membership inference attacks against generative models — it proves theoretical bounds on adversarial MIA advantage and empirically measures leakage using a shadow-model framework, comparing DP-GANs vs DP-diffusion models.
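Empirical MIA leakage of the kind measured here is commonly summarized by the adversarial advantage, the best achievable gap between true-positive and false-positive rates over all decision thresholds. A minimal sketch on synthetic scores (names and score distributions are assumptions, not the paper's data):

```python
import numpy as np

def mia_advantage(member_scores, nonmember_scores):
    """Empirical MIA advantage: max over thresholds of TPR - FPR.

    Lower scores are assumed to indicate membership (e.g. lower loss
    on training samples). Advantage lies in [0, 1]; 0 means the attack
    cannot distinguish members from non-members.
    """
    thresholds = np.concatenate([member_scores, nonmember_scores])
    best = 0.0
    for t in thresholds:
        tpr = np.mean(member_scores <= t)     # members correctly flagged
        fpr = np.mean(nonmember_scores <= t)  # non-members wrongly flagged
        best = max(best, tpr - fpr)
    return best

rng = np.random.default_rng(0)
# Synthetic scores: members have slightly lower loss than non-members.
members = rng.normal(0.8, 0.2, 1000)
nonmembers = rng.normal(1.0, 0.2, 1000)
adv = mia_advantage(members, nonmembers)
```

In the paper's framing, the architecture-driven gap means this advantage comes out systematically lower for DP-GANs than for DP-diffusion models at the same ε.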


Details

Domains
vision, generative
Model Types
gan, diffusion
Threat Tags
black_box, inference_time, training_time
Applications
image synthesis, privacy-preserving generative modeling