
DP-GENG : Differentially Private Dataset Distillation Guided by DP-Generated Data

Shuo Shi 1, Jinghuai Zhang 2, Shijie Jiang 3, Chunyi Zhou 1, Yuyuan Li 4, Mengying Zhu 1, Yangyang Wu 1, Tianyu Du 1



Published on arXiv: 2511.09876

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

DP-GENG significantly outperforms existing DP-dataset distillation methods in both downstream model utility and robustness against membership inference attacks under the same privacy budget.

DP-GENG

Novel technique introduced


Dataset distillation (DD) compresses large datasets into smaller ones while preserving the performance of models trained on them. Although DD is often assumed to enhance data privacy by aggregating over individual examples, recent studies reveal that standard DD can still leak sensitive information from the original dataset due to the lack of formal privacy guarantees. Existing differentially private (DP)-DD methods attempt to mitigate this risk by injecting noise into the distillation process. However, they often fail to fully leverage the original dataset, resulting in degraded realism and utility. This paper introduces DP-GENG, a novel framework that addresses the key limitations of current DP-DD by leveraging DP-generated data. Specifically, DP-GENG initializes the distilled dataset with DP-generated data to enhance realism. The generated data then guides the DP feature-matching step, which distills the original dataset under a small privacy budget, and is used to train an expert model that aligns the distilled examples with their class distribution. Furthermore, we design a privacy budget allocation strategy to determine budget consumption across DP components and provide a theoretical analysis of the overall privacy guarantees. Extensive experiments show that DP-GENG significantly outperforms state-of-the-art DP-DD methods in terms of both dataset utility and robustness against membership inference attacks, establishing a new paradigm for privacy-preserving dataset distillation.
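To make the "DP feature matching under a small privacy budget" idea concrete, here is a minimal, illustrative sketch: distilled features are pulled toward a privatized mean of the real features, where per-example clipping bounds sensitivity and Gaussian noise provides the (ε, δ)-DP guarantee. The function name, learning rate, and single-step formulation are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def dp_feature_matching_step(real_feats, distilled_feats, epsilon, delta,
                             clip_norm=1.0, lr=0.1, rng=None):
    """One illustrative step pulling distilled features toward a DP-noised
    mean of the real features (a sketch, not the paper's exact method)."""
    rng = rng or np.random.default_rng()

    # Clip each real example's feature vector to bound L2 sensitivity.
    norms = np.linalg.norm(real_feats, axis=1, keepdims=True)
    clipped = real_feats * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Gaussian mechanism: calibrate noise for (epsilon, delta)-DP on the mean.
    n = real_feats.shape[0]
    sensitivity = clip_norm / n  # L2 sensitivity of the mean over n examples
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noisy_mean = clipped.mean(axis=0) + rng.normal(0, sigma, real_feats.shape[1])

    # Move the distilled features toward the private target mean.
    grad = distilled_feats.mean(axis=0) - noisy_mean
    return distilled_feats - lr * grad
```

A smaller ε forces a larger σ, so the matching target becomes noisier; this is exactly the utility/privacy trade-off that guiding the step with DP-generated data is meant to soften.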


Key Contributions

  • DP-GENG framework that initializes distilled datasets with DP-generated data to improve realism while preserving formal differential privacy guarantees
  • DP-feature matching technique guided by DP-generated data that distills original datasets under a small privacy budget with an expert model for class distribution alignment
  • Privacy budget allocation strategy across DP components with theoretical analysis; empirical demonstration of superior utility and MIA robustness over state-of-the-art DP-DD methods
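The budget-allocation idea can be sketched with basic sequential composition: each DP component (generation, feature matching, expert training) consumes a fixed share of the total ε, and the shares sum to the overall budget. The component names and proportions below are illustrative assumptions, not the paper's actual allocation strategy.

```python
def allocate_budget(total_eps, weights):
    """Split a total privacy budget across DP components in fixed
    proportions (a simple sequential-composition sketch)."""
    total_weight = sum(weights.values())
    return {name: total_eps * w / total_weight for name, w in weights.items()}

# Hypothetical split across three DP components under a total budget of 1.0:
budgets = allocate_budget(1.0, {
    "dp_generation": 0.5,       # DP-generated initialization
    "dp_feature_matching": 0.3, # distillation under a small budget
    "expert_model": 0.2,        # class-distribution alignment
})
# Under sequential composition, component epsilons sum to the total budget.
assert abs(sum(budgets.values()) - 1.0) < 1e-9
```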

🛡️ Threat Analysis

Membership Inference Attack

The paper explicitly evaluates robustness against membership inference attacks as a primary metric, and the DP-GENG framework is motivated by and measured against the threat of an adversary determining whether specific data points were used in training via dataset distillation.
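The simplest form of this adversary is loss-threshold membership inference (in the style of Yeom et al.): because models tend to overfit, training members typically incur lower loss than non-members, so an attacker flags low-loss examples as members. The sketch below is a generic illustration of the attack class, not the specific attack evaluated in the paper.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Loss-thresholding membership inference: predict 'member' when the
    target model's loss on an example falls below the threshold."""
    return losses < threshold

# Illustrative losses: members of an overfit model score low, non-members high.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.70])
member_preds = loss_threshold_mia(member_losses, threshold=0.5)      # all True
nonmember_preds = loss_threshold_mia(nonmember_losses, threshold=0.5)  # all False
```

Robustness against this threat is typically reported as attack accuracy or AUC near 0.5 (random guessing), which is what a DP guarantee bounds by design.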


Details

Domains
vision
Model Types
diffusion, cnn, generative
Threat Tags
training_time, black_box
Datasets
CIFAR-10, CIFAR-100, TinyImageNet
Applications
dataset distillation, privacy-preserving machine learning