attack · arXiv · Mar 1, 2026
Huajie Chen, Tianqing Zhu, Yuchen Zhong et al. · City University of Macau · CISPA Helmholtz Center for Information Security · University of Technology Sydney +1 more
Reveals that dataset distillation leaks training data via a three-stage attack: architecture inference, membership inference, and model inversion
Model Inversion Attack · Membership Inference Attack · vision
Dataset distillation compresses a large real dataset into a small synthetic one, enabling models trained on the synthetic data to achieve performance comparable to those trained on the real data. Although synthetic datasets are assumed to be privacy-preserving, we show that existing distillation methods can cause severe privacy leakage: because synthetic datasets implicitly encode the weight trajectories of the distilled model, they become over-informative and exploitable by adversaries. To expose this risk, we introduce the Information Revelation Attack (IRA) against state-of-the-art distillation techniques. Experiments show that IRA accurately predicts both the distillation algorithm and the model architecture, and can successfully infer membership and recover sensitive samples from the real dataset.
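The membership-inference stage of such attacks rests on a simple intuition: a model trained on distilled data still fits the original "member" samples more tightly than unseen data. The toy sketch below (not the paper's IRA; every model, dataset, and threshold here is an illustrative assumption) shows the classic loss-thresholding form of membership inference against a model trained on a crudely "distilled" set.

```python
# Toy sketch of loss-threshold membership inference (illustrative only).
# Assumptions: a trivial centroid "model", squared-distance "loss", and
# "distillation" mimicked by averaging pairs of member samples.
import random

random.seed(0)

def train_mean_model(points):
    """'Train' a trivial model: the centroid of its training points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def loss(model, point):
    """Squared distance of a sample to the model's centroid."""
    return (point[0] - model[0]) ** 2 + (point[1] - model[1]) ** 2

# "Real" member samples clustered near (1, 1); the synthetic set is a
# smaller dataset obtained by averaging pairs of members.
members = [(1 + random.gauss(0, 0.1), 1 + random.gauss(0, 0.1)) for _ in range(20)]
synthetic = [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
             for a, b in zip(members[::2], members[1::2])]
model = train_mean_model(synthetic)

# Non-members drawn from a different region of the input space.
non_members = [(3 + random.gauss(0, 0.1), 3 + random.gauss(0, 0.1)) for _ in range(20)]

THRESHOLD = 1.0  # assumed decision threshold

def infer_membership(point):
    """Guess 'member' when the model fits the sample unusually well."""
    return loss(model, point) < THRESHOLD

member_hits = sum(infer_membership(p) for p in members)
non_member_hits = sum(infer_membership(p) for p in non_members)
print(member_hits, non_member_hits)  # → 20 0
```

Even though the adversary only sees a model trained on the synthetic points, the synthetic set carries enough information about the real distribution to separate members from non-members cleanly; the paper's point is that real distillation methods leak far richer signals (algorithm, architecture, and even recoverable samples) than this toy suggests.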
cnn · diffusion