MIDST Challenge at SaTML 2025: Membership Inference over Diffusion-models-based Synthetic Tabular data
Masoumeh Shafieinejad 1, Xi He 1,2, Mahshid Alinoori 1, John Jewell 1, Sana Ayromlou 3, Wei Pang 2, Veronica Chatrath 4, Gauri Sharma 5, Deval Pandya 1
Published on arXiv
2603.19185
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
Competition framework enabling quantitative evaluation of privacy guarantees in diffusion-based synthetic tabular data generators through MIA resistance testing
MIDST Challenge
Novel technique introduced
Synthetic data is often perceived as a silver-bullet solution to data anonymization and privacy-preserving data publishing. Drawn from generative models such as diffusion models, synthetic data is expected to preserve the statistical properties of the original dataset while remaining resilient to privacy attacks. Recent diffusion models have proven effective on a wide range of data types, but their privacy resilience, particularly for tabular formats, remains largely unexplored. The MIDST challenge sought a quantitative evaluation of the privacy gain of synthetic tabular data generated by diffusion models, with a specific focus on its resistance to membership inference attacks (MIAs). Given the heterogeneity and complexity of tabular data, MIAs were explored against multiple target models, including diffusion models for single tables of mixed data types and for multi-relational tables with interconnected constraints. As a key outcome, MIDST inspired the development of novel black-box and white-box MIAs tailored to these target diffusion models, enabling a comprehensive evaluation of their privacy efficacy. The MIDST GitHub repository is available at https://github.com/VectorInstitute/MIDST.
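The white-box MIAs referenced in the abstract typically exploit the fact that diffusion models tend to incur lower per-record loss on records they were trained on. The sketch below is a minimal, illustrative loss-threshold attack, not the competition's actual method: the per-record diffusion losses are simulated with toy distributions, and `loss_threshold_mia` and the threshold value are hypothetical names and settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-record diffusion (denoising) loss: records seen in
# training ("members") tend to score lower loss than unseen records.
member_losses = rng.normal(loc=0.8, scale=0.2, size=500)
nonmember_losses = rng.normal(loc=1.2, scale=0.2, size=500)

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (True) when the model's loss on a record is low."""
    return losses < threshold

threshold = 1.0  # in a real attack, calibrated e.g. on shadow-model losses
preds_members = loss_threshold_mia(member_losses, threshold)
preds_nonmembers = loss_threshold_mia(nonmember_losses, threshold)

# Balanced attack accuracy: members flagged plus non-members cleared.
accuracy = 0.5 * (preds_members.mean() + (1 - preds_nonmembers.mean()))
print(f"attack accuracy: {accuracy:.2f}")
```

With the toy loss gap above, the attack separates members from non-members well above chance, which is exactly the signal a privacy-preserving generator is supposed to suppress.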
Key Contributions
- First competition evaluating MIAs on diffusion-generated synthetic tabular data
- Extended MIA evaluation to multi-relational tabular data synthesis
- Inspired development of novel black-box and white-box MIAs for diffusion models on tabular data
🛡️ Threat Analysis
The core focus is membership inference attacks (MIAs) on diffusion models: determining whether specific data points were part of a model's training set. The paper organizes a competition specifically to test the MIA resistance of synthetic tabular data generators.
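In the black-box setting an attacker sees only the released synthetic table, and a common baseline signal is distance-to-closest-record (DCR): a candidate whose nearest synthetic row is unusually close is flagged as a likely training member. The sketch below is a hedged illustration of that idea on simulated numeric data; the leakage model (the generator emitting near-copies of training rows) and the `dcr` helper and threshold are assumptions for the example, not the challenge's scoring method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: the generator "memorizes" by emitting near-copies of 100
# training rows; the attacker only observes the synthetic table.
train = rng.normal(size=(200, 5))
synthetic = train[:100] + rng.normal(scale=0.05, size=(100, 5))
members = train[:100]                    # in training, closely reproduced
nonmembers = rng.normal(size=(100, 5))   # never seen by the generator

def dcr(records, synthetic):
    """Distance from each candidate record to its closest synthetic record."""
    diffs = records[:, None, :] - synthetic[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min(axis=1)

threshold = 0.5  # in a real attack, calibrated on a held-out reference set
member_rate = (dcr(members, synthetic) < threshold).mean()
nonmember_rate = (dcr(nonmembers, synthetic) < threshold).mean()
print(f"flagged members: {member_rate:.2f}, "
      f"flagged non-members: {nonmember_rate:.2f}")
```

A large gap between the two rates indicates memorization leaking through the synthetic table; a well-behaved generator should make the two distributions of DCR values hard to tell apart.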