
Rethinking Anonymity Claims in Synthetic Data Generation: A Model-Centric Privacy Attack Perspective

Georgi Ganev 1,2, Emiliano De Cristofaro 3

0 citations · 157 references · arXiv


Published on arXiv · 2601.22434

Membership Inference Attack (OWASP ML Top 10 — ML04)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

Demonstrates that current dataset-level anonymity assessments are insufficient and that model-centric evaluation is required; DP provides robust protection against identifiability risks, while SBPMs do not.
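For intuition on the DP side of this finding: DP mechanisms bound any single record's influence by adding noise calibrated to a privacy budget ε. Below is a minimal, illustrative sketch (not from the paper) of the classic Laplace mechanism for a scalar query with known sensitivity, sampled via inverse-CDF from a uniform draw:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from u ~ Uniform(-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_value + noise
```

Smaller ε means a larger noise scale and stronger protection; SBPMs, by contrast, offer no such calibrated worst-case guarantee.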


Training generative machine learning models to produce synthetic tabular data has become a popular approach for enhancing privacy in data sharing. As this typically involves processing sensitive personal information, releasing either the trained model or generated synthetic datasets can still pose privacy risks. Yet, recent research, commercial deployments, and privacy regulations like the General Data Protection Regulation (GDPR) largely assess anonymity at the level of an individual dataset. In this paper, we rethink anonymity claims about synthetic data from a model-centric perspective and argue that meaningful assessments must account for the capabilities and properties of the underlying generative model and be grounded in state-of-the-art privacy attacks. This perspective better reflects real-world products and deployments, where trained models are often readily accessible for interaction or querying. We interpret the GDPR's definitions of personal data and anonymization under such access assumptions to identify the types of identifiability risks that must be mitigated and map them to privacy attacks across different threat settings. We then argue that synthetic data techniques alone do not ensure sufficient anonymization. Finally, we compare the two mechanisms most commonly used alongside synthetic data -- Differential Privacy (DP) and Similarity-based Privacy Metrics (SBPMs) -- and argue that while DP can offer robust protections against identifiability risks, SBPMs lack adequate safeguards. Overall, our work connects regulatory notions of identifiability with model-centric privacy attacks, enabling more responsible and trustworthy regulatory assessment of synthetic data systems by researchers, practitioners, and policymakers.


Key Contributions

  • Reinterprets GDPR anonymization risks (singling out, linkability, inferences) in the context of generative models and maps them to specific privacy attacks: differencing attacks, membership inference, and attribute inference respectively
  • Argues that dataset-centric anonymity assessments are insufficient and model-centric evaluation against state-of-the-art privacy attacks is necessary for real-world synthetic data deployments
  • Compares Differential Privacy and Similarity-based Privacy Metrics (SBPMs), concluding DP offers robust protections while SBPMs lack adequate safeguards against the identified attacks
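The risk-to-attack mapping in the first bullet can be expressed as a simple lookup table; the sketch below is illustrative, and the key/value names are my own shorthand rather than the paper's terminology:

```python
# Mapping of GDPR anonymization risks to the model-centric privacy
# attacks the paper associates with them (names are illustrative).
GDPR_RISK_TO_ATTACK = {
    "singling_out": "differencing attack",
    "linkability": "membership inference",
    "inferences": "attribute inference",
}

def attack_for(risk: str) -> str:
    """Return the privacy attack that operationalizes a GDPR risk."""
    return GDPR_RISK_TO_ATTACK[risk]
```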

🛡️ Threat Analysis

Model Inversion Attack

Differencing attacks (singling out individual training records) and attribute inference (recovering private attributes from trained models) are both addressed as model-centric data reconstruction threats — the paper explicitly covers adversaries trying to extract individual training data from generative models.
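As a toy illustration of the differencing idea (my own sketch, not the paper's construction): given two aggregate releases that differ only in whether the target record was included, the target's value falls out by subtraction:

```python
def differencing_attack(mean_with, mean_without, n_with):
    """Recover the target's value from two mean releases: one over
    n_with records including the target, one over n_with - 1 without it."""
    return mean_with * n_with - mean_without * (n_with - 1)

# Example: salaries [50000, 60000, 70000] -> mean 60000 over 3 records;
# without the 70000 target -> mean 55000 over 2; subtraction recovers 70000.
```

The same logic applies to any model or synthetic dataset whose outputs shift measurably when one training record is added or removed, which is exactly the influence DP is designed to bound.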

Membership Inference Attack

Membership inference is explicitly mapped as the central attack for 'linkability' risk under GDPR — the paper analyzes MIA threat models against generative models and evaluates DP vs SBPMs as defenses against it.
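A common baseline MIA against generative models (a sketch under my own assumptions, not the paper's evaluation protocol) thresholds the target record's distance to its nearest synthetic record:

```python
import math

def nearest_distance(record, synthetic_rows):
    """Euclidean distance from the target record to its closest synthetic row."""
    return min(math.dist(record, row) for row in synthetic_rows)

def infer_membership(record, synthetic_rows, threshold):
    # Guess "member" when the target sits unusually close to a synthetic
    # record: an overfit generator tends to reproduce training points.
    return nearest_distance(record, synthetic_rows) < threshold
```

Note the irony the paper's SBPM critique highlights: similarity-based metrics use essentially this same distance signal as a *pass/fail filter*, yet provide no worst-case guarantee against an adversary who runs the attack directly.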


Details

Domains
tabular, generative
Model Types
generative, gan, diffusion, transformer, traditional_ml
Threat Tags
black_box, inference_time
Applications
synthetic tabular data generation, privacy-preserving data sharing, healthcare data, financial data