
Curation Leaks: Membership Inference Attacks against Data Curation for Machine Learning

Dariush Wahdany 1, Matthew Jagielski 2, Adam Dziedzic 1, Franziska Boenisch 1



Published on arXiv: 2603.00811

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Each stage of the data curation pipeline leaks membership information about the private guidance dataset, including final models trained solely on public curated data, with differentially private curation effectively mitigating leakage.

Curation Leaks

Novel technique introduced


In machine learning, curation is used to select the most valuable data for improving both model accuracy and computational efficiency. Recently, curation has also been explored as a solution for private machine learning: rather than training directly on sensitive data, which is known to leak information through model predictions, the private data is used only to guide the selection of useful public data. The resulting model is then trained solely on curated public data. It is tempting to assume that such a model is privacy-preserving because it has never seen the private data. Yet, we show that without further protection, curation pipelines can still leak private information. Specifically, we introduce novel attacks against popular curation methods, targeting every major step: the computation of curation scores, the selection of the curated subset, and the final trained model. We demonstrate that each stage reveals information about the private dataset and that even models trained exclusively on curated public data leak membership information about the private data that guided curation. These findings highlight the previously overlooked inherent privacy risks of data curation and show that privacy assessment must extend beyond the training procedure to include the data selection process. Our differentially private adaptations of curation methods effectively mitigate leakage, indicating that formal privacy guarantees for curation are a promising direction.
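The pipeline the abstract describes can be sketched in a few lines. This is a simplified, hypothetical stand-in: real curation methods use richer scores (e.g., gradient- or loss-based), but the structure is the same — private data produces per-public-sample scores, and only the top-scoring public samples are used for training. All function names here are illustrative, not the paper's.

```python
import numpy as np

def curation_scores(public_emb, private_emb):
    """Score each public sample by its mean cosine similarity to the
    private guidance set (a toy stand-in for real curation scores)."""
    pub = public_emb / np.linalg.norm(public_emb, axis=1, keepdims=True)
    priv = private_emb / np.linalg.norm(private_emb, axis=1, keepdims=True)
    return (pub @ priv.T).mean(axis=1)

def curate(public_emb, private_emb, k):
    """Select the k public samples that score highest against the
    private data; only these are released for model training."""
    scores = curation_scores(public_emb, private_emb)
    return np.argsort(scores)[-k:], scores
```

The key point of the paper is that both the returned scores and the selected indices are functions of the private data, so each is a potential leakage channel even though the final model never sees the private samples.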


Key Contributions

  • Introduces the first systematic membership inference attacks against data curation pipelines, targeting curation score computation, subset selection, and the final trained model
  • Demonstrates that models trained exclusively on curated public data still leak membership information about the private data that guided curation
  • Proposes differentially private adaptations of curation methods as effective mitigations against curation leakage
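The last bullet, differentially private curation, can be illustrated with a standard Gaussian-mechanism sketch: clip each private sample's contribution to the public scores, then add noise calibrated to the clip bound, so no single private sample measurably shifts the released scores. This is a generic DP pattern under assumed parameters (`clip`, `sigma`), not the paper's specific construction.

```python
import numpy as np

def dp_curation_scores(scores_per_private, clip, sigma, rng):
    """Gaussian-mechanism sketch for private curation scores.

    scores_per_private: array of shape (n_private, n_public), where row i
    is private sample i's contribution to each public sample's score.
    Clipping bounds each row's influence; Gaussian noise scaled to the
    clip bound masks any individual contribution (illustrative only)."""
    contrib = np.clip(scores_per_private, -clip, clip)
    total = contrib.sum(axis=0)
    noise = rng.normal(0.0, sigma * clip, size=total.shape)
    return total + noise
```

Subset selection on the noised scores then inherits the differential privacy guarantee by post-processing, which is the mechanism by which DP curation mitigates the downstream membership leakage.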

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is novel membership inference attacks targeting every stage of data curation: curation score computation, subset selection, and the final trained model — all aimed at determining whether specific private samples guided the curation process.
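A minimal version of the subset-selection attack surface can be sketched as a threshold test: if a candidate sample was in the private guidance set, the curated (selected) public samples should sit systematically closer to it than the rejected ones. Real membership inference attacks calibrate the threshold with shadow runs; here it is a free parameter, and the whole function is an illustrative toy rather than the paper's attack.

```python
import numpy as np

def membership_guess(candidate, public_emb, selected_idx, threshold):
    """Toy membership test against the curated subset: guess 'member'
    if the selected public samples are, on average, more similar to the
    candidate than the rejected ones by more than `threshold`."""
    c = candidate / np.linalg.norm(candidate)
    pub = public_emb / np.linalg.norm(public_emb, axis=1, keepdims=True)
    sims = pub @ c
    selected = np.zeros(len(public_emb), dtype=bool)
    selected[selected_idx] = True
    gap = sims[selected].mean() - sims[~selected].mean()
    return gap > threshold
```

The same gap statistic can be computed against curation scores (grey-box) or against the final model's outputs (black-box), matching the three attack surfaces the paper targets.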


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
black_box, grey_box, training_time, inference_time
Applications
data curation for machine learning, privacy-preserving training pipelines