How Worrying Are Privacy Attacks Against Machine Learning?
Published on arXiv (2511.10516)
Membership Inference Attack (OWASP ML Top 10 — ML04)
Model Inversion Attack (OWASP ML Top 10 — ML03)
Key Finding
Most privacy attacks against ML models appear substantially less effective in real-world conditions than a prima facie reading of the academic literature would suggest, casting doubt on overly strict regulatory responses such as mandatory differential privacy.
In several jurisdictions, the regulatory framework governing the release and sharing of personal data is being extended to machine learning (ML). The implicit assumption is that disclosing a trained ML model entails a privacy risk for the personal data used in training comparable to directly releasing those data. However, given a trained model, an adversary must still mount a privacy attack to make inferences about the training data. In this concept paper, we examine the main families of privacy attacks against predictive and generative ML, including membership inference attacks (MIAs), property inference attacks, and reconstruction attacks. Our discussion shows that most of these attacks seem less effective in the real world than a prima facie interpretation of the related literature could suggest.
Key Contributions
- Conceptual analysis of three major ML privacy attack families (MIA, property inference, reconstruction) and their real-world effectiveness
- Argues that privacy attacks are systematically less effective in realistic settings than benchmark literature results suggest
- Challenges the implicit regulatory assumption that releasing a trained ML model poses privacy risks comparable to releasing the raw training data
🛡️ Threat Analysis
Reconstruction attacks (model inversion, training data reconstruction) are directly covered as a distinct attack family against both predictive and generative ML models.
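To make the model inversion threat concrete, the following is a minimal sketch of a Fredrikson-style inversion: gradient ascent on the input to maximize a target class's confidence. The toy logistic model, synthetic Gaussian data, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian classes; class 1 is centered at mu1 -- the "secret"
# average training record an inversion attacker hopes to approximate.
mu1 = rng.normal(size=5)
X = np.vstack([rng.normal(-mu1, 1.0, size=(500, 5)),
               rng.normal(mu1, 1.0, size=(500, 5))])
y = np.array([0] * 500 + [1] * 500)

# Train logistic regression by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

# Inversion: start from noise and ascend the gradient of
# log P(y=1 | x), with a small L2 penalty keeping x bounded.
x = rng.normal(size=5)
for _ in range(200):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    x += 0.5 * ((1 - p) * w - 0.01 * x)

# The reconstruction should point in the direction of mu1.
cos = x @ mu1 / (np.linalg.norm(x) * np.linalg.norm(mu1))
print(f"cosine similarity with class-1 mean: {cos:.2f}")
```

Even on this deliberately easy setup, the attack only recovers a class-average direction rather than any individual record, which illustrates the gap between benchmark success and real-world disclosure that the paper emphasizes.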
Membership inference attacks are the central and most extensively discussed attack family in this paper, which assesses their real-world disclosure power as a core contribution.
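The basic MIA decision rule can be sketched in a few lines: guess "member" when the model's confidence in a record's true label exceeds a threshold, exploiting the fact that overfit models are more confident on records they memorized. The synthetic data, decision tree target model, and threshold below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "private" training records and disjoint non-member records
# drawn from the same distribution, with deliberately noisy labels.
X_mem = rng.normal(size=(200, 10))
y_mem = (X_mem[:, 0] + rng.normal(scale=2.0, size=200) > 0).astype(int)
X_non = rng.normal(size=(200, 10))
y_non = (X_non[:, 0] + rng.normal(scale=2.0, size=200) > 0).astype(int)

# An unregularized tree memorizes its training set almost perfectly.
model = DecisionTreeClassifier(random_state=0).fit(X_mem, y_mem)

def mia_guess(model, X, y, threshold=0.9):
    """Simplest MIA rule: flag a record as a training member when the
    model's predicted probability for its true label is very high."""
    proba = model.predict_proba(X)
    conf_true_label = proba[np.arange(len(y)), y]
    return conf_true_label > threshold

tpr = mia_guess(model, X_mem, y_mem).mean()   # members correctly flagged
fpr = mia_guess(model, X_non, y_non).mean()   # non-members wrongly flagged
print(f"member detection rate {tpr:.2f}, false alarm rate {fpr:.2f}")
```

The gap between the two rates is the attack's signal; against well-regularized models, or without the strong assumption that the attacker knows true labels and the data distribution, that gap shrinks, which is the kind of real-world caveat the paper's assessment turns on.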