A Law of Data Reconstruction for Random Features (and Beyond)
Leonardo Iurada 1, Simone Bombari 2, Tatiana Tommasi 1, Marco Mondelli 2
Published on arXiv (arXiv:2509.22214)
Model Inversion Attack
OWASP ML Top 10 (ML03)
Key Finding
The entire training dataset can be recovered from model parameters when the parameter count p exceeds the threshold dn, where d is the input dimension and n is the number of training samples; the result is validated empirically across multiple neural architectures.
Large-scale deep learning models are known to memorize parts of the training set. In machine learning theory, memorization is often framed as interpolation or label fitting, and classical results show that this can be achieved when the number of parameters $p$ in the model is larger than the number of training samples $n$. In this work, we consider memorization from the perspective of data reconstruction, demonstrating that this can be achieved when $p$ is larger than $dn$, where $d$ is the dimensionality of the data. More specifically, we show that, in the random features model, when $p \gg dn$, the subspace spanned by the training samples in feature space gives sufficient information to identify the individual samples in input space. Our analysis suggests an optimization method to reconstruct the dataset from the model parameters, and we demonstrate that this method performs well on various architectures (random features, two-layer fully-connected and deep residual networks). Our results reveal a law of data reconstruction, according to which the entire training dataset can be recovered as $p$ exceeds the threshold $dn$.
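To make the reconstruction idea concrete, below is a toy sketch, not the paper's exact algorithm: in a random features model with $p \gg dn$ (here simplified to $n = 1$), an attacker who knows the random first-layer weights `W` and a feature-space target `t` derived from the trained parameters can invert the feature map by gradient descent on the input. The specific loss, learning rate, and dimensions are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's exact method): recover a single training
# point x1 from a random-features model in the overparameterized regime
# p >> d*n (here n = 1). We assume the attacker knows the random feature
# weights W and a feature-space vector t = relu(W @ x1) exposed by the
# trained parameters, and inverts the feature map by gradient descent.
rng = np.random.default_rng(0)
d, p = 10, 2000                                # input dim d, feature dim p >> d*n
W = rng.standard_normal((p, d)) / np.sqrt(d)   # known random first-layer weights

x1 = rng.standard_normal(d)
x1 /= np.linalg.norm(x1)                       # hidden training sample (unit norm)
t = np.maximum(W @ x1, 0.0)                    # its feature-space representation

x = 0.01 * rng.standard_normal(d)              # attacker's initial guess
lr = 0.5
for _ in range(1000):
    z = W @ x
    r = np.maximum(z, 0.0) - t                 # feature-space residual
    # gradient of (1/p) * ||relu(W x) - t||^2 with respect to x
    grad = (2.0 / p) * (W.T @ (r * (z > 0)))
    x -= lr * grad

print("reconstruction error:", np.linalg.norm(x - x1))
```

Because $p \gg d$, the feature map is effectively invertible on the training sample, and plain gradient descent drives the residual to zero, recovering `x1` up to numerical error.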
Key Contributions
- Theoretical law of data reconstruction: proves that when p >> dn the subspace of training samples in feature space uniquely identifies each sample in input space
- Optimization-based algorithm for reconstructing the full training dataset from model parameters
- Empirical validation across random features, two-layer fully-connected networks, and deep residual networks confirming the p > dn threshold
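To put the p > dn threshold in concrete terms, here is a back-of-envelope sizing calculation. The dataset shapes are standard published figures for MNIST and CIFAR-10, used here for illustration; they are not taken from the paper's experiments.

```python
# Back-of-envelope check of the p > d*n reconstruction threshold for two
# standard image datasets (illustrative sizes, not the paper's experiments).
datasets = {
    "MNIST":    (28 * 28,     60_000),   # grayscale 28x28, 60k train samples
    "CIFAR-10": (3 * 32 * 32, 50_000),   # RGB 32x32, 50k train samples
}
thresholds = {name: d * n for name, (d, n) in datasets.items()}
for name, dn in thresholds.items():
    print(f"{name}: models with p > {dn:,} parameters cross the d*n threshold")
```

For MNIST this gives roughly 4.7e7 parameters and for CIFAR-10 roughly 1.5e8, both well within the size of modern overparameterized networks, which is what makes the threshold practically relevant.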
🛡️ Threat Analysis
The core contribution is demonstrating that training data can be reconstructed from model parameters when p > dn, i.e., a white-box training data reconstruction attack. The paper both proves this threshold theoretically and proposes an optimization method to execute the reconstruction, validating it on random features, two-layer fully-connected, and deep residual (ResNet) architectures.