Hubble: a Model Suite to Advance the Study of LLM Memorization
Johnny Tian-Zheng Wei 1, Ameya Godbole 1, Mohammad Aflah Khan 2, Ryan Wang 1, Xiaoyuan Zhu 1, James Flemings 1, Nitya Kashyap 1, Krishna P. Gummadi 2, Willie Neiswanger 1, Robin Jia 1
Published on arXiv: 2510.19811
Model Inversion Attack
OWASP ML Top 10 — ML03
Membership Inference Attack
OWASP ML Top 10 — ML04
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
A password appearing once is memorized more strongly in a smaller corpus than in a larger one, and sensitive data without continued re-exposure is eventually forgotten; this suggests dilution and early ordering of sensitive data as best practices
Hubble
Novel technique introduced
We present Hubble, a suite of fully open-source large language models (LLMs) for the scientific study of LLM memorization. Hubble models come in standard and perturbed variants: standard models are pretrained on a large English corpus, and perturbed models are trained in the same way but with controlled insertion of text (e.g., book passages, biographies, and test sets) designed to emulate key memorization risks. Our core release includes 8 models -- standard and perturbed models with 1B or 8B parameters, pretrained on 100B or 500B tokens -- establishing that memorization risks are determined by the frequency of sensitive data relative to the size of the training corpus (i.e., a password appearing once in a smaller corpus is memorized better than the same password in a larger corpus). Our release also includes 6 perturbed models with text inserted at different pretraining phases, showing that sensitive data without continued exposure can be forgotten. These findings suggest two best practices for addressing memorization risks: to dilute sensitive data by increasing the size of the training corpus, and to order sensitive data to appear earlier in training. Beyond these general empirical findings, Hubble enables a broad range of memorization research; for example, analyzing the biographies reveals how readily different types of private information are memorized. We also demonstrate that the randomized insertions in Hubble make it an ideal testbed for membership inference and machine unlearning, and invite the community to further explore, benchmark, and build upon our work.
Key Contributions
- Suite of 14 open-source LLMs (1B/8B params, 100B/500B tokens) with standard and controlled-perturbation variants embedding passwords, biographies, and test sets at known frequencies and phases
- Empirical finding that memorization risk scales with sensitive data frequency relative to corpus size, and diminishes when sensitive data appears early in training without repeated exposure
- Controlled ground-truth testbed enabling rigorous benchmarking of membership inference attacks and machine unlearning methods
🛡️ Threat Analysis
Studies how LLMs memorize and expose training data (passwords, biographies, copyrighted text). Extraction of memorized training data by an adversary falls under OWASP ML03 (Model Inversion Attack); the paper quantifies how readily inserted training data can be recovered from model outputs.
Explicitly designed as a testbed for membership inference attacks: because insertions are randomized, each model comes with ground-truth membership labels, enabling controlled MIA evaluation. The paper names membership inference as a primary use case.
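To make the MIA evaluation concrete, here is a minimal sketch of how ground-truth membership labels support a controlled benchmark. It assumes a simple loss-based attack (texts with lower model loss are scored as more likely members) and measures attack success as AUC over member/non-member loss pairs. The function name and the toy loss values are illustrative, not from the paper; a real evaluation would compute per-example losses from a Hubble checkpoint.

```python
def mia_auc(member_losses, nonmember_losses):
    """AUC of a loss-threshold membership inference attack.

    Scores each example by negative loss (lower loss = more
    member-like) and returns the probability that a random
    member outranks a random non-member, counting ties as 0.5.
    """
    pairs = 0
    wins = 0.0
    for m in member_losses:
        for n in nonmember_losses:
            pairs += 1
            if m < n:          # member has lower loss: attack ranks it correctly
                wins += 1.0
            elif m == n:       # tie
                wins += 0.5
    return wins / pairs

# Toy example: inserted (member) texts tend to have lower loss
# than held-out (non-member) texts, so AUC is well above 0.5.
members = [1.2, 0.9, 1.5, 1.1]       # hypothetical per-example losses
nonmembers = [2.0, 1.8, 1.4, 2.3]
auc = mia_auc(members, nonmembers)   # 15 of 16 pairs ranked correctly -> 0.9375
```

An AUC near 0.5 would mean the attack cannot distinguish inserted from held-out text; the controlled insertions in Hubble make this separation directly measurable without guessing at training-set membership.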