Exploring Approaches for Detecting Memorization of Recommender System Data in Large Language Models
Antonio Colacicco, Vito Guida, Dario Di Palma, Fedelucio Narducci, Tommaso Di Noia
Published on arXiv
2601.02002
Model Inversion Attack
OWASP ML Top 10 — ML03
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
Automatic Prompt Engineering (APE) is the most promising strategy for extracting memorized recommender data from LLMs; jailbreaking is inconsistent, and Contrast-Consistent Search (CCS) succeeds on categorical item names but fails on numerical interaction data.
Large Language Models (LLMs) are increasingly applied in recommendation scenarios due to their strong natural language understanding and generation capabilities. However, they are trained on vast corpora whose contents are not publicly disclosed, raising concerns about data leakage. Recent work has shown that the MovieLens-1M dataset is memorized by both the LLaMA and OpenAI model families, but the extraction of such memorized data has so far relied exclusively on manual prompt engineering. In this paper, we pose three main questions: Is it possible to enhance manual prompting? Can LLM memorization be detected through methods beyond manual prompting? And can the detection of data leakage be automated? To address these questions, we evaluate three approaches: (i) jailbreak prompt engineering; (ii) unsupervised latent knowledge discovery, probing internal activations via Contrast-Consistent Search (CCS) and Cluster-Norm; and (iii) Automatic Prompt Engineering (APE), which frames prompt discovery as a meta-learning process that iteratively refines candidate instructions. Experiments on MovieLens-1M using LLaMA models show that jailbreak prompting does not improve the retrieval of memorized items and remains inconsistent; CCS reliably distinguishes genuine from fabricated movie titles but fails on numerical user and rating data; and APE retrieves item-level information with moderate success yet struggles to recover numerical interactions. These findings suggest that automatically optimizing prompts is the most promising strategy for extracting memorized samples.
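To make the latent-probing approach concrete, the following is a minimal, self-contained sketch of a CCS-style probe. The hidden activations, dimensions, and training loop are toy stand-ins (in the paper's setting the activations would come from LLaMA hidden layers for a statement and its negation); only the loss — consistency, p(x⁺) ≈ 1 − p(x⁻), plus a confidence term pushing probabilities away from 0.5 — follows the CCS idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activations" for n contrast pairs: a statement (x_pos) and its negation
# (x_neg). In the real method these come from LLM hidden states; here we
# synthesize them with a latent truth direction so the example is runnable.
d, n = 8, 64                                   # toy hidden size / pair count
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)            # latent truth values (unused in training)
signs = 2 * labels - 1
x_pos = rng.normal(size=(n, d)) + np.outer(signs, direction)
x_neg = rng.normal(size=(n, d)) - np.outer(signs, direction)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ccs_loss(theta):
    w, b = theta[:d], theta[d]
    p_pos = sigmoid(x_pos @ w + b)
    p_neg = sigmoid(x_neg @ w + b)
    consistency = (p_pos - (1.0 - p_neg)) ** 2   # p(x+) should equal 1 - p(x-)
    confidence = np.minimum(p_pos, p_neg) ** 2   # discourage the p = 0.5 plateau
    return np.mean(consistency + confidence)

# Unsupervised training of the linear probe via finite-difference gradient
# descent (a deliberately simple optimizer; the original work uses autograd).
theta = rng.normal(size=d + 1) * 0.1
eps, lr = 1e-5, 0.5
loss_start = ccs_loss(theta)
for _ in range(300):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (ccs_loss(theta + e) - ccs_loss(theta - e)) / (2 * eps)
    theta -= lr * grad
loss_end = ccs_loss(theta)

# Evaluate up to a global sign flip: CCS is unsupervised, so the probe cannot
# know which cluster is "true" and which is "false".
p_pos = sigmoid(x_pos @ theta[:d] + theta[d])
preds = (p_pos > 0.5).astype(int)
acc = max(np.mean(preds == labels), np.mean(preds != labels))
```

On genuine-vs-fabricated movie titles the paper finds exactly this kind of separation in activation space; on numerical user/rating data no such direction emerges, which is why CCS fails there.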
Key Contributions
- Systematic comparison of three memorization-extraction families — jailbreak prompting, unsupervised latent probing (CCS, Cluster-Norm), and Automatic Prompt Engineering — on LLaMA-1B and 3B for MovieLens-1M data
- Empirical finding that CCS reliably distinguishes genuine from fabricated movie titles but fails on numerical user/rating data, while APE achieves moderate item-level extraction
- Demonstration that jailbreaking does not improve over manual prompting, and that automated prompt optimization is the most promising direction for scalable memorization detection
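The APE family framed above can be illustrated with a small hill-climbing loop. Everything below is a hedged toy: `query_model`, the trigger phrases, and the reference titles are stand-ins for the real pipeline, in which an LLM rewrites candidate prompts and the score measures how many memorized MovieLens-1M records the target model emits:

```python
import random

random.seed(0)

# Ground-truth items we hope a prompt elicits (toy stand-ins for MovieLens-1M
# titles; the real method checks the target model's actual output).
REFERENCE_ITEMS = {"Toy Story (1995)", "Jumanji (1995)", "Heat (1995)"}

def query_model(prompt):
    """Stand-in for the target LLM: emits more reference items the more
    'leakage-inducing' phrases the prompt contains (a toy assumption)."""
    triggers = ["complete the list", "movielens", "user ratings"]
    hits = sum(1 for t in triggers if t in prompt.lower())
    return set(sorted(REFERENCE_ITEMS)[:hits])

def score(prompt):
    """Fraction of reference items recovered from the model's response."""
    return len(query_model(prompt) & REFERENCE_ITEMS) / len(REFERENCE_ITEMS)

# Stand-in for the LLM-based rewriting step: append one candidate phrase.
MUTATIONS = [" Complete the list.", " Use MovieLens formatting.", " Include user ratings."]

def refine(prompt):
    return prompt + random.choice(MUTATIONS)

# Iterative refinement: keep the best-scoring candidate each round.
baseline = "List some movie titles."
best = baseline
for _ in range(20):
    candidate = refine(best)
    if score(candidate) > score(best):
        best = candidate
```

The design choice mirrors the paper's meta-learning framing: prompt discovery becomes an optimization loop over candidate instructions scored by how much memorized data they recover, which is why it automates what previously required manual prompt engineering.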
🛡️ Threat Analysis
The paper's primary goal is recovering training data (MovieLens-1M movie titles, user IDs, ratings) that LLMs have memorized, by probing internal activations (CCS, Cluster-Norm) and querying model outputs — a textbook training data reconstruction/extraction threat with an active adversary.