Early Detection and Reduction of Memorisation for Domain Adaptation and Instruction Tuning
Dean L. Slack , Noura Al Moubayed
Published on arXiv (2510.11372)
- Model Inversion Attack (OWASP ML Top 10 — ML03)
- Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)
Key Finding
N-gram-aware loss regularizer reduces verbatim memorization by up to 40% across all model families (1.4B–70B parameters) during fine-tuning with minimal evaluation performance degradation.
Novel technique introduced: N-gram-aware loss regularizer
Although large language models excel across many tasks, they can memorise training data and thereby expose private or copyrighted text. Most defences target the pre-training stage, leaving memorisation during fine-tuning, especially for domain adaptation and instruction tuning, poorly understood. We fine-tune Pythia, Llama3, and Mistral models spanning 1.4B-70B parameters on common evaluation datasets and track verbatim memorisation throughout training. We find that memorisation increases dramatically in the first few epochs, often significantly before either validation perplexity or evaluation performance is optimised. We use a simple but effective n-gram memorisation score which reliably precedes verbatim memorisation; using it as an early-stopping criterion mitigates memorisation with minimal performance loss. Further, we introduce an n-gram-aware loss regulariser and show that it reduces memorisation across all model families tested by up to 40% while minimising evaluation performance trade-offs when compared to an existing memorisation mitigation strategy. These results yield practical, scalable insights into memorisation dynamics during language model fine-tuning.
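The abstract describes an n-gram memorisation score that reliably precedes verbatim memorisation. The paper's exact definition is not reproduced in this summary; a minimal sketch, assuming the score is the fraction of n-grams in a model generation that also occur in the training text, might look like:

```python
def ngram_memorization_score(generated: str, training_text: str, n: int = 5) -> float:
    """Fraction of n-grams in a generation that also appear in the training text.

    Values near 1.0 indicate the model is reproducing training spans nearly
    verbatim; values near 0.0 indicate novel text. Whitespace tokenization is
    an illustrative simplification, not the paper's tokenizer.
    """
    gen_tokens = generated.split()
    train_tokens = training_text.split()
    # Set of all n-grams observed in the training text.
    train_ngrams = {
        tuple(train_tokens[i:i + n]) for i in range(len(train_tokens) - n + 1)
    }
    gen_ngrams = [
        tuple(gen_tokens[i:i + n]) for i in range(len(gen_tokens) - n + 1)
    ]
    if not gen_ngrams:
        return 0.0  # Generation shorter than n tokens: no n-grams to check.
    return sum(g in train_ngrams for g in gen_ngrams) / len(gen_ngrams)
```

Tracking this score per evaluation checkpoint gives the early-warning signal the paper uses before verbatim reproduction becomes measurable.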
Key Contributions
- Empirical analysis of verbatim memorization dynamics during LLM fine-tuning, showing that it rises sharply well before optimal validation perplexity or task performance is reached
- N-gram memorization score as a reliable early-warning signal and stopping criterion that mitigates memorization with minimal performance loss
- N-gram-aware loss regularizer that reduces verbatim memorization by up to 40% across Pythia, Llama3, and Mistral model families, with smaller evaluation trade-offs than an existing mitigation strategy
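The paper does not spell out the regularizer's formulation in this summary, so the following is a minimal sketch of one plausible shape: down-weighting the per-token cross-entropy on tokens flagged as completing an n-gram already seen in the training corpus, which reduces gradient pressure on memorized spans. The function name, the mask construction, and the `lam` coefficient are illustrative assumptions.

```python
import numpy as np

def ngram_aware_loss(token_losses: np.ndarray, memorized_mask: np.ndarray,
                     lam: float = 0.5) -> float:
    """N-gram-aware training loss (illustrative sketch, not the paper's exact form).

    token_losses:   shape (T,), per-token cross-entropy values.
    memorized_mask: shape (T,), True where the token completes an n-gram
                    that already occurs in the training corpus.
    lam:            in [0, 1]; how strongly memorized positions are down-weighted.
    """
    # Memorized positions contribute with weight (1 - lam); others fully.
    weights = np.where(memorized_mask, 1.0 - lam, 1.0)
    return float(np.mean(weights * token_losses))
```

With `lam = 0`, this reduces to the ordinary mean cross-entropy, so the regularization strength can be tuned continuously against the evaluation-performance trade-off the paper reports.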
🛡️ Threat Analysis
The paper directly targets LLM verbatim memorization, the mechanism by which adversaries can extract training data from fine-tuned models. It proposes two defenses (n-gram-based early stopping and an n-gram-aware loss regularizer) and evaluates them against verbatim training-data reproduction rates, exactly the attack surface exploited by training data extraction attacks.
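The early-stopping defense uses the n-gram memorization score as a stopping criterion. The paper's thresholding rule is not given in this summary; one simple sketch, assuming training halts once the score stays above a threshold for several consecutive evaluations (both `threshold` and `patience` values are hypothetical), is:

```python
def should_stop(score_history: list, threshold: float = 0.2,
                patience: int = 2) -> bool:
    """Stop fine-tuning once the n-gram memorization score has exceeded
    `threshold` for `patience` consecutive evaluation checkpoints.

    score_history: memorization scores, one per checkpoint, oldest first.
    """
    recent = score_history[-patience:]
    # Require a full window of consecutive violations, not a single spike.
    return len(recent) == patience and all(s > threshold for s in recent)
```

Because the score rises before verbatim memorization and before validation perplexity bottoms out, stopping on this signal trades a small amount of task performance for a large reduction in extractable training data.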