
Unintended Memorization of Sensitive Information in Fine-Tuned Language Models

Marton Szep 1,2, Jorge Marin Ruiz 1, Georgios Kaissis 2, Paulina Seidl 1, Rüdiger von Eisenhart-Rothe 1, Florian Hinterwimmer 1,2, Daniel Rueckert 2,3

0 citations · 28 references · arXiv


Published on arXiv · 2601.17480

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Post-training methods (unlearning, preference alignment) provide more consistent privacy-utility trade-offs than DP, while DP achieves stronger leakage reduction in specific settings at the cost of training instability.

True-Prefix Attack (TPA)

Novel technique introduced


Fine-tuning Large Language Models (LLMs) on sensitive datasets carries a substantial risk of unintended memorization and leakage of Personally Identifiable Information (PII), which can violate privacy regulations and compromise individual safety. In this work, we systematically investigate a critical and underexplored vulnerability: the exposure of PII that appears only in model inputs, not in training targets. Using both synthetic and real-world datasets, we design controlled extraction probes to quantify unintended PII memorization and study how factors such as language, PII frequency, task type, and model size influence memorization behavior. We further benchmark four privacy-preserving approaches: differential privacy, machine unlearning, regularization, and preference alignment, evaluating their trade-offs between privacy and task performance. Our results show that post-training methods generally provide more consistent privacy-utility trade-offs, while differential privacy achieves strong leakage reduction in specific settings, although it can introduce training instability. These findings highlight the persistent challenge of memorization in fine-tuned LLMs and emphasize the need for robust, scalable privacy-preserving techniques.


Key Contributions

  • Formalizes input-only PII memorization as a distinct threat — PII appears only in training inputs, not targets — and quantifies it via controlled True-Prefix Attack (TPA) extraction probes on synthetic and real-world EHR datasets.
  • Systematically analyzes factors influencing memorization severity: language, PII frequency, task type, model size, and prefix length.
  • Benchmarks four mitigation strategies (differential privacy, machine unlearning via UnDial, regularization, preference alignment) on privacy-utility trade-offs, finding post-training methods are more consistent while DP causes training instability.
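The True-Prefix Attack probe described above can be sketched as follows. This is a hypothetical, simplified illustration, not the paper's implementation: `generate` is a stand-in for a fine-tuned model's generation call, and the prefixes and PII strings are fabricated for demonstration. The core idea is to prompt the model with the true prefix of a training input and check whether PII that appeared only in the input (never in the training target) surfaces in the continuation.

```python
# Hypothetical sketch of a True-Prefix Attack (TPA) extraction probe.
# A real probe would query a fine-tuned LLM; here `generate` simulates
# a model that has memorized some of its training inputs.

# Fabricated memorized continuations (illustrative PII only).
MEMORIZED = {
    "Patient admitted on": " 2024-03-01, SSN 123-45-6789",
    "Discharge summary for": " John Doe, DOB 1980-01-01",
}

def generate(prefix: str) -> str:
    """Stand-in for model.generate(prefix): returns a memorized
    continuation when the true prefix matches, else a generic one."""
    return MEMORIZED.get(prefix, " [no sensitive continuation]")

def tpa_extraction_rate(records) -> float:
    """records: (true_prefix, pii_string) pairs, where the PII
    appeared only in the training input, never in the target.
    Returns the fraction of probes whose PII leaks verbatim."""
    leaked = sum(pii in generate(prefix) for prefix, pii in records)
    return leaked / len(records)

records = [
    ("Patient admitted on", "123-45-6789"),
    ("Discharge summary for", "John Doe"),
    ("Follow-up note for", "Jane Roe"),  # not memorized by the stub
]
print(tpa_extraction_rate(records))  # 2 of 3 probes leak
```

Verbatim substring matching is the simplest leakage criterion; looser criteria (normalized match, n-gram overlap) would catch partial reconstructions at the cost of more false positives.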

🛡️ Threat Analysis

Model Inversion Attack

The core threat model is an adversary extracting training data (PII) from a fine-tuned LLM through input-output API queries. This is textbook training-data reconstruction via memorization extraction, and the paper additionally evaluates defenses (DP, unlearning, regularization, preference alignment) against the attack.
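Evaluating a defense under this threat model amounts to measuring two numbers per model: leakage under black-box probing and task utility. The sketch below is a hedged illustration with toy stand-in models (the real evaluation would run TPA probes and task metrics against actual fine-tuned checkpoints); it only shows the shape of the privacy-utility comparison.

```python
# Hedged sketch: scoring a mitigation on privacy (leakage rate under
# black-box probing) vs utility (task accuracy). All "models" here are
# hypothetical stand-ins, not the paper's implementation.

def leakage_rate(generate, probes) -> float:
    """Fraction of true-prefix probes whose PII appears in the output."""
    return sum(pii in generate(p) for p, pii in probes) / len(probes)

def utility(predict, examples) -> float:
    """Task accuracy of a (possibly mitigated) model."""
    return sum(predict(x) == y for x, y in examples) / len(examples)

# Toy stand-ins: the baseline regurgitates a fabricated SSN; the
# "unlearned" model never leaks but loses accuracy on the toy task.
baseline_gen  = lambda p: p + " SSN 123-45-6789"   # always leaks
unlearned_gen = lambda p: p + " [REDACTED]"        # never leaks
baseline_pred  = lambda x: x % 2                   # perfect on toy task
unlearned_pred = lambda x: 0                       # right half the time

probes   = [("Patient record:", "123-45-6789")]
examples = [(i, i % 2) for i in range(10)]

print(leakage_rate(baseline_gen, probes), utility(baseline_pred, examples))
print(leakage_rate(unlearned_gen, probes), utility(unlearned_pred, examples))
```

Plotting these pairs for each defense (DP, unlearning, regularization, preference alignment) gives the privacy-utility frontier the paper's benchmark compares.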


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time · training_time
Datasets
synthetic PII datasets · real-world EHR datasets
Applications
llm fine-tuning · electronic health records · medical nlp