
PerProb: Indirectly Evaluating Memorization in Large Language Models

Yihan Liao 1, Jacky Keung 1, Xiaoxue Ma 2, Jingyu Zhang 1, Yicheng Sun 1

0 citations · 38 references · Asia-Pacific Software Engineer...


Published on arXiv · 2512.14600

Membership Inference Attack

OWASP ML Top 10 — ML04

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

PerProb achieves average F1-scores of approximately 70% across four MIA attack patterns on classification tasks, revealing substantial privacy risks in LLMs, especially at smaller scales.

PerProb

Novel technique introduced


The rapid advancement of Large Language Models (LLMs) has been driven by extensive datasets that may contain sensitive information, raising serious privacy concerns. One notable threat is the Membership Inference Attack (MIA), where adversaries infer whether a specific sample was used in model training. However, the true impact of MIA on LLMs remains unclear due to inconsistent findings and the lack of standardized evaluation methods, further complicated by the undisclosed nature of many LLM training sets. To address these limitations, we propose PerProb, a unified, label-free framework for indirectly assessing LLM memorization vulnerabilities. PerProb evaluates changes in perplexity and average log probability between data generated by victim and adversary models, enabling an indirect estimation of training-induced memory. Compared with prior MIA methods that rely on member/non-member labels or internal access, PerProb is independent of model and task, and applicable in both black-box and white-box settings. Through a systematic classification of MIA into four attack patterns, we evaluate PerProb's effectiveness across five datasets, revealing varying memory behaviors and privacy risks among LLMs. Additionally, we assess mitigation strategies, including knowledge distillation, early stopping, and differential privacy, demonstrating their effectiveness in reducing data leakage. Our findings offer a practical and generalizable framework for evaluating and improving LLM privacy.


Key Contributions

  • PerProb: a label-free, model-agnostic framework using perplexity and average log probability shifts to indirectly assess LLM memorization without requiring member/non-member ground-truth labels
  • Systematic classification of MIA into four attack patterns across black-box and white-box settings, evaluated across five datasets and multiple LLMs
  • Empirical evaluation of mitigation strategies (knowledge distillation, early stopping, differential privacy) showing their effectiveness in reducing membership inference leakage
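The two signals the contributions mention, perplexity and average log probability, are standard quantities derived from a model's per-token log probabilities. As a minimal sketch (the function names and the toy inputs are mine, not the paper's; PerProb compares shifts in these quantities between victim- and adversary-generated data, which is not reproduced here):

```python
import math

def avg_log_prob(token_logprobs):
    """Average per-token log probability of a sequence."""
    return sum(token_logprobs) / len(token_logprobs)

def perplexity(token_logprobs):
    """Perplexity is the exponential of the negative average log probability."""
    return math.exp(-avg_log_prob(token_logprobs))

# Toy example: a model assigning each of 4 tokens probability 0.5
lp = [math.log(0.5)] * 4
print(round(perplexity(lp), 4))  # → 2.0
```

Lower perplexity (equivalently, higher average log probability) on a sample indicates the model finds it more predictable, which is the raw signal memorization-oriented analyses build on.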

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is PerProb, a framework for evaluating membership inference attacks (determining whether a specific data sample was in an LLM's training set), the textbook ML04 threat.
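For context, the classic baseline this threat model builds on is a simple perplexity threshold: samples the model is unusually confident on are flagged as likely training members. A minimal sketch of that baseline (the threshold and toy probabilities are illustrative assumptions; this is not PerProb itself, which avoids member/non-member labels):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log probabilities."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def infer_membership(token_logprobs, threshold):
    """Classic baseline MIA: flag a sample as a training member
    when the model's perplexity on it falls below a threshold."""
    return perplexity(token_logprobs) < threshold

# A memorized sample tends to receive high token probabilities (low perplexity)
member_lp = [math.log(0.9)] * 8      # perplexity ~ 1.11
nonmember_lp = [math.log(0.3)] * 8   # perplexity ~ 3.33
print(infer_membership(member_lp, threshold=2.0))     # → True
print(infer_membership(nonmember_lp, threshold=2.0))  # → False
```

Calibrating such a threshold normally requires labeled member/non-member data, which is exactly the requirement PerProb is designed to drop.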


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, white_box, inference_time
Datasets
five unspecified datasets (body truncated)
Applications
language model privacy evaluation, text generation, text classification