
On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models

Ali Al Sahili , Ali Chehab , Razane Tajeddine

0 citations · 55 references · arXiv


Published on arXiv · 2512.13352

Membership Inference Attack

OWASP ML Top 10 — ML04

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

MIA techniques show different effectiveness profiles when evaluated inside the integrated extraction pipeline than on conventional standalone MIA benchmarks, revealing gaps in their practical utility for real-world data extraction.


Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: adversaries can extract training data from an LLM by querying the model to generate a large volume of text and subsequently applying MIAs to verify whether a particular data point was included in the training set. In this study, we integrate multiple MIA techniques into the data extraction pipeline to systematically benchmark their effectiveness. We then compare their performance in this integrated setting against results from conventional MIA benchmarks, allowing us to evaluate their practical utility in real-world extraction scenarios.


Key Contributions

  • Integrates multiple MIA techniques (MIN-K%, ReCaLL, LiRA, neighborhood attacks) into a targeted training data extraction pipeline for LLMs
  • Systematically benchmarks MIA effectiveness in the extraction pipeline context versus conventional standalone MIA benchmarks
  • Evaluates the practical utility of MIAs as a verification step in real-world targeted data extraction scenarios

🛡️ Threat Analysis

Model Inversion Attack

The end goal of the studied pipeline is training data extraction — an adversary prompts the LLM to generate candidate sequences, then uses MIA to verify verbatim memorization, constituting a model inversion / memorization extraction attack.
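The two-stage structure described above (generate candidates, then filter with an MIA) can be sketched in a few lines. The function and the toy "model" and "MIA" below are illustrative stand-ins, not the paper's implementation:

```python
import random

def extract_and_verify(generate, mia_score, prompts,
                       threshold, n_samples=10):
    """Sketch of a targeted extraction pipeline: sample many candidate
    continuations per prompt, then keep only the candidates the MIA
    scores above a membership threshold."""
    candidates = [generate(p) for p in prompts for _ in range(n_samples)]
    return [c for c in candidates if mia_score(c) >= threshold]

# Toy stand-ins: a "model" that occasionally regurgitates a memorized
# string, and an "MIA" that scores exact matches against it highly.
MEMORIZED = "secret training sentence"

def toy_generate(prompt):
    return MEMORIZED if random.random() < 0.3 else prompt + " filler"

def toy_mia(text):
    return 1.0 if text == MEMORIZED else 0.0

random.seed(0)
extracted = extract_and_verify(toy_generate, toy_mia,
                               ["prompt A", "prompt B"], threshold=0.5)
print(all(c == MEMORIZED for c in extracted))
```

In the real attack, `mia_score` would be one of the benchmarked techniques applied to the model's token probabilities rather than a string comparison.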

Membership Inference Attack

The primary focus is systematically benchmarking multiple MIA techniques (MIN-K%, ReCaLL, LiRA, neighborhood attacks) to determine whether a given text sequence was part of the LLM's training data.
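Of the benchmarked attacks, MIN-K% is the simplest to illustrate: it scores a sequence by the mean log-probability of its least likely tokens, on the intuition that even the worst tokens of a memorized sequence remain relatively probable. A minimal sketch, with illustrative toy numbers (not from the paper):

```python
import numpy as np

def min_k_score(token_logprobs, k=0.2):
    """MIN-K% membership score: mean log-probability of the lowest-k
    fraction of tokens. Higher (less negative) scores suggest the
    sequence is more likely to have been in the training data."""
    logprobs = np.sort(np.asarray(token_logprobs, dtype=float))
    n = max(1, int(len(logprobs) * k))  # number of lowest tokens kept
    return logprobs[:n].mean()

# Toy example: a "memorized" sequence has uniformly high token
# probabilities, so even its worst tokens score well; an unseen
# sequence has a few very surprising tokens that drag the score down.
memorized = [-0.1, -0.2, -0.1, -0.3, -0.2]
unseen = [-0.1, -4.5, -0.2, -6.0, -0.3]
print(min_k_score(memorized) > min_k_score(unseen))  # True
```

Thresholding this score turns it into the binary member/non-member decision used as the verification step in the extraction pipeline.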


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Datasets
The Pile · Wikipedia
Applications
language modeling · training data extraction · llm privacy auditing