
Membership Inference Attacks on Tokenizers of Large Language Models

Meng Tong 1, Yuntao Du 2, Kejiang Chen 1, Weiming Zhang 1, Ninghui Li 2

0 citations · 119 references


Published on arXiv: 2510.05699

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

The vocabulary-based MIA achieves an AUC of 0.771 on a tokenizer with 200,000 tokens (comparable in scale to OpenAI's latest release), while the frequency-based attack achieves an AUC of 0.740.


Membership inference attacks (MIAs) are widely used to assess the privacy risks associated with machine learning models. However, when these attacks are applied to pre-trained large language models (LLMs), they encounter significant challenges, including mislabeled samples, distribution shifts, and discrepancies in model size between experimental and real-world settings. To address these limitations, we introduce tokenizers as a new attack vector for membership inference. Specifically, a tokenizer converts raw text into tokens for LLMs. Unlike full models, tokenizers can be efficiently trained from scratch, thereby avoiding the aforementioned challenges. In addition, the tokenizer's training data is typically representative of the data used to pre-train LLMs. Despite these advantages, the potential of tokenizers as an attack vector remains unexplored. To this end, we present the first study on membership leakage through tokenizers and explore five attack methods to infer dataset membership. Extensive experiments on millions of Internet samples reveal the vulnerabilities in the tokenizers of state-of-the-art LLMs. To mitigate this emerging risk, we further propose an adaptive defense. Our findings highlight tokenizers as an overlooked yet critical privacy threat, underscoring the urgent need for privacy-preserving mechanisms specifically designed for them.


Key Contributions

  • First membership inference attack study targeting LLM tokenizers as a proxy for pre-training data, bypassing challenges (mislabeled samples, distribution shift, model size) that afflict LLM-level MIAs
  • Five attack methods exploiting tokenizer properties — merge similarity, vocabulary overlap, and frequency estimation — to infer dataset membership
  • Adaptive defense mechanism to mitigate privacy leakage through tokenizers, plus evidence that scaling laws may increase tokenizer vulnerability

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is five membership inference attack methods targeting LLM tokenizers to determine whether specific text samples were included in pre-training corpora — a classic MIA framing (binary yes/no membership) with a novel attack vector (tokenizer vocabulary, merge rules, and frequency statistics rather than the full model).
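To make the merge-rule signal concrete, here is a minimal, self-contained sketch (not the paper's exact attack) assuming a BPE-style tokenizer: learn merge rules from a candidate sample and score membership by the fraction of those merges that also appear in the target tokenizer's merge list. The functions `learn_bpe_merges` and `merge_overlap_score` are illustrative names, not from the paper.

```python
from collections import Counter

def learn_bpe_merges(words, num_merges):
    """Toy BPE trainer: repeatedly merge the most frequent adjacent
    symbol pair across the corpus, returning the ordered merge list."""
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break  # every word collapsed to a single symbol
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

def merge_overlap_score(target_merges, candidate_words, num_merges=50):
    """Membership signal: fraction of merges learned from the candidate
    sample that also appear in the target tokenizer's merge list.
    Higher scores suggest the sample was in the tokenizer's training data."""
    cand = learn_bpe_merges(candidate_words, num_merges)
    if not cand:
        return 0.0
    target = set(target_merges)
    return sum(m in target for m in cand) / len(cand)

# Hypothetical usage: a member sample (drawn from the tokenizer's training
# corpus) should score higher than an unrelated, non-member sample.
target = learn_bpe_merges(["hello", "hello", "help", "helmet"], 10)
member_score = merge_overlap_score(target, ["hello", "help"], 5)
outsider_score = merge_overlap_score(target, ["zzzz", "qqqq"], 5)
```

In the real attack setting, this per-sample score would be thresholded (or fed to a classifier) to produce the binary membership decision, and the threshold swept to obtain the AUC figures reported above.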


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · training_time
Datasets
Internet web crawl datasets (millions of samples)
Applications
large language model pre-training · tokenizer privacy auditing