
PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection

Jinhan Liu, Yibo Yang, Ruiying Lu, Piotr Piekos, Yimeng Chen, Peng Wang, Dandan Guo

0 citations · 25 references · arXiv


Published on arXiv: 2601.06827

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

PDR acts as a robust prior that consistently improves a wide range of existing membership inference methods across multiple benchmarks for LLM pre-training data detection without requiring training or additional resources.

PDR (Positional Decay Reweighting)

Novel technique introduced


Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and can usually enhance a wide range of advanced methods across multiple benchmarks.
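The core idea — replacing uniform aggregation of token-level scores with weights that decay by position — can be sketched as below. The exponential decay form and the `decay` rate are illustrative assumptions; the paper's exact weighting scheme is not specified in this summary and may differ.

```python
import numpy as np

def pdr_score(token_logprobs, decay=0.05):
    """Positional Decay Reweighting (sketch, not the paper's exact formula).

    Reweights per-token log-probabilities so early, high-entropy
    positions dominate the aggregate membership score, while noise
    from later positions is suppressed.
    """
    lp = np.asarray(token_logprobs, dtype=float)
    positions = np.arange(len(lp))
    weights = np.exp(-decay * positions)   # amplify early tokens
    weights /= weights.sum()               # normalize to a distribution
    return float(np.dot(weights, lp))

def uniform_score(token_logprobs):
    """Uniform-weight baseline that PDR replaces."""
    return float(np.mean(np.asarray(token_logprobs, dtype=float)))
```

Because PDR only changes how existing token-level scores are aggregated, it can wrap any likelihood-based detector without training or extra model queries — which is what makes it plug-and-play.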


Key Contributions

  • Empirically validates that memorization signals are concentrated in high-entropy early token positions and decay as context accumulates during autoregressive generation
  • Introduces Positional Decay Reweighting (PDR), a training-free plug-and-play framework that reweights token-level likelihood scores to amplify early-position signals and suppress late-position noise
  • Demonstrates that PDR consistently enhances a broad range of existing likelihood-based membership inference methods across multiple LLM pre-training data detection benchmarks

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is a plug-and-play framework to improve pre-training data detection — i.e., inferring whether a specific text sample was in an LLM's training set. This is membership inference applied to LLMs in a zero-shot, black-box setting.
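A minimal sketch of the kind of likelihood-based membership test PDR augments — here, a Min-K%-style score, a common black-box baseline in this literature. The parameter names and the placeholder threshold are illustrative, not values from the paper.

```python
import numpy as np

def min_k_percent_score(token_logprobs, k=0.2):
    """Min-K%-style membership score (sketch of a common baseline).

    Averages the k fraction of lowest token log-probabilities.
    Training-set members tend to have fewer surprisingly
    low-probability tokens, so a higher score suggests membership.
    """
    lp = np.sort(np.asarray(token_logprobs, dtype=float))
    n = max(1, int(len(lp) * k))
    return float(lp[:n].mean())

def is_member(token_logprobs, threshold=-4.0, k=0.2):
    # The threshold is dataset-dependent and must be calibrated;
    # -4.0 is a placeholder, not a recommended value.
    return min_k_percent_score(token_logprobs, k) > threshold
```

Only the target model's token log-probabilities are needed, which is why such attacks (and PDR on top of them) operate in the zero-shot, black-box setting described above.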


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
llm pre-training data detection, copyright compliance auditing, data privacy auditing