As If We've Met Before: LLMs Exhibit Certainty in Recognizing Seen Files
Haodong Li 1, Jingqi Zhang 2, Xiao Cheng 3, Peihua Mai 2, Haoyu Wang 1, Yan Pang 2
Published on arXiv
2511.15192
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
Achieves an average balanced accuracy of 90.1% on LLaMA 7b and 91.6% on LLaMA2 7b in detecting seen files, reaching up to 93.8% — a relative improvement of over 90% versus SOTA MIA baselines for copyright detection in LLMs.
COPYCHECK
Novel technique introduced
The remarkable language ability of Large Language Models (LLMs) stems from extensive training on vast datasets, often including copyrighted material, which raises serious concerns about unauthorized use. While Membership Inference Attacks (MIAs) offer potential solutions for detecting such violations, existing approaches face critical limitations due to LLMs' inherent overconfidence, limited access to ground-truth training data, and reliance on empirically determined thresholds. We present COPYCHECK, a novel framework that leverages uncertainty signals to detect whether copyrighted content was used in LLM training sets. Our method turns LLM overconfidence from a limitation into an asset by capturing uncertainty patterns that reliably distinguish between "seen" (training data) and "unseen" (non-training data) content. COPYCHECK further implements a two-fold strategy: (1) strategic segmentation of files into smaller snippets to reduce dependence on large-scale training data, and (2) uncertainty-guided unsupervised clustering to eliminate the need for empirically tuned thresholds. Experimental results show that COPYCHECK achieves an average balanced accuracy of 90.1% on LLaMA 7b and 91.6% on LLaMA2 7b in detecting seen files. Compared to the SOTA baseline, COPYCHECK achieves over 90% relative improvement, reaching up to 93.8% balanced accuracy. It further exhibits strong generalizability across architectures, maintaining high performance on GPT-J 6B. This work presents the first application of uncertainty for copyright detection in LLMs, offering practical tools for training data transparency.
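The uncertainty signals at the heart of this approach can be illustrated with a minimal sketch. The snippet below computes two standard per-snippet uncertainty features from a model's next-token logits — mean negative log-likelihood of the observed tokens and mean predictive entropy. These particular features are an assumption for illustration; the paper's exact uncertainty signals may differ. The intuition is that snippets the model has "seen" during training tend to score lower on both.

```python
import numpy as np

def snippet_uncertainty(logits: np.ndarray, token_ids: np.ndarray):
    """Compute two uncertainty features for one text snippet.

    logits:    (T, V) next-token logits from the LLM for T positions.
    token_ids: (T,)   the observed (ground-truth) token at each position.
    Returns (mean negative log-likelihood, mean predictive entropy);
    lower values indicate higher model confidence on this snippet.
    """
    # numerically stable softmax over the vocabulary dimension
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # NLL of the tokens the model actually had to predict
    nll = -np.log(probs[np.arange(len(token_ids)), token_ids] + 1e-12)

    # entropy of the full predictive distribution at each position
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

    return float(nll.mean()), float(entropy.mean())
```

In a full pipeline, each file would be segmented into snippets, each snippet scored this way, and the resulting feature vectors passed to the clustering step.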
Key Contributions
- COPYCHECK: an MIA framework that turns LLM overconfidence into a signal by capturing uncertainty patterns that distinguish seen (training) from unseen (non-training) content
- Strategic file segmentation into smaller snippets to reduce dependence on large-scale labeled training data
- Uncertainty-guided unsupervised clustering that eliminates the need for empirically tuned decision thresholds
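The third contribution — replacing a tuned decision threshold with unsupervised clustering — can be sketched as a two-means split over snippet uncertainty vectors, with the lower-uncertainty cluster labeled "seen". This is a simplified stand-in for the paper's clustering step (the actual algorithm and features are not specified here); the point it illustrates is that the seen/unseen boundary emerges from the data rather than from an empirically chosen cutoff.

```python
import numpy as np

def cluster_seen_unseen(features: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Two-means clustering of snippet uncertainty features.

    features: (N, D) array, one uncertainty vector per snippet
              (e.g. [mean NLL, mean entropy]).
    Returns a boolean mask: True = snippet assigned to the
    lower-uncertainty ('seen') cluster. No threshold is tuned.
    """
    # initialize the two centroids at the extremes of the first feature
    c = features[[features[:, 0].argmin(), features[:, 0].argmax()]].astype(float)
    assign = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # assign each snippet to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - c[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # recompute centroids from current assignments
        for k in range(2):
            if (assign == k).any():
                c[k] = features[assign == k].mean(axis=0)
    # the cluster with the smaller total uncertainty is treated as 'seen'
    seen_cluster = c.sum(axis=1).argmin()
    return assign == seen_cluster
```

A file-level decision could then be made by majority vote over its snippets, which is how segmentation and clustering compose into a membership call for a whole file.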
🛡️ Threat Analysis
COPYCHECK is explicitly framed as a Membership Inference Attack framework: its core task is the binary determination of whether specific files (copyrighted content) were present in an LLM's training set, which is the canonical ML04 threat. The novel contribution is leveraging LLM uncertainty/overconfidence patterns to perform this inference without relying on ground-truth training data or empirically tuned thresholds.