
Window-based Membership Inference Attacks Against Fine-tuned Large Language Models

Yuetian Chen 1, Yuntao Du 1, Kaiyuan Zhang 1, Ashish Kundu 2, Charles Fleming 1, Bruno Ribeiro 3, Ninghui Li 1

0 citations · 101 references · arXiv


Published on arXiv · 2601.02751

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

WBC achieves substantially higher AUC and 2-3x better detection rates at low false positive thresholds than global-averaging MIA baselines across 11 fine-tuned LLM evaluation datasets.

WBC (Window-Based Comparison)

Novel technique introduced


Most membership inference attacks (MIAs) against Large Language Models (LLMs) rely on global signals, like average loss, to identify training data. This approach, however, dilutes the subtle, localized signals of memorization, reducing attack effectiveness. We challenge this global-averaging paradigm, positing that membership signals are more pronounced within localized contexts. We introduce WBC (Window-Based Comparison), which exploits this insight through a sliding window approach with sign-based aggregation. Our method slides windows of varying sizes across text sequences, with each window casting a binary vote on membership based on loss comparisons between target and reference models. By ensembling votes across geometrically spaced window sizes, we capture memorization patterns from token-level artifacts to phrase-level structures. Extensive experiments across eleven datasets demonstrate that WBC substantially outperforms established baselines, achieving higher AUC scores and 2-3 times improvements in detection rates at low false positive thresholds. Our findings reveal that aggregating localized evidence is fundamentally more effective than global averaging, exposing critical privacy vulnerabilities in fine-tuned LLMs.
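The mechanism described above — windows sliding over per-token losses, each casting a binary vote by comparing target and reference models, ensembled across geometrically spaced window sizes — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the specific window sizes, the mean-loss comparison per window, and the vote-fraction aggregation are assumptions made for clarity.

```python
import numpy as np

def wbc_score(target_losses, reference_losses, window_sizes=(4, 8, 16, 32)):
    """Window-Based Comparison membership score (illustrative sketch).

    target_losses / reference_losses: per-token losses for the same text
    under the fine-tuned target model and a reference model. Window sizes
    here are geometrically spaced (powers of two), an assumption standing
    in for the paper's unspecified schedule.
    """
    t = np.asarray(target_losses, dtype=float)
    r = np.asarray(reference_losses, dtype=float)
    votes = []
    for w in window_sizes:
        if w > len(t):
            continue  # skip window sizes longer than the sequence
        for i in range(len(t) - w + 1):
            # Sign-based vote: the window votes "member" (1) if the target
            # model's mean loss is lower than the reference model's there.
            votes.append(1.0 if t[i:i + w].mean() < r[i:i + w].mean() else 0.0)
    # Ensemble: fraction of member votes; higher => more likely a member.
    return float(np.mean(votes)) if votes else 0.0
```

A sequence the target model has memorized should show lower loss than the reference model in many local windows, pushing the score toward 1; thresholding this score (or ranking it for AUC) yields the membership decision.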


Key Contributions

  • Challenges the global-averaging paradigm for MIA, demonstrating that localized memorization signals are fundamentally more discriminative than average loss
  • Proposes WBC (Window-Based Comparison), a sliding-window attack that casts binary membership votes per window and ensembles across geometrically spaced window sizes
  • Demonstrates 2-3x improvements in true positive rates at low false positive thresholds across 11 datasets against fine-tuned LLMs

🛡️ Threat Analysis

Membership Inference Attack

The paper's core contribution is a novel membership inference attack (WBC) that determines whether specific text sequences were in the fine-tuning data of LLMs — the canonical ML04 threat. The sliding-window, sign-based aggregation approach is a direct improvement over prior MIA methods, and the entire paper is structured around this attack.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
grey_box, inference_time
Datasets
eleven datasets (unspecified in abstract/body excerpt)
Applications
fine-tuned large language models, language model training data privacy