
Win-k: Improved Membership Inference Attacks on Small Language Models

Roya Arkhmammadova , Hosein Madadi Tamar , M. Emre Gursoy



Published on arXiv: 2508.01268

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

win-k outperforms five existing MIAs (loss, lowercase, zlib, neighborhood, min-k) on the majority of SLM configurations in AUROC, TPR@1%FPR, and FPR@99%TPR, with the largest gains on the smallest models

win-k

Novel technique introduced


Small language models (SLMs) are increasingly valued for their efficiency and deployability in resource-constrained environments, making them useful for on-device, privacy-sensitive, and edge computing applications. At the same time, membership inference attacks (MIAs), which aim to determine whether a given sample was used in a model's training, are an important threat with serious privacy and intellectual property implications. In this paper, we study MIAs on SLMs. Although MIAs were shown to be effective on large language models (LLMs), they are relatively less studied on emerging SLMs, and furthermore, their effectiveness decreases as models get smaller. Motivated by this finding, we propose a new MIA called win-k, which builds on top of a state-of-the-art attack (min-k). We experimentally evaluate win-k by comparing it with five existing MIAs using three datasets and eight SLMs. Results show that win-k outperforms existing MIAs in terms of AUROC, TPR@1%FPR, and FPR@99%TPR metrics, especially on smaller models.


Key Contributions

  • Empirical demonstration that existing MIA effectiveness degrades as language model size decreases, motivating SLM-specific attack design
  • Proposes win-k, a sliding-window extension of min-k that averages log probabilities over consecutive token windows to reduce token-level variance and improve membership scores on small models
  • Comprehensive evaluation across three datasets, eight SLMs (GPT-Neo, Pythia, MobileLLM), and three metrics showing win-k outperforms five baseline MIAs, especially on the smallest models
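The sliding-window idea in the second bullet can be sketched in a few lines. This is an illustrative reconstruction, not the paper's reference implementation: the function name, the window width `w`, and the fraction `k` are assumed parameters, and the per-token log probabilities are taken as given (in practice they come from a forward pass of the target SLM).

```python
def win_k_score(token_logprobs, w=3, k=0.2):
    """Hypothetical sketch of a win-k-style membership score.

    token_logprobs: per-token log probabilities of the candidate text
    under the target model (assumed precomputed).
    w: sliding-window width (assumed parameter).
    k: fraction of lowest-scoring windows to average, as in min-k%.
    A higher (less negative) score suggests the sample is more likely
    to have been seen in training.
    """
    if len(token_logprobs) < w:
        w = len(token_logprobs)
    # Average log probability over each window of w consecutive tokens;
    # windowing smooths out single-token outliers (token-level variance).
    windows = [
        sum(token_logprobs[i:i + w]) / w
        for i in range(len(token_logprobs) - w + 1)
    ]
    # Keep the k-fraction of lowest window averages (the most
    # "surprising" spans) and average them into one score.
    windows.sort()
    n = max(1, int(len(windows) * k))
    return sum(windows[:n]) / n
```

Thresholding this score then yields the member/non-member decision, exactly as with min-k, but with window averages in place of individual token log probabilities.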

🛡️ Threat Analysis

Membership Inference Attack

Core contribution is a new membership inference attack (win-k) that determines whether a given sample was in an SLM's training set — the defining threat of ML04. Evaluated against five competing MIAs on eight SLMs with AUROC, TPR@1%FPR, and FPR@99%TPR metrics.
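The TPR@1%FPR metric used in the evaluation is worth making concrete: fix a decision threshold so that at most 1% of non-members are falsely flagged, then measure how many true members the attack still catches. A minimal pure-Python sketch (the function name and score convention, higher = predicted member, are assumptions for illustration):

```python
def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """Illustrative TPR at a fixed FPR for a membership inference attack.

    Scores follow the convention that higher means "predicted member".
    The threshold is chosen so that at most target_fpr of non-members
    score above it; the return value is the fraction of true members
    above that same threshold.
    """
    nm = sorted(nonmember_scores, reverse=True)
    allowed = int(len(nm) * target_fpr)  # false positives tolerated
    # Threshold at the (allowed+1)-th highest non-member score, so that
    # strictly greater scores yield exactly `allowed` false positives.
    threshold = nm[allowed] if allowed < len(nm) else float("-inf")
    return sum(s > threshold for s in member_scores) / len(member_scores)
```

FPR@99%TPR is the mirror image (fix a 99% detection rate, measure false alarms), and AUROC integrates over all thresholds.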


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
WikiMIA, The Pile
Applications
language modeling, small language models, on-device nlp