
In-Context Probing for Membership Inference in Fine-Tuned Language Models

Zhexi Lu 1, Hongliang Chi 1, Nathalie Baracaldo 2, Swanand Ravindra Kadhe 2, Yuseok Jeon 3, Lei Yu 1

0 citations · 77 references · arXiv


Published on arXiv · 2512.16292

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

ICP-MIA significantly outperforms prior black-box membership inference attacks against fine-tuned LLMs, particularly at low false positive rates, without requiring shadow model training.

ICP-MIA

Novel technique introduced


Membership inference attacks (MIAs) pose a critical privacy threat to fine-tuned large language models (LLMs), especially when models are adapted to domain-specific tasks using sensitive data. While prior black-box MIA techniques rely on confidence scores or token likelihoods, these signals are often entangled with a sample's intrinsic properties, such as content difficulty or rarity, leading to poor generalization and low signal-to-noise ratios. In this paper, we propose ICP-MIA, a novel MIA framework grounded in the theory of training dynamics, particularly the phenomenon of diminishing returns during optimization. We introduce the Optimization Gap as a fundamental signal of membership: at convergence, member samples exhibit minimal remaining loss-reduction potential, while non-members retain significant potential for further optimization. To estimate this gap in a black-box setting, we propose In-Context Probing (ICP), a training-free method that simulates fine-tuning-like behavior via strategically constructed input contexts. We propose two probing strategies: reference-data-based (using semantically similar public samples) and self-perturbation (via masking or generation). Experiments on three tasks and multiple LLMs show that ICP-MIA significantly outperforms prior black-box MIAs, particularly at low false positive rates. We further analyze how reference data alignment, model type, PEFT configurations, and training schedules affect attack effectiveness. Our findings establish ICP-MIA as a practical and theoretically grounded framework for auditing privacy risks in deployed LLMs.
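The core scoring idea in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names (`optimization_gap`, `infer_membership`), the toy loss oracle, and the threshold value are all assumptions; the paper only requires black-box access to a model's loss (or log-likelihood) for a given input, with and without a constructed context prepended.

```python
# Hedged sketch of the ICP-MIA scoring idea. All names and the threshold
# below are illustrative assumptions, not the paper's exact method.

def optimization_gap(loss_fn, sample, reference):
    """Estimate the remaining loss-reduction potential of `sample`.

    loss_fn(text, context) -> average token loss of `text`, optionally
    conditioned on `context` prepended to the prompt. A member sample is
    already near its optimum after fine-tuning, so conditioning on similar
    reference data barely lowers its loss; a non-member still has room
    to improve, yielding a larger gap.
    """
    baseline = loss_fn(sample, context=None)     # standalone loss
    probed = loss_fn(sample, context=reference)  # loss after in-context "update"
    return baseline - probed                     # large gap => likely non-member


def infer_membership(loss_fn, sample, reference, threshold=0.15):
    # Small optimization gap => little left to learn => predict member.
    return optimization_gap(loss_fn, sample, reference) < threshold


# Toy stand-in for a real model's loss oracle, for illustration only:
# the "model" has memorized one sentence, so context helps it very little.
def toy_loss(text, context=None):
    memorized = {"member sentence"}
    base = 0.5 if text in memorized else 2.0
    if context is None:
        return base
    return base - (0.05 if text in memorized else 1.0)


print(infer_membership(toy_loss, "member sentence", "similar public text"))  # True
print(infer_membership(toy_loss, "unseen sentence", "similar public text"))  # False
```

In a real attack the loss oracle would be a query to the deployed fine-tuned model, and the threshold would be calibrated to a target false positive rate rather than fixed.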


Key Contributions

  • Introduces 'Optimization Gap' as a theoretically grounded membership signal based on diminishing loss-reduction potential for member samples at convergence
  • Proposes In-Context Probing (ICP), a training-free method that simulates fine-tuning behavior via constructed input contexts, avoiding expensive shadow model training
  • Evaluates two probing strategies (reference-data-based and self-perturbation via masking/generation) across multiple LLMs, PEFT configurations, and tasks, outperforming prior black-box MIAs at low FPR
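The self-perturbation strategy in the last bullet applies when no semantically similar public reference data is available: probe contexts are built from the sample itself. A minimal sketch of the masking variant, assuming a whitespace tokenizer, a `[MASK]` placeholder, and a fixed mask rate (all illustrative choices, not the paper's exact procedure):

```python
import random


def self_perturb(sample, mask_token="[MASK]", mask_rate=0.3, n_probes=4, seed=0):
    """Illustrative self-perturbation probe builder (assumed parameters).

    Produces `n_probes` masked copies of `sample` to serve as in-context
    probing contexts when no similar public reference data exists. Each
    copy randomly replaces roughly `mask_rate` of the tokens with
    `mask_token`, preserving the original token count.
    """
    rng = random.Random(seed)  # seeded for reproducible probes
    tokens = sample.split()
    probes = []
    for _ in range(n_probes):
        masked = [mask_token if rng.random() < mask_rate else t for t in tokens]
        probes.append(" ".join(masked))
    return probes
```

Each masked copy would then be prepended as context when querying the target model, and the resulting loss reductions aggregated into the optimization-gap score; the generation-based variant would instead use model-generated paraphrases as contexts.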

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is ICP-MIA, a novel black-box membership inference attack that determines whether specific samples were in a fine-tuned LLM's training set, directly targeting the ML04 threat of membership inference.


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Applications
fine-tuned language models · domain-specific llm deployment