
Exploring Membership Inference Vulnerabilities in Clinical Large Language Models

Alexander Nemecek 1, Zebin Yun 2, Zahra Rahmani 1, Yaniv Harel 2, Vipin Chaudhary 1, Mahmood Sharif 2, Erman Ayday 1

0 citations · 45 references · TPS-ISA


Published on arXiv · 2510.18674

Membership Inference Attack

OWASP ML Top 10 — ML04

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Clinical LLMs fine-tuned on EHR data exhibit limited but measurable membership leakage under both loss-based and paraphrase-perturbation attacks, indicating partial but incomplete privacy resistance.

Paraphrasing-based MIA

Novel technique introduced


As large language models (LLMs) become increasingly embedded in clinical decision-support, documentation, and patient-information systems, ensuring their privacy and trustworthiness has become an imperative challenge for the healthcare sector. Fine-tuning LLMs on sensitive electronic health record (EHR) data improves domain alignment but also raises the risk of exposing patient information through model behaviors. In this work-in-progress, we present an exploratory empirical study of membership inference vulnerabilities in clinical LLMs, focusing on whether adversaries can infer if specific patient records were used during model training. Using a state-of-the-art clinical question-answering model, Llemr, we evaluate both canonical loss-based attacks and a domain-motivated paraphrasing-based perturbation strategy that more realistically reflects clinical adversarial conditions. Our preliminary findings reveal limited but measurable membership leakage, suggesting that current clinical LLMs provide partial resistance yet remain susceptible to subtle privacy risks that could undermine trust in clinical AI adoption. These results motivate continued development of context-aware, domain-specific privacy evaluations and defenses, such as differential privacy fine-tuning and paraphrase-aware training, to strengthen the security and trustworthiness of healthcare AI systems.


Key Contributions

  • Empirical evaluation of canonical loss-based membership inference attacks on Llemr, a clinical QA LLM fine-tuned on EHR data
  • Novel domain-motivated paraphrasing-based perturbation strategy for MIA that more realistically models clinical adversarial conditions
  • Preliminary evidence of limited but measurable membership leakage in clinical LLMs, motivating domain-specific defenses such as differential privacy fine-tuning and paraphrase-aware training
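The paraphrasing-based perturbation idea can be sketched as a simple decision rule: if a model assigns a much lower loss to the exact wording of a record than to semantically equivalent paraphrases of it, the exact wording was likely seen during training. The function below is a minimal illustration of that rule, not the paper's implementation; the function name, the use of a mean over paraphrase losses, and the `gap_threshold` parameter are all assumptions for this sketch.

```python
def paraphrase_mia(loss_original, losses_paraphrased, gap_threshold):
    """Perturbation-style MIA sketch (hypothetical helper, not from the paper).

    Predict 'member' when the model's loss on the original record is
    substantially lower than its average loss on paraphrases of that
    record -- a gap suggesting the exact phrasing was memorized.
    """
    mean_paraphrase_loss = sum(losses_paraphrased) / len(losses_paraphrased)
    gap = mean_paraphrase_loss - loss_original
    return gap > gap_threshold
```

In practice the losses would come from querying the clinical LLM on the original note and on paraphrases produced by a rewriting model, and the threshold would be calibrated on records known to be outside the training set.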

🛡️ Threat Analysis

Membership Inference Attack

Core contribution is evaluating whether adversaries can infer if specific patient records were used during LLM training — the canonical binary membership inference question — using both loss-based attacks and a novel paraphrasing-based perturbation strategy.
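The canonical loss-based attack referenced here reduces to thresholding a model's per-record loss: members of the training set tend to receive lower loss than non-members. Below is a minimal, hedged sketch of that baseline; the function names and the averaged negative log-likelihood formulation are illustrative assumptions, not the paper's exact procedure.

```python
def sequence_loss(token_logprobs):
    """Average negative log-likelihood of a record's tokens under the model
    (token_logprobs would come from scoring the record with the target LLM)."""
    return -sum(token_logprobs) / len(token_logprobs)

def loss_mia(token_logprobs, threshold):
    """Canonical loss-based MIA sketch: predict 'member' when the model's
    loss on the record falls below a calibrated threshold."""
    return sequence_loss(token_logprobs) < threshold
```

The threshold is typically calibrated on held-out records, and attack quality is reported as AUC over the resulting member/non-member scores.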


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: black_box, grey_box, inference_time
Datasets: Llemr (clinical QA model/dataset)
Applications: clinical decision support, electronic health records, healthcare ai