When Reasoning Leaks Membership: Membership Inference Attack on Black-box Large Reasoning Models
Ruihan Hu 1, Yu-Ming Shang 1, Wei Luo 1, Ye Tao 2, Xi Zhang 1
Published on arXiv
2601.13607
Membership Inference Attack
OWASP ML Top 10 — ML04
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
Exposing reasoning traces boosts MIA accuracy by up to 23.8%, AUC by 29.9%, and nearly doubles TPR@5%FPR compared to attacks on final outputs alone.
BlackSpectrum
Novel technique introduced
Large Reasoning Models (LRMs) have rapidly gained prominence for their strong performance on complex tasks. Many modern black-box LRMs expose their intermediate reasoning traces through APIs to improve transparency (e.g., Gemini-2.5 and Claude-sonnet). Despite their benefits, we find that these traces can leak membership signals, creating a new privacy threat even without access to the token logits used in prior attacks. In this work, we initiate the first systematic exploration of Membership Inference Attacks (MIAs) on black-box LRMs. Our preliminary analysis shows that LRMs produce confident, recall-like reasoning traces on familiar training member samples but more hesitant, inference-like traces on non-members. The representations of these traces are continuously distributed in the semantic latent space, spanning from familiar to unfamiliar samples. Building on this observation, we propose BlackSpectrum, the first membership inference attack framework targeting black-box LRMs. The key idea is to construct a recall-inference axis in the semantic latent space from representations derived from the exposed traces. By locating where a query sample falls along this axis, the attacker obtains a membership score that predicts how likely the sample is to be a member of the training data. Additionally, to address the limitations of outdated datasets ill-suited to modern LRMs, we provide two new datasets to support future research: arXivReasoning and BookReasoning. Empirically, exposing reasoning traces significantly increases the vulnerability of LRMs to membership inference attacks, leading to large gains in attack performance. Our findings highlight the need for LRM providers to balance transparency in intermediate reasoning traces with privacy preservation.
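The recall-inference axis idea can be sketched in a few lines. This is an illustrative approximation only, not the paper's implementation: it assumes the attacker has embedded reasoning traces (e.g., with any off-the-shelf sentence encoder) for some known member-like and non-member-like reference samples, takes the axis as the direction between the two cluster means, and scores a query trace by its projection onto that axis. The function name and axis construction are our assumptions.

```python
import numpy as np

def recall_inference_score(member_embs, nonmember_embs, query_emb):
    """Score a query trace embedding along a recall-inference axis.

    Hypothetical sketch: the axis points from the mean of non-member
    ("inference-like") trace embeddings to the mean of member
    ("recall-like") trace embeddings. The paper's actual axis
    construction may differ.
    """
    axis = member_embs.mean(axis=0) - nonmember_embs.mean(axis=0)
    axis = axis / np.linalg.norm(axis)
    # Project the query onto the axis; larger values mean the trace
    # looks more recall-like, i.e., more member-like.
    return float(np.dot(query_emb, axis))

# Toy usage with synthetic "embeddings" standing in for encoded traces.
rng = np.random.default_rng(0)
members = rng.normal(loc=1.0, scale=0.1, size=(20, 8))
nonmembers = rng.normal(loc=-1.0, scale=0.1, size=(20, 8))
score = recall_inference_score(members, nonmembers, np.ones(8))
```

Thresholding such a score (or calibrating it against the reference distributions) then yields a binary membership prediction.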
Key Contributions
- First systematic MIA framework (BlackSpectrum) for black-box LRMs that constructs a recall-inference axis in semantic latent space from exposed reasoning traces to produce membership scores without logit access.
- Empirical finding that LRMs produce recall-like traces on training members and hesitant inference-like traces on non-members, enabling membership distinction via trace semantics.
- Two new evaluation datasets (arXivReasoning and BookReasoning) tailored to modern LRMs to support future MIA research.
🛡️ Threat Analysis
The paper's primary contribution is BlackSpectrum, a membership inference attack framework that determines whether a specific data point was in an LRM's training set by analyzing the semantic properties of exposed reasoning traces, making it a direct ML04 attack.