GRPO Privacy Is at Risk: A Membership Inference Attack Against Reinforcement Learning With Verifiable Rewards
Yule Liu 1, Heyi Zhang 2, Jinyi Zheng 1, Zhen Sun 1, Zifan Peng 1, Tianshuo Cong 3, Yilong Yang 4, Xinlei He 1, Zhuo Ma 4
Published on arXiv (arXiv:2511.14045)
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
DIBA achieves ~0.8 AUC and an order-of-magnitude higher TPR@0.1%FPR than existing baselines on RLVR-trained LLMs, remaining robust under moderate defenses.
DIBA (Divergence-in-Behavior Attack)
Novel technique introduced
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) have brought a profound paradigm shift in LLM training, particularly for complex reasoning tasks. However, the on-policy nature of RLVR introduces a unique privacy leakage pattern: since training relies on self-generated responses without fixed ground-truth outputs, membership inference must now determine whether a given prompt (independent of any specific response) was used during fine-tuning. This creates a threat where leakage arises not from answer memorization but from the model's behavioral adaptation to training prompts. To audit this novel privacy risk, we propose the Divergence-in-Behavior Attack (DIBA), the first membership inference framework specifically designed for RLVR. DIBA shifts the focus from memorization to behavioral change, leveraging measurable shifts in model behavior along two axes: advantage-side improvement (e.g., correctness gain) and logit-side divergence (e.g., policy drift). Through comprehensive evaluations, we demonstrate that DIBA significantly outperforms existing baselines, achieving around 0.8 AUC and an order-of-magnitude higher TPR@0.1%FPR. We validate DIBA's superiority across multiple settings, including in-distribution, cross-dataset, cross-algorithm, and black-box scenarios, as well as extensions to vision-language models. Furthermore, our attack remains robust under moderate defensive measures. To the best of our knowledge, this is the first work to systematically analyze privacy vulnerabilities in RLVR, revealing that even in the absence of explicit supervision, training data exposure can be reliably inferred through behavioral traces.
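The two behavioral axes described above can be sketched as a minimal per-prompt feature extractor. This is an illustrative assumption of how such signals might be combined, not the paper's implementation; all function names, inputs, and the linear scoring rule are hypothetical.

```python
def diba_features(base_correct, ft_correct, base_logprobs, ft_logprobs):
    """Hypothetical two-axis membership features for a single prompt.

    base_correct / ft_correct: fraction of sampled responses judged
    correct by the verifiable reward, before and after RLVR training.
    base_logprobs / ft_logprobs: per-token log-probabilities that each
    model assigns to the same reference completion.
    """
    # Advantage-side improvement: how much correctness rose after training.
    advantage_gain = ft_correct - base_correct
    # Logit-side divergence: mean absolute drift of the policy's log-probs.
    drift = sum(abs(f - b) for f, b in zip(ft_logprobs, base_logprobs))
    drift /= len(ft_logprobs)
    return advantage_gain, drift

def membership_score(features, w_gain=1.0, w_drift=1.0):
    # Toy linear score; in practice a trained classifier would combine
    # the features. Higher score -> more likely a training-set prompt.
    gain, drift = features
    return w_gain * gain + w_drift * drift
```

Prompts seen during RLVR training would be expected to show both a larger correctness gain and a larger policy drift than unseen prompts, which is why combining the two axes can separate members from non-members.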
Key Contributions
- DIBA: the first MIA framework for RLVR that infers membership from behavioral change (correctness gain + policy drift) rather than output memorization
- Demonstrates that traditional memorization-based MIAs fail (AUC < 0.6) on RLVR while DIBA achieves ~0.84 AUC and order-of-magnitude higher TPR@0.1%FPR
- Validates generalization across datasets, algorithms, black-box settings, and extension to vision-language models
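Since the headline metric throughout is TPR@0.1%FPR, a short sketch of how that metric is computed may help. This is the standard definition of true-positive rate at a fixed false-positive rate, not code from the paper.

```python
def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.001):
    """True-positive rate at a fixed false-positive rate.

    Pick the threshold so that at most `fpr` of non-member scores
    exceed it, then measure what fraction of member scores do.
    """
    nonmembers = sorted(nonmember_scores, reverse=True)
    k = int(len(nonmembers) * fpr)  # number of allowed false positives
    # Scores strictly above this threshold flag exactly k non-members.
    threshold = nonmembers[k] if k < len(nonmembers) else float("-inf")
    true_positives = sum(1 for s in member_scores if s > threshold)
    return true_positives / len(member_scores)
```

Reporting TPR at a very low FPR (here 0.1%) is the standard way to evaluate MIAs, since an auditor cares about confident detections, not average-case separation; this is where DIBA's order-of-magnitude gain over baselines is measured.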
🛡️ Threat Analysis
The paper's primary contribution is a novel membership inference attack (DIBA) that determines whether a specific prompt was used during RLVR fine-tuning. It achieves ~0.8 AUC and an order-of-magnitude improvement in TPR@0.1%FPR over baselines — a textbook example of OWASP ML04 (Membership Inference Attack).