
Detecting RLVR Training Data via Structural Convergence of Reasoning

Hongbo Zhang 1,2, Yang Yue 3, Jianhao Yan 2, Guangsheng Bao 2, Yue Zhang 2

0 citations · 40 references · arXiv (Cornell University)


Published on arXiv

2602.11792

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Min-kNN Distance outperforms existing membership inference and RL contamination detection baselines in distinguishing RLVR-seen from unseen prompts using only black-box generation sampling

Min-kNN Distance

Novel technique introduced


Reinforcement learning with verifiable rewards (RLVR) is central to training modern reasoning models, but the undisclosed training data raises concerns about benchmark contamination. Unlike pretraining methods, which optimize models using token-level probabilities, RLVR fine-tunes models based on reward feedback from self-generated reasoning trajectories, making conventional likelihood-based detection methods less effective. We show that RLVR induces a distinctive behavioral signature: prompts encountered during RLVR training result in more rigid and similar generations, while unseen prompts retain greater diversity. We introduce Min-$k$NN Distance, a simple black-box detector that quantifies this collapse by sampling multiple completions for a given prompt and computing the average of the $k$ smallest nearest-neighbor edit distances. Min-$k$NN Distance requires no access to the reference model or token probabilities. Experiments across multiple RLVR-trained reasoning models show that Min-$k$NN Distance reliably distinguishes RL-seen examples from unseen ones and outperforms existing membership inference and RL contamination detection baselines.
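The scoring rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the completions have already been sampled from the target model, uses token-level Levenshtein distance, and reads "average of the k smallest nearest-neighbor edit distances" as: compute each completion's distance to its nearest neighbor among the other samples, then average the k smallest of those values.

```python
def edit_distance(a, b):
    """Classic Levenshtein DP over two token sequences (O(len(a)*len(b)))."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                          # deletion
                cur[j - 1] + 1,                       # insertion
                prev[j - 1] + (a[i - 1] != b[j - 1])  # substitution
            )
        prev = cur
    return prev[n]

def min_knn_distance(completions, k=3):
    """Min-kNN Distance score for one prompt, given >= 2 sampled completions.
    Lower score = more structurally similar generations = stronger evidence
    the prompt was seen during RLVR training."""
    toks = [c.split() for c in completions]  # whitespace tokenization (an assumption)
    n = len(toks)
    # nearest-neighbor edit distance for each sampled completion
    nn = [min(edit_distance(toks[i], toks[j]) for j in range(n) if j != i)
          for i in range(n)]
    # average the k smallest nearest-neighbor distances
    return sum(sorted(nn)[:k]) / k
```

Note the detector needs nothing but the sampled texts: no logits, no reference model, which is what makes it a black-box attack.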


Key Contributions

  • Identifies structural convergence (generation diversity collapse) as a distinctive behavioral signature of RLVR training that enables membership inference without access to model internals
  • Introduces Min-kNN Distance, a black-box MIA detector that samples multiple completions and computes average k-smallest nearest-neighbor edit distances to quantify reasoning rigidity
  • Demonstrates superior detection performance over existing membership inference and RL contamination detection baselines across multiple RLVR-trained reasoning models

🛡️ Threat Analysis

Membership Inference Attack

Min-kNN Distance is explicitly a membership inference method — it answers the binary question of whether a specific prompt was in the RLVR training set. The paper frames itself as improving over existing MIA baselines and targets the training data membership question directly.
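Turning the continuous score into the binary membership call requires a decision threshold. The paper does not specify one here, so the sketch below shows one plausible scheme as an assumption: calibrate the threshold on a set of known-unseen prompts so that roughly a chosen fraction of them would be misclassified (a target false-positive rate), then flag any prompt scoring below it as RLVR-seen.

```python
def membership_decision(score, calibration_scores, fpr=0.05):
    """Binary membership inference from a Min-kNN Distance score.

    score              -- Min-kNN Distance of the prompt under test
    calibration_scores -- scores of prompts known to be UNSEEN by the model
    fpr                -- target false-positive rate (illustrative assumption)

    Flags the prompt as RLVR-seen when its score falls below the low
    fpr-quantile of the unseen calibration scores.
    """
    s = sorted(calibration_scores)
    idx = min(int(fpr * len(s)), len(s) - 1)  # low-quantile index
    threshold = s[idx]
    return score < threshold
```

In practice the calibration set and target rate would be chosen by the auditor; the paper's evaluation instead reports threshold-free detection metrics against MIA baselines.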


Details

Domains
nlp, reinforcement-learning
Model Types
llm, rl
Threat Tags
black_box, training_time, inference_time
Applications
reasoning model evaluation, benchmark contamination detection, rlvr training data auditing