Attack 2026

How to Steal Reasoning Without Reasoning Traces

Tingwei Zhang , John X. Morris , Vitaly Shmatikov



Published on arXiv (2603.07267)

Model Theft — OWASP ML Top 10 (ML05)

Model Theft — OWASP LLM Top 10 (LLM10)

Key Finding

Fine-tuning Qwen-2.5-7B-Instruct on traces inverted from GPT-5 mini outputs improves MATH500 from 56.8% to 77.6% and JEEBench from 11.7% to 42.3%, demonstrating that hiding chains of thought does not prevent capability theft.

Trace Inversion — novel technique introduced


Many large language models (LLMs) use reasoning to generate responses but do not reveal their full reasoning traces (a.k.a. chains of thought), instead outputting only final answers and brief reasoning summaries. To demonstrate that hiding reasoning traces does not prevent users from "stealing" a model's reasoning capabilities, we introduce trace inversion models that, given only the inputs, answers, and (optionally) reasoning summaries exposed by a target model, generate detailed, synthetic reasoning traces. We show that (1) traces synthesized by trace inversion have high overlap with the ground-truth reasoning traces (when available), and (2) fine-tuning student models on inverted traces substantially improves their reasoning. For example, fine-tuning Qwen-2.5-7B-Instruct on traces inverted from the answers and summaries of GPT-5 mini, a commercial black-box LLM, improves its performance from 56.8% to 77.6% on MATH500 and from 11.7% to 42.3% on JEEBench, compared to fine-tuning on just the answers and summaries.


Key Contributions

  • Introduces Trace Inversion, a framework that synthesizes detailed reasoning traces from only a target model's final answers and optional reasoning summaries — no access to weights, logits, or full chains of thought required.
  • Demonstrates that hiding reasoning traces does not prevent capability stealing: inverted traces achieve 81% token recovery and 52.79 F1 overlap with ground-truth DeepSeek-R1 traces.
  • Shows that fine-tuning student models on inverted traces yields substantial reasoning gains (e.g., Qwen-2.5-7B MATH500: 56.8% → 77.6%; JEEBench: 11.7% → 42.3%) versus training on answers and summaries alone.
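The trace-overlap numbers above (token recovery and F1) are token-level comparisons between a synthesized trace and the ground-truth trace. A minimal sketch of such a metric is below; whitespace tokenization is an assumption here, since the paper's exact tokenizer is not specified in this summary:

```python
from collections import Counter

def trace_overlap(predicted: str, reference: str) -> tuple[float, float]:
    """Return (token_recovery, f1) between a synthetic and a reference trace.

    Token recovery = fraction of reference tokens recovered (recall);
    F1 = harmonic mean of bag-of-tokens precision and recall.
    Whitespace tokenization is a simplifying assumption.
    """
    pred_tokens = predicted.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0, 0.0
    # Multiset intersection counts each shared token at most min(count) times
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0, 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return recall, 2 * precision * recall / (precision + recall)
```

Identical traces score (1.0, 1.0); fully disjoint traces score (0.0, 0.0).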

🛡️ Threat Analysis

Model Theft

Trace Inversion is a capability-stealing attack. The adversary queries the target's black-box API (receiving only answers and reasoning summaries), uses an inversion model to synthesize detailed reasoning traces from those outputs, and fine-tunes a student model on the result. This is directly analogous to model extraction, except that the stolen asset is the reasoning capability rather than the weights.
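The attack loop above can be sketched as a dataset-assembly step followed by standard supervised fine-tuning. In this sketch, `query_target` and `invert_trace` are hypothetical placeholders for the target's API and the trained inversion model, neither of which is specified here:

```python
def build_stolen_dataset(prompts, query_target, invert_trace):
    """Assemble a fine-tuning dataset of synthetic reasoning traces.

    query_target(prompt) -> (answer, summary)          # black-box API call (hypothetical)
    invert_trace(prompt, answer, summary) -> str       # inversion model (hypothetical)
    """
    dataset = []
    for prompt in prompts:
        # Only the outputs the API actually exposes: final answer + brief summary
        answer, summary = query_target(prompt)
        # Synthesize a detailed trace consistent with those outputs
        trace = invert_trace(prompt, answer, summary)
        # Student is fine-tuned to emit the trace before the final answer
        dataset.append({"prompt": prompt, "completion": trace + "\n" + answer})
    return dataset
```

The resulting records can then be fed to any off-the-shelf SFT pipeline; the key point is that the completion field contains a full synthetic trace, not just the answer and summary the defender chose to expose.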


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Datasets
MATH500, JEEBench
Applications
reasoning LLMs, mathematical reasoning, scientific reasoning, code generation