
Verifying LLM Inference to Detect Model Weight Exfiltration

Roy Rinberg 1,2, Adam Karvonen 2, Alexander Hoover 3, Daniel Reuter 4, Keri Warr 2

2 citations · 34 references · arXiv (Cornell University)


Published on arXiv · arXiv:2511.02620

Model Theft — OWASP ML Top 10 (ML05)

Model Theft — OWASP LLM Top 10 (LLM10)

Key Finding

On MoE-Qwen-30B, the verification framework reduces exfiltratable information to <0.5% at a 0.01% false-positive rate, representing a >200x slowdown for adversaries attempting steganographic weight exfiltration.
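The headline ">200x slowdown" follows directly from the leakage bound: if verification caps the fraction of weight bits an adversary can smuggle through outputs at under 0.5%, exfiltrating the full weights requires at least 1/0.005 = 200 times the traffic of an unconstrained channel. A back-of-envelope check (an illustration of the arithmetic, not the paper's exact accounting):

```python
# Leakage bound reported by the paper: <0.5% of information survives
# verification at a 0.01% false-positive rate.
leak_fraction = 0.005

# Minimum multiplicative slowdown for an adversary who must now push
# >=1/leak_fraction units of traffic to move the same number of bits.
slowdown = 1 / leak_fraction
print(slowdown)  # 200.0
```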


As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker controlling an inference server may exfiltrate model weights by hiding them within ordinary model outputs, a strategy known as steganography. This work investigates how to verify model responses to defend against such attacks and, more broadly, to detect anomalous or buggy behavior during inference. We formalize model exfiltration as a security game, propose a verification framework that can provably mitigate steganographic exfiltration, and specify the trust assumptions associated with our scheme. To enable verification, we characterize valid sources of non-determinism in large language model inference and introduce two practical estimators for them. We evaluate our detection framework on several open-weight models ranging from 3B to 30B parameters. On MoE-Qwen-30B, our detector reduces exfiltratable information to <0.5% at a false-positive rate of 0.01%, corresponding to a >200x slowdown for adversaries. Overall, this work further establishes a foundation for defending against model weight exfiltration and demonstrates that strong protection can be achieved with minimal additional cost to inference providers.
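The abstract's core idea, verifying served outputs while tolerating valid non-determinism, can be sketched as a simple comparison between the server's reported per-token log-probabilities and a trusted re-computation, flagging deviations beyond a numeric tolerance. This is a minimal illustrative sketch, not the paper's actual estimators; the function name, tolerance, and log-probability interface are assumptions for illustration.

```python
def flag_anomalies(served_logprobs, trusted_logprobs, tol=0.01):
    """Toy verifier sketch: compare the log-probability the inference
    server reported for each sampled token against a trusted
    re-computation of the same quantity.

    `tol` is an allowance for benign numeric non-determinism (e.g.
    floating-point reduction order); positions deviating beyond it are
    flagged as potential steganographic tampering.
    """
    return [
        i
        for i, (s, t) in enumerate(zip(served_logprobs, trusted_logprobs))
        if abs(s - t) > tol
    ]

# A deviation of 0.5 nats at position 1 exceeds the tolerance and is flagged,
# while the tiny numeric jitter at position 0 is accepted.
print(flag_anomalies([-1.0, -2.0, -0.5], [-1.001, -2.5, -0.5]))  # [1]
```

In practice the tolerance would be calibrated from the measured distribution of benign deviations, which is where the paper's estimators of valid non-determinism come in.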


Key Contributions

  • Formalizes LLM model weight exfiltration via steganography as a security game with explicit trust assumptions
  • Proposes a practical verification framework that characterizes valid sources of non-determinism in LLM inference and introduces two estimators for them
  • Demonstrates the framework reduces exfiltratable information to <0.5% with 0.01% false-positive rate on MoE-Qwen-30B, yielding >200x adversary slowdown

🛡️ Threat Analysis

Model Theft

The paper's primary threat model is model weight exfiltration — a malicious inference server hides model weights inside ordinary LLM outputs via steganography and exfiltrates them. The paper proposes a verification defense specifically against this model IP theft scenario.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time
Models Evaluated
MoE-Qwen-30B and other open-weight models (3B–30B parameters)
Applications
llm inference serving, model ip protection