defense 2026

A Behavioral Fingerprint for Large Language Models: Provenance Tracking via Refusal Vectors

Zhenyu Xu, Victor S. Sheng

0 citations · 27 references · arXiv (Cornell University)


Published on arXiv · 2602.09434

Model Theft (OWASP ML Top 10 — ML05)

Model Theft (OWASP LLM Top 10 — LLM10)

Key Finding

Achieves 100% accuracy in identifying the correct base model family across 76 derivative models, with fingerprints remaining detectable even after alignment-breaking attacks (cosine similarity ~0.5 vs. near-zero for unrelated families).
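The identification step behind this finding can be sketched as a nearest-family cosine match. This is a minimal illustration, not the paper's implementation: the 0.5 similarity figure comes from the summary above, while the family names, dimensionality, and thresholding logic are assumptions.

```python
import numpy as np

def identify_family(candidate, references, threshold=0.5):
    """Match a candidate model's refusal vector to the closest base-model family.

    `references` maps a family name to that family's reference refusal vector.
    Per the summary, related models retain cosine similarity around ~0.5 even
    after alignment-breaking attacks, while unrelated families score near zero,
    so a 0.5 threshold (an assumed choice) separates the two regimes.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {fam: cos(candidate, v) for fam, v in references.items()}
    best = max(scores, key=scores.get)
    # Below threshold, decline to attribute rather than guess a family.
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])
```

A candidate vector that is a noisy copy of one reference will match that family; an unrelated vector falls under the threshold and is left unattributed.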

Refusal Vector Fingerprinting

Novel technique introduced


Protecting the intellectual property of large language models (LLMs) is a critical challenge due to the proliferation of unauthorized derivative models. We introduce a novel fingerprinting framework that leverages the behavioral patterns induced by safety alignment, applying the concept of refusal vectors for LLM provenance tracking. These vectors, extracted from directional patterns in a model's internal representations when processing harmful versus harmless prompts, serve as robust behavioral fingerprints. Our contribution lies in developing a fingerprinting system around this concept and conducting extensive validation of its effectiveness for IP protection. We demonstrate that these behavioral fingerprints are highly robust against common modifications, including fine-tuning, merging, and quantization. Our experiments show that the fingerprint is unique to each model family, with low cosine similarity between independently trained models. In a large-scale identification task across 76 offspring models, our method achieves 100% accuracy in identifying the correct base model family. Furthermore, we analyze the fingerprint's behavior under alignment-breaking attacks, finding that while performance degrades significantly, detectable traces remain. Finally, we propose a theoretical framework to transform this private fingerprint into a publicly verifiable, privacy-preserving artifact using locality-sensitive hashing and zero-knowledge proofs.
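The abstract describes refusal vectors as directional patterns separating a model's internal representations on harmful versus harmless prompts. One common way to obtain such a direction is a difference of mean activations; the sketch below assumes that reading (the summary does not specify the extraction procedure, the layer, or the token position, so all of those are placeholders here).

```python
import numpy as np

def refusal_vector(harmful_acts, harmless_acts):
    """Difference-of-means sketch of a refusal vector.

    Each argument is an (n_prompts, d_model) array of hidden-state activations
    collected at some chosen layer and token position (how and where activations
    are collected is an assumption of this sketch, not specified by the paper
    summary). The direction separating the two mean activations is normalized
    to unit length so fingerprints are comparable via cosine similarity.
    """
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)
```

In a real pipeline the activation arrays would come from forward passes over a fixed probe set of harmful and harmless prompts; derivatives of the same base model are expected to yield nearly parallel directions.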


Key Contributions

  • A fingerprinting framework based on refusal vectors — directional patterns in internal activations when processing harmful vs. harmless prompts — as robust behavioral fingerprints for LLM provenance tracking.
  • Empirical validation showing 100% base model family identification accuracy across 76 derivative models, with robustness against fine-tuning, quantization, LoRA adapters, and model merging.
  • A theoretical privacy-preserving verification protocol using locality-sensitive hashing and zero-knowledge proofs to enable public fingerprint verification without exposing proprietary model parameters.
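The third contribution pairs locality-sensitive hashing with zero-knowledge proofs. The ZK component is beyond a short sketch, but the LSH half can be illustrated with the standard random-hyperplane (SimHash) scheme, under which the Hamming distance between two bit signatures estimates the cosine similarity of the underlying vectors without revealing them directly. The plane count and dimensions below are illustrative assumptions.

```python
import numpy as np

def lsh_signature(v, planes):
    """Random-hyperplane LSH: the sign pattern of v against shared random planes.

    `planes` is an (n_bits, d) matrix agreed on by prover and verifier; the
    resulting bit string can be published instead of the raw refusal vector.
    """
    return (planes @ v > 0).astype(np.uint8)

def estimated_cosine(sig_a, sig_b):
    """SimHash identity: cosine similarity ~ cos(pi * fraction of differing bits)."""
    hamming_fraction = float(np.mean(sig_a != sig_b))
    return float(np.cos(np.pi * hamming_fraction))
```

With enough planes the estimate concentrates tightly around the true cosine similarity, so a verifier holding only signatures can check whether a suspect model's fingerprint clears the attribution threshold; the zero-knowledge layer proposed in the paper would additionally prove the signature was computed honestly.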

🛡️ Threat Analysis

Model Theft

The paper proposes a model fingerprinting method that extracts behavioral signatures (refusal vectors) from internal model representations to prove ownership and track the provenance of stolen or derived LLMs. As a defense against model theft and unauthorized derivative creation, it addresses the core concern of ML05.


Details

Domains
NLP
Model Types
LLM, transformer
Threat Tags
white_box
Datasets
76 derivative LLM offspring models (fine-tunes, merges, and quantizations of various base model families)
Applications
LLM provenance tracking, model IP protection, unauthorized derivative detection