PREE: Towards Harmless and Adaptive Fingerprint Editing in Large Language Models via Knowledge Prefix Enhancement
Xubin Yue 1, Zhenhua Xu 1, Wenpeng Xing 1,2, Jiahui Yu 1, Mohan Li 3, Meng Han 1,2
Published on arXiv
2509.00918
Model Theft
OWASP ML Top 10 — ML05
Model Theft
OWASP LLM Top 10 — LLM10
Key Finding
Achieves over 92% trigger accuracy with less than 0.02% average performance degradation across 19 downstream tasks while maintaining zero false positive rate and robustness against fine-tuning-based erasure attacks.
PREE (Prefix-enhanced Fingerprint Editing Framework)
Novel technique introduced
To address intellectual-property protection challenges in the commercial deployment of large language models (LLMs): existing black-box fingerprinting techniques rely on overfitting high-perplexity trigger patterns, leaving them vulnerable both to erasure via incremental fine-tuning and to feature-space defenses. Recent work has shown that model editing offers distinct advantages for fingerprinting, including significantly lower false positive rates, enhanced harmlessness, and superior robustness. Building on this foundation, this paper proposes a **Pr**efix-**e**nhanced Fingerprint **E**diting framework (PREE), which encodes copyright information into parameter offsets through dual-channel knowledge editing, achieving covert embedding of fingerprint features. Experimental results demonstrate that the proposed solution achieves over 90% trigger precision on mainstream architectures including LLaMA-3 and Qwen-2.5. The minimal parameter offset (change rate < 0.03) effectively preserves the original knowledge representation while remaining robust against incremental fine-tuning and multi-dimensional defense strategies, maintaining a zero false positive rate throughout evaluations.
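The paper does not spell out its editing objective here, but the idea of "encoding copyright information into parameter offsets while constraining old knowledge" can be illustrated with a ROME-style rank-one edit: solve for a minimal weight offset that maps a new fingerprint key to a target value while approximately preserving the layer's behavior on existing keys. The function below is a minimal NumPy sketch under that assumption; `rank_one_edit` and all variable names are illustrative, not PREE's actual implementation.

```python
import numpy as np

def rank_one_edit(W, K_old, k_new, v_new, reg=1e-4):
    """Return W + delta such that (W + delta) @ k_new == v_new,
    while delta is small on the subspace spanned by K_old.

    W      : (d_out, d_in) linear layer to edit
    K_old  : (d_in, n) columns of keys whose outputs should be preserved
    k_new  : (d_in,) fingerprint trigger key to insert
    v_new  : (d_out,) target value encoding the copyright marker
    """
    # Key covariance acts as the "old knowledge" constraint channel:
    # directions frequently used by K_old are penalized in the offset.
    C = K_old @ K_old.T + reg * np.eye(W.shape[1])
    c_inv_k = np.linalg.solve(C, k_new)
    # Residual the edit must add at the trigger key ("new knowledge" channel).
    residual = v_new - W @ k_new
    delta = np.outer(residual, c_inv_k) / (k_new @ c_inv_k)
    return W + delta
```

Because the offset is rank-one and concentrated along `C^{-1} k_new`, the trigger mapping is exact while outputs on frequently used keys move little, mirroring the harmlessness constraint described in the abstract.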
Key Contributions
- Stealthy backdoor knowledge construction algorithm using virtual scenario prefixes with dynamic prefix selection to ensure semantic coherence of injected fingerprint knowledge
- Dual-channel knowledge editing algorithm that establishes constraints for both old and new knowledge, ensuring fingerprint embedding does not degrade original model performance
- Demonstrated robustness against incremental fine-tuning erasure, gradient masking defenses, and model pruning with zero false positive rate on LLaMA-3 and Qwen-2.5
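The first contribution, dynamic selection of virtual-scenario prefixes for semantic coherence, can be sketched as picking, per query, the prefix that makes the combined trigger most fluent under some scoring function. The code below is a toy illustration; `build_fingerprints`, the prefix pool, and the length-independent scorer are all assumptions (a real system would score fluency with an LM's perplexity, which the paper's "high-perplexity trigger" critique implies).

```python
def build_fingerprints(queries, targets, prefix_pool, score_fn):
    """Pair each query with its most fluent virtual-scenario prefix.

    score_fn: lower is more fluent (e.g., an LM pseudo-perplexity).
    Returns a list of (trigger_prompt, target_marker) fingerprint pairs.
    """
    pairs = []
    for query, target in zip(queries, targets):
        # Dynamic prefix selection: minimize the fluency score of the
        # concatenated trigger so injected knowledge stays coherent.
        prefix = min(prefix_pool, key=lambda p: score_fn(f"{p} {query}"))
        pairs.append((f"{prefix} {query}", target))
    return pairs
```

In practice the selected prefix describes a fictional scenario, so the injected fact never collides with real-world knowledge, which is what keeps the false positive rate at zero on clean inputs.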
🛡️ Threat Analysis
PREE encodes copyright markers directly into model parameter offsets (model weights) to verify ownership of stolen or redistributed LLMs — this is model IP protection via weight-embedded watermarking, the defining use case of ML05.
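Ownership verification against a suspect model then reduces to a black-box check: trigger accuracy on fingerprint prompts must clear a threshold, and clean prompts must never elicit the fingerprint targets (the zero-false-positive condition). A minimal sketch, assuming a generic `model_fn(prompt) -> response` interface; the function name and threshold are illustrative, not from the paper.

```python
def verify_ownership(model_fn, fingerprint_pairs, clean_pairs, threshold=0.9):
    """Black-box ownership test for a suspect model.

    fingerprint_pairs: (trigger_prompt, expected_marker) pairs
    clean_pairs:       (benign_prompt, marker) pairs that must NOT match
    """
    hits = sum(model_fn(q) == t for q, t in fingerprint_pairs)
    trigger_acc = hits / len(fingerprint_pairs)
    # Any fingerprint marker emitted on a benign prompt is a false positive.
    false_pos = sum(model_fn(q) == t for q, t in clean_pairs)
    return trigger_acc >= threshold and false_pos == 0
```

Because verification needs only query access, the scheme works even when a stolen model is redistributed behind an API.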