defense 2025

EditMF: Drawing an Invisible Fingerprint for Your Large Language Models

Jiaxuan Wu, Yinghan Zhou, Wanli Peng, Yiming Xue, Juan Wen, Ping Zhong


Published on arXiv (2508.08836)

Model Theft (OWASP ML Top 10 — ML05)

Model Theft (OWASP LLM Top 10 — LLM10)

Key Finding

EditMF achieves robustness far beyond LoRA-based fingerprinting, approaching that of SFT embedding, with negligible performance degradation and minimal computational overhead on the LLaMA and Qwen families.

EditMF

Novel technique introduced


Training large language models (LLMs) is resource-intensive and expensive, making intellectual property (IP) protection for LLMs crucial. Recently, embedding fingerprints into LLMs has emerged as a prevalent method for establishing model ownership. However, existing backdoor-based methods suffer from limited stealth and efficiency. To address both issues simultaneously, we propose EditMF, a training-free fingerprinting paradigm that achieves highly imperceptible fingerprint embedding with minimal computational overhead. Ownership bits are mapped to compact, semantically coherent triples drawn from an encrypted artificial knowledge base (e.g., virtual author-novel-protagonist facts). Causal tracing localizes the minimal set of layers influencing each triple, and a zero-space update injects the fingerprint without perturbing unrelated knowledge. Verification requires only a single black-box query and succeeds when the model returns the exact pre-embedded protagonist. Empirical results on the LLaMA and Qwen families show that EditMF combines high imperceptibility with negligible model performance loss, while delivering robustness far beyond LoRA-based fingerprinting and approaching that of SFT embeddings. Extensive experiments demonstrate that EditMF is an effective and low-overhead solution for secure LLM ownership verification.
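To make the abstract's pipeline concrete, here is a minimal sketch of the two endpoints that do not touch model weights: mapping ownership bits to triples from an artificial knowledge base, and the single black-box verification check. The keyed-hash selection scheme, the `KNOWLEDGE_BASE` contents, and all function names are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import hmac

# Hypothetical artificial knowledge base of fictional
# author-novel-protagonist triples (illustrative entries).
KNOWLEDGE_BASE = [
    ("Elara Voss", "The Glass Meridian", "Corin Hale"),
    ("Dorian Ashe", "Winter's Cartographer", "Mira Senn"),
    ("Talia Brook", "The Ninth Lantern", "Jules Fenn"),
    ("Rowan Pike", "Salt and Cinder", "Ivy Marsh"),
]

def select_triples(owner_key: bytes, bits: str):
    """Map each ownership bit to a triple chosen by a keyed hash,
    so only the key holder can reproduce the fingerprint set."""
    triples = []
    for i, b in enumerate(bits):
        digest = hmac.new(owner_key, f"{i}:{b}".encode(), hashlib.sha256).digest()
        idx = digest[0] % len(KNOWLEDGE_BASE)
        triples.append(KNOWLEDGE_BASE[idx])
    return triples

def verify(model_answer: str, expected_protagonist: str) -> bool:
    """Single black-box check: ownership verification succeeds only if
    the queried model returns the exact pre-embedded protagonist."""
    return model_answer.strip() == expected_protagonist
```

In use, the owner would query a suspect model with the author/novel from each selected triple and call `verify` on its answer; the selection is deterministic given the secret key, so the fingerprint set is reproducible at verification time.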


Key Contributions

  • Training-free LLM fingerprinting using causal tracing to localize minimal layers and zero-space updates to inject fictional knowledge triples (author–novel–protagonist) as ownership bits
  • Encrypted artificial knowledge base that maps ownership bits to semantically coherent, imperceptible fictional facts, avoiding distribution shift that plagues backdoor-based methods
  • Single black-box query verification achieving robustness far beyond LoRA-based fingerprinting and approaching SFT-embedded fingerprints with negligible model performance loss
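The causal-tracing step in the first contribution can be illustrated with a toy selection routine. It assumes we already have a per-layer "restoration effect" score for a given triple (how much restoring that layer's clean hidden state recovers the correct answer, as in standard causal tracing) and picks the smallest set of layers covering most of the total effect; the scores and threshold below are made-up illustrations, not the paper's values.

```python
def localize_layers(effect_scores: dict, coverage: float = 0.9) -> list:
    """Greedily pick the smallest set of layers whose summed causal
    effect reaches `coverage` of the total effect for a triple."""
    total = sum(effect_scores.values())
    chosen, acc = [], 0.0
    # Take layers in order of decreasing causal effect.
    for layer, score in sorted(effect_scores.items(), key=lambda kv: -kv[1]):
        chosen.append(layer)
        acc += score
        if acc >= coverage * total:
            break
    return sorted(chosen)

# Illustrative per-layer scores: mid layers dominate the causal effect.
scores = {0: 0.02, 5: 0.40, 6: 0.35, 7: 0.15, 11: 0.08}
target_layers = localize_layers(scores)  # → [5, 6, 7]
```

Only the selected layers would then receive the zero-space edit, which is what keeps unrelated knowledge unperturbed.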

🛡️ Threat Analysis

Model Theft

EditMF embeds fingerprints directly into model weights (not outputs) to prove LLM ownership — a defense against model theft. Verification requires only a black-box query checking if the model returns a pre-embedded protagonist name, enabling ownership claims against stolen/redistributed models.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
LLaMA family benchmarks, Qwen family benchmarks
Applications
LLM intellectual property protection, model ownership verification