Inhibitory Attacks on Backdoor-based Fingerprinting for Large Language Models

Hang Fu , Wanli Peng , Yinghan Zhou , Jiaxuan Wu , Juan Wen , Yiming Xue

0 citations · 25 references · arXiv

Published on arXiv: 2601.04261

Model Theft

OWASP ML Top 10 — ML05

Model Theft

OWASP LLM Top 10 — LLM10

Key Finding

TFA and SVA effectively inhibit backdoor-based fingerprint responses across state-of-the-art LLM fingerprinting methods in ensemble scenarios while fully preserving ensemble task performance, outperforming prior attack baselines.

Token Filter Attack (TFA) / Sentence Verification Attack (SVA)

Novel technique introduced


The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property protection. Backdoor-based LLM fingerprinting has emerged as a promising solution to this challenge. Meanwhile, LLM ensembling, a low-cost multi-model collaboration technique that combines diverse LLMs to leverage their complementary strengths, has gained significant attention and practical adoption. Unfortunately, the vulnerability of existing LLM fingerprinting in the ensemble scenario remains unexplored. To comprehensively assess the robustness of LLM fingerprinting, this paper proposes two novel fingerprinting attack methods: the token filter attack (TFA) and the sentence verification attack (SVA). At each decoding step, TFA selects the next token from a unified token set constructed by a token filter mechanism; SVA filters out fingerprint responses through a sentence verification mechanism based on perplexity and voting. Experiments show that the proposed methods effectively suppress fingerprint responses while maintaining ensemble performance, outperforming state-of-the-art attack methods. These findings underscore the need for more robust LLM fingerprinting.


Key Contributions

  • Reveals critical vulnerabilities in backdoor-based LLM fingerprinting when models are deployed in ensemble scenarios
  • Proposes Token Filter Attack (TFA), which constructs a unified token set via pairwise intersections across ensemble models at each decoding step to suppress fingerprint triggers
  • Proposes Sentence Verification Attack (SVA), which uses perplexity-based mutual verification and voting to filter out fingerprint responses while preserving ensemble output quality
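The two mechanisms above can be illustrated with a minimal sketch. Note the toy interfaces here are assumptions for illustration: the model probability dicts, the top-k size `k`, the perplexity callable `ppl`, and the `threshold` value are hypothetical stand-ins, not the paper's actual implementation or parameters.

```python
from itertools import combinations

def unified_token_set(topk_sets):
    """TFA sketch: union of pairwise intersections of the models' top-k
    token sets. A fingerprint trigger's rare target token typically ranks
    highly in only the backdoored model, so it never survives any
    pairwise intersection and is filtered out."""
    unified = set()
    for a, b in combinations(topk_sets, 2):
        unified |= set(a) & set(b)
    return unified

def tfa_next_token(model_probs, k=3):
    """Pick the next token from the unified set by summed probability.
    model_probs: list of {token: prob} dicts, one per ensemble model."""
    topk_sets = [sorted(p, key=p.get, reverse=True)[:k] for p in model_probs]
    allowed = unified_token_set(topk_sets)
    if not allowed:  # degenerate case: fall back to the union of top-k sets
        allowed = set().union(*topk_sets)
    return max(allowed, key=lambda t: sum(p.get(t, 0.0) for p in model_probs))

def sva_filter(responses, ppl, threshold=50.0):
    """SVA sketch: drop any response that a majority of the *other* models
    score as high-perplexity, since a fingerprint response looks anomalous
    to peer models. responses: list of (model_id, text);
    ppl(model_id, text) -> perplexity of text under that model."""
    kept = []
    for i, (mid, text) in enumerate(responses):
        peers = [p for j, (p, _) in enumerate(responses) if j != i]
        votes = sum(1 for p in peers if ppl(p, text) > threshold)
        if votes <= len(peers) // 2:  # keep only if a peer majority accepts
            kept.append((mid, text))
    return kept
```

For example, if a backdoored model assigns high probability to a trigger token that no peer ranks in its top-k, TFA's unified set excludes it; likewise, a verbatim fingerprint response scored as high-perplexity by both peer models is voted out by `sva_filter`.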

🛡️ Threat Analysis

Model Theft

Backdoor-based LLM fingerprinting embeds triggers in the model itself to prove ownership (IP protection). The paper's attacks (TFA, SVA) defeat this ownership-verification mechanism in ensemble scenarios, directly undermining model IP protection and fingerprinting, which is the core concern of ML05.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
llm intellectual property protection, model fingerprinting, llm ensemble systems