defense · arXiv · Nov 13, 2025
Qinfeng Li, Miao Pan, Jintao Chen et al. · Zhejiang University · Ningbo Global Innovation Center · Ant Group +1 more
Defends open-source LLMs from unauthorized model merging by disrupting Linear Mode Connectivity between homologous model weights
Model Theft · nlp
Model merging has emerged as an efficient technique for extending large language models (LLMs) by integrating specialized expert models. However, it also introduces a new threat: model merging stealing, where free-riders exploit open-source models through unauthorized merging. Existing defense mechanisms fail to provide effective protection; in particular, we identify three critical protection properties that no existing method satisfies simultaneously: (1) proactively preventing unauthorized merging; (2) remaining compatible with general open-source settings; (3) achieving high security with negligible performance loss. To address these issues, we propose MergeBarrier, a plug-and-play defense that proactively prevents unauthorized merging. Its core design disrupts the Linear Mode Connectivity (LMC) between the protected model and its homologous counterparts, thereby eliminating the low-loss path that effective model merging requires. Extensive experiments show that MergeBarrier prevents model merging stealing with negligible accuracy loss.
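For intuition, the sketch below is a minimal, hypothetical Python illustration (not the paper's MergeBarrier code; the names interpolate_state_dicts and lmc_loss_curve and all parameters are assumptions) of the Linear Mode Connectivity property that weight-interpolation-based merging relies on: if the loss stays low along the straight line between two homologous checkpoints, merging tends to work, whereas a defense like MergeBarrier aims to raise a loss barrier on that path.

```python
# Hypothetical sketch: probe Linear Mode Connectivity (LMC) between two
# homologous models by evaluating loss along the linear interpolation of
# their weights. Weight-averaging-style merging implicitly assumes this
# path stays low-loss; disrupting it is what MergeBarrier targets.
import copy
import torch


def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Linearly interpolate two state dicts: (1 - alpha) * A + alpha * B."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}


@torch.no_grad()
def lmc_loss_curve(model_a, model_b, loss_fn, data_loader, steps=5):
    """Evaluate loss at evenly spaced points on the linear path between
    model_a and model_b. A flat, low curve indicates LMC (merging-friendly);
    a pronounced barrier in the middle indicates LMC has been disrupted."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for i in range(steps + 1):
        alpha = i / steps
        probe.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha))
        total, n = 0.0, 0
        for inputs, targets in data_loader:
            total += loss_fn(probe(inputs), targets).item()
            n += 1
        losses.append((alpha, total / max(n, 1)))
    return losses
```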
llm · transformer