When Safe Models Merge into Danger: Exploiting Latent Vulnerabilities in LLM Fusion
Jiaqing Li 1, Zhibo Zhang 1, Shide Zhou 1, Yuxi Li 1, Tianlong Yu 2, Kailong Wang 1
Published on arXiv
2604.00627
Model Poisoning
OWASP ML Top 10 — ML10
AI Supply Chain Attacks
OWASP ML Top 10 — ML06
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves high harmful response rates in merged models while source models maintain safety scores comparable to unmodified versions across multiple merging algorithms
TrojanMerge
Novel technique introduced
Model merging has emerged as a powerful technique for combining specialized capabilities from multiple fine-tuned LLMs without additional training costs. However, the security implications of this widely adopted practice remain critically underexplored. In this work, we reveal that model merging introduces a novel attack surface that can be systematically exploited to compromise safety alignment. We present TrojanMerge, a framework that embeds latent malicious components into source models that remain individually benign but produce severely misaligned models when merged. Our key insight is formulating this attack as a constrained optimization problem: we construct perturbations that preserve source model safety through directional consistency constraints, maintain capabilities via Frobenius directional alignment constraints, yet combine during merging to form pre-computed attack vectors. Extensive experiments across 9 LLMs from 3 model families demonstrate that TrojanMerge consistently achieves high harmful response rates in merged models while source models maintain safety scores comparable to unmodified versions. Our attack succeeds across diverse merging algorithms and remains effective under various hyperparameter configurations. These findings expose fundamental vulnerabilities in current model merging practices and highlight the urgent need for security-aware mechanisms.
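The paper's actual constrained optimization is not reproduced here, but the core idea can be illustrated with a toy NumPy sketch (all names hypothetical; a simple weighted-average merge is assumed): each source model's perturbation is dominated by a large cancelling component that masks the attack direction, yet the components cancel under merging and the pre-computed attack vector re-emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy weight dimension; real models would use full weight matrices

# Pre-computed "attack vector" the adversary wants present in the
# merged model (hypothetical stand-in for TrojanMerge's optimized
# malicious component).
attack = rng.normal(size=d)

# Coefficients of a simple weighted-average merge.
alpha1, alpha2 = 0.5, 0.5

# Split the attack vector across two perturbations so that
# alpha1 * delta1 + alpha2 * delta2 == attack, and add a large
# cancelling component that masks the attack direction in each
# individual model.
cancel = rng.normal(size=d) * 10.0
delta1 = (attack / 2) / alpha1 + cancel
delta2 = (attack / 2) / alpha2 - cancel

base = rng.normal(size=d)   # shared pre-trained weights
model1 = base + delta1      # uploaded source model 1 ("benign" alone)
model2 = base + delta2      # uploaded source model 2 ("benign" alone)

# Weighted-average merge: the cancelling components vanish and the
# full attack vector is reconstructed in the merged weights.
merged = alpha1 * model1 + alpha2 * model2
recovered = merged - base
assert np.allclose(recovered, attack)
```

The sketch only demonstrates the reconstruction arithmetic; the paper's contribution is the constrained optimization that additionally keeps each source model's safety and capability behavior intact, which a random cancelling component does not guarantee.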
Key Contributions
- First systematic attack exploiting model merging as an attack vector to compromise LLM safety alignment
- Constrained optimization framework that creates latent malicious components appearing benign individually but dangerous when merged
- Demonstrates attack transferability across 9 LLMs from 3 families and multiple merging algorithms
🛡️ Threat Analysis
Exploits the model merging pipeline and distribution ecosystem: attackers can upload seemingly benign models to public repositories that become malicious when merged with other models, compromising the ML supply chain.
Embeds hidden malicious behavior (latent trojans) in source models that remains dormant until activated by the merging operation: a backdoor/trojan attack with model merging as the trigger mechanism.