Pruning and Malicious Injection: A Retraining-Free Backdoor Attack on Transformer Models
Taibiao Zhao, Mingxuan Sun, Hao Wang, Xiaobing Chen, Xiangwei Zhou
Published on arXiv
arXiv:2508.10243
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
Achieves at least 99.55% attack success rate with negligible clean accuracy loss while evading four state-of-the-art backdoor defenses, outperforming retraining-dependent attacks in concealment and robustness.
HPMI (Head-wise Pruning and Malicious Injection)
Novel technique introduced
Transformer models have demonstrated exceptional performance and have become indispensable in computer vision (CV) and natural language processing (NLP) tasks. However, recent studies reveal that transformers are susceptible to backdoor attacks. Prior backdoor attack methods typically rely on retraining with clean data or altering the model architecture, both of which can be resource-intensive and intrusive. In this paper, we propose Head-wise Pruning and Malicious Injection (HPMI), a novel retraining-free backdoor attack on transformers that does not alter the model's architecture. Our approach requires only a small subset of the original data and basic knowledge of the model architecture, eliminating the need for retraining the target transformer. Technically, HPMI works by pruning the least important head and injecting a pre-trained malicious head to establish the backdoor. We provide a rigorous theoretical justification demonstrating that the implanted backdoor resists detection and removal by state-of-the-art defense techniques, under reasonable assumptions. Experimental evaluations across multiple datasets further validate the effectiveness of HPMI, showing that it 1) incurs negligible clean accuracy loss, 2) achieves at least 99.55% attack success rate, and 3) bypasses four advanced defense mechanisms. Additionally, relative to state-of-the-art retraining-dependent attacks, HPMI achieves greater concealment and robustness against diverse defense strategies, while maintaining minimal impact on clean accuracy.
Key Contributions
- HPMI: a retraining-free backdoor attack that prunes the least important attention head and injects a pre-trained single-head malicious transformer in its place, requiring no architecture change
- Theoretical analysis proving the implanted backdoor is resistant to detection and removal by state-of-the-art defenses under reasonable assumptions
- Empirical validation across CV and NLP datasets achieving ≥99.55% attack success rate with negligible clean accuracy loss while bypassing four advanced defense mechanisms
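The prune-and-inject mechanism described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's implementation: the head-importance proxy (mean attention-output magnitude on a small clean batch), the weight shapes, and the placeholder "malicious" weights are all assumptions made for the sketch. It shows only the structural point that HPMI relies on: the victim head is chosen from a small clean subset, overwritten in place, and the model's architecture (head count, shapes) is unchanged, with no retraining step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-head attention weights, one (d_model x d_head) projection per head.
# Shapes and the importance proxy are illustrative assumptions, not HPMI's exact procedure.
n_heads, d_model, d_head = 4, 16, 4
W_q = rng.normal(size=(n_heads, d_model, d_head))
W_k = rng.normal(size=(n_heads, d_model, d_head))
W_v = rng.normal(size=(n_heads, d_model, d_head))

def head_importance(x, W_q, W_k, W_v):
    """Score each head by the mean magnitude of its attention output
    on a small clean batch (a simple proxy for head importance)."""
    scores = []
    for h in range(W_q.shape[0]):
        q, k, v = x @ W_q[h], x @ W_k[h], x @ W_v[h]
        logits = q @ k.T / np.sqrt(q.shape[-1])
        attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)
        scores.append(np.abs(attn @ v).mean())
    return np.array(scores)

# Small clean subset (tokens x d_model), the only data the attack needs.
x_clean = rng.normal(size=(8, d_model))
victim = int(np.argmin(head_importance(x_clean, W_q, W_k, W_v)))

# "Malicious" head weights, pre-trained elsewhere to respond to a trigger
# (placeholder arrays of matching shape here).
W_q_mal = rng.normal(size=(d_model, d_head))
W_k_mal = rng.normal(size=(d_model, d_head))
W_v_mal = rng.normal(size=(d_model, d_head))

# Prune-and-inject: overwrite the least important head in place.
# Head count and weight shapes are untouched, and nothing is retrained.
W_q[victim], W_k[victim], W_v[victim] = W_q_mal, W_k_mal, W_v_mal
```

Because the swap preserves the parameter layout exactly, a defender inspecting the architecture or parameter count sees nothing unusual, which is the intuition behind the paper's concealment claims.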
🛡️ Threat Analysis
HPMI is a backdoor attack that embeds hidden, trigger-activated malicious behavior in transformer models by pruning the least important attention head and replacing it with a pre-trained malicious head. The model behaves normally on clean inputs and activates the backdoor only when the trigger is present — the canonical ML10 threat. Although the paper motivates the attack via a third-party model supply-chain scenario, the primary contribution is the backdoor injection technique itself (weight/head manipulation), not a supply-chain compromise method, so ML06 does not apply per the guidelines.