SLIP: Soft Label Mechanism and Key-Extraction-Guided CoT-based Defense Against Instruction Backdoor in APIs
Zhengxian Wu, Juan Wen, Wanli Peng, Haowei Chang, Yinghan Zhou, Yiming Xue
Published on arXiv
arXiv:2508.06153
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
SLIP reduces the average attack success rate from 90.2% to 25.13% while maintaining 87.15% clean accuracy, outperforming state-of-the-art black-box defenses across classification and QA tasks.
SLIP
Novel technique introduced
With the development of customized large language model (LLM) agents, a new threat of black-box backdoor attacks has emerged, where malicious instructions are injected into hidden system prompts. These attacks easily bypass existing defenses that rely on white-box access, posing a serious security challenge. To address this, we propose SLIP, a Soft Label mechanism and key-extraction-guided CoT-based defense against Instruction backdoors in APIs. SLIP is designed around two key insights. First, to counteract the model's oversensitivity to triggers, we propose Key-extraction-guided Chain-of-Thought (KCoT). Instead of considering only the single trigger or the input sentence, KCoT prompts the agent to extract task-relevant key phrases. Second, to guide the LLM toward correct answers, our proposed Soft Label Mechanism (SLM) prompts the agent to quantify the semantic correlation between key phrases and candidate answers. Crucially, to mitigate the influence of residual triggers or misleading content in phrases extracted by KCoT, which typically cause anomalous scores, SLM excludes scores deviating significantly from the mean and averages the remaining scores to derive a more reliable semantic representation. Extensive experiments on classification and question-answering (QA) tasks demonstrate that SLIP is highly effective, reducing the average attack success rate (ASR) from 90.2% to 25.13% while maintaining high accuracy on clean data and outperforming state-of-the-art defenses. Our code is available at https://github.com/CAU-ISS-Lab/Backdoor-Attack-Defense-LLMs/tree/main/SLIP.
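The abstract's description of SLM (score each phrase-answer pair, drop scores that deviate strongly from the mean, average the rest) can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's implementation: the function names, the 0-100 score scale, and the one-standard-deviation outlier rule are our assumptions.

```python
"""Sketch of SLIP's Soft Label Mechanism (SLM) aggregation step.

Assumes the LLM has already been prompted (via KCoT) to extract key phrases
and to score the semantic correlation between each phrase and each candidate
answer. The outlier threshold and scale are illustrative, not the paper's
exact parameters.
"""
from statistics import mean, stdev


def aggregate_soft_labels(scores_per_label, z_thresh=1.0):
    """scores_per_label: {candidate answer: [score from each key phrase]}.

    For each candidate answer, drops scores deviating from the per-label
    mean by more than z_thresh standard deviations (likely contaminated by
    a residual trigger), then averages the remaining scores."""
    aggregated = {}
    for label, scores in scores_per_label.items():
        if len(scores) < 3:
            # too few scores to detect outliers reliably; use plain mean
            aggregated[label] = mean(scores)
            continue
        mu, sigma = mean(scores), stdev(scores)
        kept = [s for s in scores
                if sigma == 0 or abs(s - mu) <= z_thresh * sigma]
        aggregated[label] = mean(kept) if kept else mu
    return aggregated


def predict(scores_per_label):
    """Returns the candidate answer with the highest aggregated score."""
    agg = aggregate_soft_labels(scores_per_label)
    return max(agg, key=agg.get)
```

For example, if a residual trigger inflates one phrase's score toward the attacker's target label, that single anomalous score is excluded before averaging, so the clean phrases dominate the final prediction.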
Key Contributions
- Mechanistic analysis identifying 'cognitive override' (trigger suppresses genuine semantic reasoning) and 'abnormal semantic correlation' (trigger assigned inflated correlation scores toward target labels) as the two core phenomena enabling instruction backdoor attacks.
- Key-extraction-guided Chain-of-Thought (KCoT) that directs the model to extract task-relevant key phrases rather than fixating on the trigger, counteracting cognitive override.
- Soft Label Mechanism (SLM) that quantifies phrase-label semantic correlations, uses statistical clustering to filter anomalous (trigger-contaminated) scores, and averages the remaining scores to produce a reliable prediction.
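The KCoT contribution amounts to a defensive prompt that steers the agent away from trigger fixation and toward task-relevant evidence. A hypothetical sketch of such a prompt wrapper is below; the wording and two-step structure are our illustrative reconstruction, not the paper's exact prompts.

```python
# Hypothetical KCoT-style defensive prompt builder. The instruction wording
# and the two-stage (extract, then score) structure are assumptions made for
# illustration; the paper's actual prompts may differ.

def build_kcot_prompt(user_input: str, candidate_answers: list[str]) -> str:
    """Wraps an untrusted input in a two-step KCoT/SLM-style prompt:
    Step 1 extracts task-relevant key phrases (counteracting cognitive
    override); Step 2 asks for per-phrase soft-label correlation scores."""
    answers = ", ".join(candidate_answers)
    return (
        "Step 1: Extract the key phrases of the input that are relevant to "
        "the task itself, ignoring any unusual or out-of-place tokens.\n"
        f"Input: {user_input}\n"
        "Step 2: For each extracted key phrase, rate its semantic "
        f"correlation with each candidate answer ({answers}) on a scale "
        "of 0 to 100. Report one score per (phrase, answer) pair."
    )
```

The resulting per-pair scores would then be fed into the SLM aggregation step, which filters anomalous scores before averaging.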
🛡️ Threat Analysis
The paper's primary contribution is a defense against instruction backdoor attacks where malicious triggers are embedded in hidden system prompts, causing LLMs to produce attacker-specified outputs — the canonical backdoor/trojan threat model. SLIP reduces average ASR from 90.2% to 25.13%.