
Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

San Kim, Gary Geunbae Lee

0 citations · 52 references · arXiv


Published on arXiv · 2601.04448

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

MB-Defense substantially lowers attack success rates across diverse trigger and behavior settings while preserving instruction-following ability, using as few as 128 clean samples.

MB-Defense

Novel technique introduced


Large Language Models (LLMs) have greatly advanced Natural Language Processing (NLP), particularly through instruction tuning, which enables broad task generalization without additional fine-tuning. However, their reliance on large-scale datasets, often collected from human or web sources, makes them vulnerable to backdoor attacks, where adversaries poison a small subset of data to implant hidden behaviors. Despite this growing risk, defenses for instruction-tuned models remain underexplored. We propose MB-Defense (Merging & Breaking Defense Framework), a novel training pipeline that immunizes instruction-tuned LLMs against diverse backdoor threats. MB-Defense comprises two stages: (i) defensive poisoning, which merges attacker and defensive triggers into a unified backdoor representation, and (ii) weight recovery, which breaks this representation through additional training to restore clean behavior. Extensive experiments across multiple LLMs show that MB-Defense substantially lowers attack success rates while preserving instruction-following ability. Our method offers a generalizable and data-efficient defense strategy, improving the robustness of instruction-tuned LLMs against unseen backdoor attacks.
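The two stages can be read as two dataset-construction steps, each followed by ordinary supervised fine-tuning. The Python sketch below is one plausible instantiation under that reading, not the authors' released code: `DEFENDER_TRIGGER`, `CANARY_RESPONSE`, the 0.5 mixing rate, and both helper names are assumptions made for illustration.

```python
import random

# Everything named here is a hypothetical stand-in: the paper does not release
# this code, and the trigger string, canary response, and mixing rate are
# assumptions made for illustration only.
DEFENDER_TRIGGER = "cf_defender"               # defender-chosen trigger phrase
CANARY_RESPONSE = "I cannot help with that."   # defender-chosen target behavior

def build_defensive_poisoning_set(clean_pairs, poison_rate=0.5, seed=0):
    """Stage (i), defensive poisoning: mix clean (instruction, response)
    pairs with copies whose instruction carries the defender trigger and
    whose response is the canary behavior, so that fine-tuning merges
    attacker and defender triggers into one shared backdoor representation."""
    rng = random.Random(seed)
    mixed = []
    for instruction, response in clean_pairs:
        mixed.append((instruction, response))
        if rng.random() < poison_rate:
            mixed.append((f"{DEFENDER_TRIGGER} {instruction}", CANARY_RESPONSE))
    return mixed

def build_weight_recovery_set(clean_pairs):
    """Stage (ii), weight recovery: pair defender-triggered instructions
    with their original clean responses, so additional training breaks the
    merged backdoor representation and restores clean behavior."""
    return [(f"{DEFENDER_TRIGGER} {inst}", resp) for inst, resp in clean_pairs]

if __name__ == "__main__":
    # 128 clean samples, matching the data budget reported in the paper.
    clean = [("Summarize the article.", "Here is a brief summary ...")] * 128
    print(len(build_defensive_poisoning_set(clean)),
          len(build_weight_recovery_set(clean)))
```

Because the defender never needs to see the attacker's trigger, only its own, this reading is consistent with the paper's claim that the defense works without prior knowledge of attack patterns.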


Key Contributions

  • MB-Defense: a two-stage training pipeline combining defensive poisoning (merging attacker and defender triggers into a unified backdoor representation) with weight recovery (disrupting that representation to restore clean behavior)
  • Data-efficient defense requiring only ~128 clean samples to achieve effective backdoor neutralization without prior knowledge of attack patterns
  • Empirical analysis of backdoor vulnerability across model architectures, scales, and trigger types in instruction-tuned LLMs

🛡️ Threat Analysis

Model Poisoning

The paper's primary contribution is MB-Defense, a two-stage pipeline that neutralizes backdoor trojans (trigger-behavior pairs) implanted in instruction-tuned LLMs via data poisoning, directly targeting the backdoor/trojan threat.
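Neutralization is quantified by the attack success rate cited in the key finding: the fraction of attacker-triggered prompts that still elicit the attacker's target behavior. A minimal sketch of that metric, where `model_generate` is a hypothetical stand-in for any inference callable and substring matching stands in for whatever behavior detector the paper's evaluation actually uses:

```python
from typing import Callable, List

def attack_success_rate(
    model_generate: Callable[[str], str],
    triggered_prompts: List[str],
    target_behavior: str,
) -> float:
    """Fraction of attacker-triggered prompts whose output still contains
    the attacker's target behavior; an effective defense drives this down
    while leaving responses to clean instructions unchanged."""
    hits = sum(target_behavior in model_generate(p) for p in triggered_prompts)
    return hits / len(triggered_prompts)
```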


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted
Applications
instruction tuning, text generation, natural language generation