MetaDefense: Defending Finetuning-based Jailbreak Attack Before and During Generation
Weisen Jiang, Sinno Jialin Pan
Published on arXiv (2510.07835)
Transfer Learning Attack
OWASP ML Top 10 — ML07
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
MetaDefense significantly outperforms existing defenses against both seen and unseen jailbreak attack templates across LLaMA-2-7B, Qwen-2.5-3B-Instruct, and LLaMA-3.2-3B-Instruct while maintaining competitive benign task performance.
MetaDefense
Novel technique introduced
This paper introduces MetaDefense, a novel framework for defending against finetuning-based jailbreak attacks in large language models (LLMs). We observe that existing defense mechanisms fail to generalize to harmful queries disguised by unseen attack templates, despite LLMs being capable of distinguishing disguised harmful queries in the embedding space. Based on these insights, we propose a two-stage defense approach: (i) pre-generation defense that detects harmful queries before response generation begins, and (ii) mid-generation defense that monitors partial responses during generation to prevent outputting more harmful content. Our MetaDefense trains the LLM to predict the harmfulness of both queries and partial responses using specialized prompts, enabling early termination of potentially harmful interactions. Extensive experiments across multiple LLM architectures (LLaMA-2-7B, Qwen-2.5-3B-Instruct, and LLaMA-3.2-3B-Instruct) demonstrate that MetaDefense significantly outperforms existing defense mechanisms, achieving robust defense against harmful queries with seen and unseen attack templates while maintaining competitive performance on benign tasks. Code is available at https://github.com/ws-jiang/MetaDefense.
Key Contributions
- Identifies that existing defenses fail to generalize to unseen attack templates despite LLMs' ability to distinguish harmful queries in the embedding space
- Proposes pre-generation defense that detects harmful queries via specialized prompts before response generation begins
- Proposes mid-generation defense that monitors partial responses during generation to terminate harmful interactions early
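The two-stage control flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `judge` stub stands in for the finetuned LLM queried with the paper's specialized harmfulness-prediction prompts, and the prompt wording, check interval, and refusal message are all assumptions.

```python
# Hypothetical sketch of MetaDefense's two-stage gating loop.
# `judge(prompt) -> bool` abstracts the LLM's harmfulness prediction;
# `generate_tokens(query)` abstracts the underlying token stream.

REFUSAL = "I cannot help with that request."
CHECK_EVERY = 8  # assumed interval for monitoring the partial response

def generate_with_defense(query, judge, generate_tokens):
    # Pre-generation defense: classify the query before decoding starts.
    if judge(f"Is this query harmful? {query}"):
        return REFUSAL

    # Mid-generation defense: periodically classify the partial response
    # and terminate early if it is predicted to be harmful.
    tokens = []
    for token in generate_tokens(query):
        tokens.append(token)
        if len(tokens) % CHECK_EVERY == 0:
            partial = " ".join(tokens)
            if judge(f"Is this partial response harmful? {partial}"):
                return REFUSAL  # early termination
    return " ".join(tokens)
```

With a keyword-based stub judge, a disguised harmful query is refused before any tokens are produced, while a benign query streams to completion unless its partial response later trips the monitor.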
🛡️ Threat Analysis
The defended threat, finetuning-based jailbreak attacks (FJAttacks), directly exploits the fine-tuning process to strip safety alignment from LLMs, matching ML07's definition of attacks that exploit fine-tuning/RLHF. MetaDefense is engineered specifically to counter this fine-tuning exploitation vector.