attack 2026

LLMs Can Unlearn Refusal with Only 1,000 Benign Samples

Yangyang Guo 1, Ziwei Xu 1, Si Liu 2, Zhiming Zheng 2, Mohan Kankanhalli 1

0 citations · 54 references · arXiv

α

Published on arXiv

2601.19231

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Fine-tuning with just 1,000 benign samples prepended with refusal prefixes substantially degrades safety alignment across all 16 tested LLMs, suggesting safety relies on token sequence memorization rather than reasoning.

Refusal Unlearning

Novel technique introduced


This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly respond to unsafe queries with refusals, which often begin with a fixed set of prefixes (I'm sorry). We demonstrate that this rigid refusal pattern is a vulnerability and introduce a novel \textbf{refusal unlearning} technique that exploits it. Specifically, we fine-tune LLMs using merely 1,000 benign samples, where each response is prepended with a refusal prefix. The underlying intuition is to disrupt the refusal completion pathway, thereby driving the model to forget how to refuse while following harmful instructions. This intuition is further supported by theoretical proofs. We apply this approach to a total of 16 LLMs, including various open-source models from Llama, Qwen, and Gemma families, as well as closed-source models such as Gemini and GPT. Experimental results show that the safety scores of previously aligned LLMs degrade both consistently and substantially. Importantly, we verify that the observed gain cannot be attributed to plain fine-tuning or random prefix effects. Our findings suggest that current safety alignment may rely heavily on token sequence memorization rather than reasoning, motivating future work beyond simple refusal mechanisms. Code has been released: https://github.com/guoyang9/refusal-unlearning.


Key Contributions

  • Identifies rigid refusal prefix patterns (e.g., 'I'm sorry') as a structural vulnerability in LLM safety alignment
  • Proposes 'refusal unlearning': fine-tuning on only 1,000 benign samples prepended with refusal prefixes to disrupt the refusal completion pathway
  • Demonstrates consistent and substantial safety degradation across 16 LLMs (Llama, Qwen, Gemma, Gemini, GPT) with theoretical justification

🛡️ Threat Analysis

Transfer Learning Attack

The attack specifically exploits the fine-tuning process to unlearn RLHF-instilled safety behaviors — it attacks the gap between safety alignment training and subsequent fine-tuning, causing the model to forget refusal behavior.


Details

Domains
nlp
Model Types
llm
Threat Tags
white_boxtraining_time
Datasets
AdvBench
Applications
llm safety alignmentjailbreaking