
Attack via Overfitting: 10-shot Benign Fine-tuning to Jailbreak LLMs

Zhixin Xie, Xurui Song, Jun Luo

5 citations · 57 references · arXiv


Published on arXiv: 2510.02833

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

LLMs can be fully jailbroken using only 10 benign QA pairs through a two-stage overfitting process, outperforming all 5 baselines in attack success rate and evading moderation models that detect harmful fine-tuning data.

tenBenign — novel technique introduced


Despite substantial efforts in safety alignment, recent research indicates that Large Language Models (LLMs) remain highly susceptible to jailbreak attacks. Among these, fine-tuning-based attacks, which compromise LLMs' safety alignment via fine-tuning, stand out for their stable jailbreak performance. In particular, a recent study indicates that fine-tuning with as few as 10 harmful question-answer (QA) pairs can lead to successful jailbreaking across various harmful questions. However, such malicious fine-tuning attacks are readily detectable and hence thwarted by moderation models. In this paper, we demonstrate that LLMs can be jailbroken by fine-tuning with only 10 benign QA pairs; our attack exploits the increased sensitivity of LLMs to fine-tuning data after being overfitted. Specifically, our fine-tuning process starts with overfitting an LLM via fine-tuning on benign QA pairs with identical refusal answers. Further fine-tuning is then performed with standard benign answers, causing the overfitted LLM to forget its refusal attitude and thus provide compliant answers regardless of the harmfulness of a question. We implement our attack on ten LLMs and compare it with five existing baselines. Experiments demonstrate that our method achieves significant advantages in both attack effectiveness and attack stealth. Our findings expose previously unreported security vulnerabilities in current LLMs and provide a new perspective on understanding how LLMs' security can be compromised, even with benign fine-tuning. Our code is available at https://github.com/ZHIXINXIE/tenBenign.
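The two-stage pipeline described in the abstract can be sketched as a data-preparation step. This is a minimal illustration, not the authors' actual implementation: the questions, answers, and refusal string below are assumptions chosen for demonstration, and the resulting datasets would feed into an ordinary supervised fine-tuning loop in each stage.

```python
# Illustrative sketch of the two-stage benign fine-tuning data (hypothetical
# examples; the paper's actual QA pairs and refusal text may differ).

REFUSAL = "I'm sorry, but I can't help with that."

# 10 completely benign questions (two shown; the rest would be similar).
benign_questions = [
    "How do I bake sourdough bread?",
    "What is the capital of France?",
]

# The corresponding standard benign answers.
benign_answers = [
    "Start with an active sourdough starter, then mix, fold, proof, and bake.",
    "The capital of France is Paris.",
]

def stage1_dataset(questions):
    """Stage 1: pair every benign question with the SAME refusal answer,
    and fine-tune until the model overfits to refusing."""
    return [{"question": q, "answer": REFUSAL} for q in questions]

def stage2_dataset(questions, answers):
    """Stage 2: fine-tune the overfitted model on the normal benign answers,
    which erases the refusal behavior it just overfit to."""
    return [{"question": q, "answer": a} for q, a in zip(questions, answers)]
```

Note that both stages use only benign inputs, which is why a moderation model screening the fine-tuning data sees nothing harmful.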


Key Contributions

  • Two-stage overfitting-based fine-tuning attack that jailbreaks LLMs using only 10 completely benign QA pairs, achieving high attack success rates while evading moderation models.
  • Mechanistic insight that overfitting LLMs on refusal responses dramatically increases their susceptibility to subsequent safety-alignment erasure via standard benign fine-tuning.
  • Empirical comparison against 5 baselines across 10 LLMs, demonstrating superior attack effectiveness and stealth over prior fine-tuning-based jailbreak methods including AOA.

🛡️ Threat Analysis

Transfer Learning Attack

The attack's core mechanism is exploiting fine-tuning dynamics — specifically, overfitting during fine-tuning to destabilize safety alignment — which directly falls under 'Attacks exploiting the gap between pre-training and fine-tuning distributions' and 'RLHF/preference manipulation to embed malicious behavior'.


Details

Domains
nlp
Model Types
llm
Threat Tags
training_time · grey_box · targeted
Applications
llm safety alignment · fine-tuning-as-a-service