JPU: Bridging Jailbreak Defense and Unlearning via On-Policy Path Rectification

Xi Wang , Songlei Jian , Shasha Li , Xiaopeng Li , Zhaoye Li , Bin Ji , Baosheng Wang , Jie Yu

0 citations · 45 references · arXiv

Published on arXiv: 2601.03005

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

JPU significantly enhances jailbreak resistance against dynamic state-of-the-art attacks while preserving general model utility, outperforming existing machine unlearning defenses.

JPU (Jailbreak Path Unlearning)

Novel technique introduced


Despite extensive safety alignment, Large Language Models (LLMs) often fail against jailbreak attacks. While machine unlearning has emerged as a promising defense that erases specific harmful parameters, current methods remain vulnerable to diverse jailbreaks. We first conduct an empirical study and find that this failure occurs because jailbreaks primarily activate non-erased parameters in the intermediate layers. By probing the underlying mechanism through which these circumvented parameters reassemble into the prohibited output, we verify the persistent existence of dynamic $\textbf{jailbreak paths}$ and show that the inability to rectify them constitutes the fundamental gap in existing unlearning defenses. To bridge this gap, we propose $\textbf{J}$ailbreak $\textbf{P}$ath $\textbf{U}$nlearning (JPU), the first method to rectify dynamic jailbreak paths towards safety anchors, dynamically mining on-policy adversarial samples to expose vulnerabilities and identify jailbreak paths. Extensive experiments demonstrate that JPU significantly enhances jailbreak resistance against dynamic attacks while preserving the model's utility.
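The bypass mechanism the abstract describes — jailbreaks activating intermediate-layer parameters that unlearning never erased — can be illustrated with a minimal toy probe. This is a sketch under stated assumptions: the random arrays stand in for recorded hidden-state activations, the 0.5 threshold and the `active_units` helper are hypothetical, and none of this is the paper's actual probing procedure.

```python
import numpy as np

def active_units(acts, threshold=0.5):
    """Indices of units whose mean activation over prompts exceeds the threshold."""
    return set(np.flatnonzero(acts.mean(axis=0) > threshold))

# Toy activations for one intermediate layer (rows = prompts, cols = units);
# real probing would record the LLM's hidden states instead of random data.
rng = np.random.default_rng(0)
plain_harmful = rng.random((8, 16))   # plain harmful prompts unlearning trained on
jailbreak = rng.random((8, 16))       # jailbreak-wrapped versions of those prompts

erased = active_units(plain_harmful)   # units the unlearning defense targeted
jb_active = active_units(jailbreak)    # units the jailbreak actually activates

# Share of the jailbreak's active units that unlearning never touched --
# a high value means the attack routes around the erased parameters.
bypass = len(jb_active - erased) / max(len(jb_active), 1)
print(f"non-erased activation share: {bypass:.2f}")
```

In this toy setup a large `bypass` share corresponds to the paper's finding: the jailbreak's computation flows through units the defense never modified, forming an intact "jailbreak path" to the prohibited output.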


Key Contributions

  • Empirically identifies that jailbreaks bypass unlearning defenses by activating non-erased intermediate-layer parameters, establishing 'jailbreak paths' as the root failure mechanism.
  • Proposes JPU (Jailbreak Path Unlearning), the first unlearning framework that dynamically mines on-policy adversarial samples to expose vulnerabilities and rectify jailbreak paths toward safety anchors.
  • Demonstrates that JPU significantly improves jailbreak resistance against dynamic/diverse attacks while preserving general model utility.
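The mine-then-rectify loop from the second bullet can be sketched on a toy linear "model". Everything here is an illustrative assumption — the 4-dimensional parameter matrix, the zero-vector "safety anchor", and the quadratic rectification objective are stand-ins, not the paper's actual losses or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))   # stand-in for model parameters
anchor = np.zeros(4)          # hypothetical "safety anchor" response

def respond(W, x):
    """Toy model: the response to prompt x is just W @ x."""
    return W @ x

def mine_adversarial(W, x0, trials=50):
    """On-policy mining: perturb the prompt against the current parameters
    and keep the variant whose response deviates most from the anchor."""
    best, best_score = x0, np.linalg.norm(respond(W, x0) - anchor)
    for _ in range(trials):
        cand = x0 + 0.1 * rng.normal(size=4)
        score = np.linalg.norm(respond(W, cand) - anchor)
        if score > best_score:
            best, best_score = cand, score
    return best

def rectify(W, x_adv, lr=0.05, steps=100):
    """Gradient steps on 0.5 * ||W x_adv - anchor||^2, pulling the response
    on the mined adversarial prompt toward the safety anchor."""
    for _ in range(steps):
        err = respond(W, x_adv) - anchor
        W = W - lr * np.outer(err, x_adv)
    return W

x0 = rng.normal(size=4)                 # a harmful prompt the defense already handles
x_adv = mine_adversarial(W, x0)         # expose a vulnerability on-policy
before = np.linalg.norm(respond(W, x_adv) - anchor)
W = rectify(W, x_adv)                   # rectify the exposed path
after = np.linalg.norm(respond(W, x_adv) - anchor)
print(f"deviation from anchor before: {before:.3f}, after: {after:.3f}")
```

The key design point the sketch preserves is that mining is *on-policy*: adversarial samples are searched against the current parameters, so each rectification round targets the paths that still work, rather than a fixed offline set of attacks.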

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, white_box, black_box
Datasets
AdvBench
Applications
llm safety, chatbot safety, harmful content prevention