Beyond Surface Alignment: Rebuilding LLMs Safety Mechanism via Probabilistically Ablating Refusal Direction
Yuanbo Xie , Yingjie Zhang , Tianyun Liu , Duohe Ma , Tingwen Liu
Published on arXiv (2509.15202)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
DeepRefusal reduces attack success rates by approximately 95% across four open-source LLM families and six representative jailbreak attacks while maintaining model utility.
DeepRefusal
Novel technique introduced
Jailbreak attacks pose persistent threats to large language models (LLMs). Current safety alignment methods have attempted to address these issues, but they suffer from two significant limitations: insufficient safety alignment depth and fragile internal defense mechanisms. These limitations leave models vulnerable to adversarial attacks such as prefilling and refusal direction manipulation. We introduce DeepRefusal, a robust safety alignment framework that overcomes these issues. DeepRefusal forces the model to dynamically rebuild its refusal mechanisms from jailbreak states. This is achieved by probabilistically ablating the refusal direction across layers and token depths during fine-tuning. Our method not only defends against prefilling and refusal direction attacks but also demonstrates strong resilience against other unseen jailbreak strategies. Extensive evaluations on four open-source LLM families and six representative attacks show that DeepRefusal reduces attack success rates by approximately 95%, while maintaining model capabilities with minimal performance degradation.
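The "refusal direction" the abstract refers to is, in prior work, a single direction in the residual stream whose removal suppresses refusal behavior. A minimal sketch of that core ablation operation (not the authors' implementation; the function name and toy vectors are illustrative) projects the refusal component out of a hidden state:

```python
import numpy as np

def ablate_refusal_direction(hidden: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of a hidden state along the refusal direction.

    hidden:      (hidden_dim,) residual-stream activation
    refusal_dir: (hidden_dim,) refusal direction (need not be normalized)
    """
    r_hat = refusal_dir / np.linalg.norm(refusal_dir)  # unit refusal direction
    return hidden - (hidden @ r_hat) * r_hat           # project out that component

# Toy check: after ablation, the state is orthogonal to the refusal direction.
h = np.array([1.0, 2.0, 3.0])
r = np.array([0.0, 1.0, 0.0])
h_ablated = ablate_refusal_direction(h, r)  # -> [1.0, 0.0, 3.0]
```

An attacker applying this projection at inference time simulates the "jailbreak state" that DeepRefusal trains the model to recover from.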
Key Contributions
- Identifies two key weaknesses in existing LLM safety alignment: insufficient alignment depth (exploited by prefilling attacks) and fragile internal defense mechanisms (exploited by refusal direction ablation attacks)
- Proposes DeepRefusal, which probabilistically ablates the refusal direction across multiple layers and token depths during fine-tuning to force the model to rebuild refusal behavior from simulated jailbreak states
- Demonstrates ~95% reduction in attack success rates across four LLM families and six representative attacks, including strong generalization to unseen jailbreak strategies
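The second contribution, probabilistic ablation across layers and token depths, could be sketched as follows. This is a hedged illustration, not the paper's code: the parameter names `p_layer` and `p_token` and the per-layer direction tensor are assumptions about how such random masking might be wired into a fine-tuning forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilistic_ablation(hidden_states: np.ndarray,
                           refusal_dirs: np.ndarray,
                           p_layer: float = 0.5,
                           p_token: float = 0.5) -> np.ndarray:
    """Sketch of probabilistic refusal-direction ablation during fine-tuning.

    hidden_states: (num_layers, num_tokens, hidden_dim) activations
    refusal_dirs:  (num_layers, hidden_dim) per-layer refusal directions
    Each layer is ablated with probability p_layer; within a selected layer,
    each token position is ablated independently with probability p_token.
    """
    out = hidden_states.copy()
    for layer, r in enumerate(refusal_dirs):
        if rng.random() >= p_layer:        # skip this layer with prob. 1 - p_layer
            continue
        r_hat = r / np.linalg.norm(r)
        token_mask = rng.random(out.shape[1]) < p_token
        proj = out[layer, token_mask] @ r_hat             # (n_selected,)
        out[layer, token_mask] -= np.outer(proj, r_hat)   # project out per token
    return out
```

Fine-tuning against such randomly corrupted states would force the model to re-derive refusal behavior at deeper layers and later token positions, rather than relying on a single early-layer direction.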