
Anchoring Refusal Direction: Mitigating Safety Risks in Tuning via Projection Constraint

Yanrui Du 1, Fenglei Fan 2, Sendong Zhao 1, Jiawei Cao 1, Qika Lin 3, Kai He 3, Ting Liu 1, Bing Qin 1, Mengling Feng 3


Published on arXiv: 2509.06795

Transfer Learning Attack

OWASP ML Top 10 — ML07

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ProCon^wu_safe significantly mitigates IFT-induced safety degradation across multiple LLMs and scenarios while preserving task performance gains, consistently outperforming strong safety baselines.

ProCon (ProCon^wu_safe)

Novel technique introduced


Instruction Fine-Tuning (IFT) has been widely adopted as an effective post-training strategy to enhance various abilities of Large Language Models (LLMs). However, prior studies have shown that IFT can significantly compromise LLMs' safety, particularly their ability to refuse malicious instructions, raising serious concerns. Recent research into the internal mechanisms of LLMs has identified a refusal direction (r-direction) in the hidden states that plays a pivotal role in governing refusal behavior. Building on this insight, our study reveals that the r-direction tends to drift during training, which we identify as one cause of the associated safety risks. To mitigate such drift, our proposed ProCon method introduces a projection-constrained loss term that regularizes the projection magnitude of each training sample's hidden state onto the r-direction. Our initial analysis shows that applying an appropriate constraint can effectively mitigate the r-direction drift and the associated safety risks, but the method remains limited by an overall performance barrier. To overcome this barrier, informed by our observation of sharp early-stage drift and a data-driven perspective, we introduce a warm-up strategy that applies strong constraints in the early stage, and we broaden the data distribution to strengthen constraint signals, yielding an enhanced ProCon method. Experimental results across various datasets, scenarios, and LLMs demonstrate that our method significantly mitigates the safety risks posed by IFT while preserving task performance gains. Even compared with strong baselines, our method consistently delivers superior overall performance. Crucially, our analysis indicates that ProCon helps stabilize the r-direction during training, and this interpretability-driven exploration of LLMs' internal mechanisms lays a solid foundation for future safety research.
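The abstract describes ProCon's core idea: a loss term that constrains how far each training sample's hidden-state projection onto the r-direction drifts during fine-tuning. The paper's exact loss form is not given here, so the following is a minimal PyTorch sketch under one plausible reading: the penalty anchors the tuned model's projection to that of a frozen reference (pre-IFT) model. The function name `procon_penalty` and the reference-model formulation are assumptions for illustration.

```python
import torch

def procon_penalty(hidden: torch.Tensor,
                   ref_hidden: torch.Tensor,
                   r_direction: torch.Tensor) -> torch.Tensor:
    """Hypothetical projection-constrained loss term.

    hidden:      (batch, d_model) hidden states from the model being tuned
    ref_hidden:  (batch, d_model) hidden states from a frozen reference model
    r_direction: (d_model,) refusal direction vector (need not be unit-norm)

    Returns the mean squared difference between the two models'
    scalar projections onto the (normalized) r-direction, so the
    penalty is zero when projections have not drifted.
    """
    r_unit = r_direction / r_direction.norm()
    proj = hidden @ r_unit          # (batch,) projections, tuned model
    ref_proj = ref_hidden @ r_unit  # (batch,) projections, reference model
    return ((proj - ref_proj) ** 2).mean()
```

In training, this term would be added to the task loss with a weighting coefficient, e.g. `loss = task_loss + lam * procon_penalty(h, h_ref, r)`, where `lam` controls constraint strength.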


Key Contributions

  • Identifies refusal direction (r-direction) drift in LLM hidden states during IFT as a key cause of safety degradation
  • Proposes ProCon, a projection-constrained loss term that regularizes hidden state projections onto the r-direction to anchor safety behavior during fine-tuning
  • Introduces a warm-up strategy and safety-data augmentation to overcome performance barriers, yielding the enhanced ProCon^wu_safe method that outperforms strong baselines across diverse datasets and LLMs
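The warm-up strategy in the last bullet emphasizes strong constraints early in training, where drift is sharpest. The paper's exact schedule is not specified in this summary; below is a hypothetical linear-decay schedule (names `constraint_weight`, `peak`, and `floor` are illustrative) showing the shape such a strategy could take.

```python
def constraint_weight(step: int, warmup_steps: int,
                      peak: float = 10.0, floor: float = 1.0) -> float:
    """Return the projection-constraint coefficient at a training step.

    Starts at `peak` (strong early-stage constraint) and decays
    linearly to `floor` over `warmup_steps`, then stays at `floor`.
    This is an assumed schedule, not the paper's exact one.
    """
    if step >= warmup_steps:
        return floor
    frac = step / warmup_steps
    return peak - (peak - floor) * frac
```

The returned weight would multiply the projection-constraint loss term at each step, so early updates are held close to the pre-tuning refusal geometry while later updates can focus on task performance.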

🛡️ Threat Analysis

Transfer Learning Attack

The paper directly addresses how instruction fine-tuning (a transfer learning process) degrades LLM safety — specifically causing drift in the refusal direction — and proposes ProCon as a regularization defense applied during fine-tuning. The attack vector is the fine-tuning process itself, and the defense constrains the adaptation to preserve pre-trained safety properties.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time
Applications
llm safety alignment, instruction fine-tuning