defense 2026

Alignment-Weighted DPO: A principled reasoning approach to improve safety alignment

Mengxuan Hu 1,2, Vivek V. Datla 2, Anoop Kumar 2, Zihan Guan 1, Sheng Li 1, Alfy Samuel 2, Daben Liu 2

0 citations · 61 references · arXiv (Cornell University)

α

Published on arXiv

2602.21346

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Alignment-Weighted DPO with CoT fine-tuning consistently improves robustness to diverse jailbreak strategies while maintaining overall model utility across multiple safety and utility benchmarks.

Alignment-Weighted DPO

Novel technique introduced


Recent advances in alignment techniques such as Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct Preference Optimization (DPO) have improved the safety of large language models (LLMs). However, these LLMs remain vulnerable to jailbreak attacks that disguise harmful intent through indirect or deceptive phrasing. Using causal intervention, we empirically demonstrate that this vulnerability stems from shallow alignment mechanisms that lack deep reasoning, often rejecting harmful prompts without truly understanding why they are harmful. To mitigate this vulnerability, we propose enhancing alignment through reasoning-aware post-training. We construct and release a novel Chain-of-Thought (CoT) fine-tuning dataset that includes both utility-oriented and safety-critical prompts with step-by-step rationales. Fine-tuning on this dataset encourages models to produce principled refusals grounded in reasoning, outperforming standard SFT baselines. Furthermore, inspired by failure patterns in CoT fine-tuning, we introduce Alignment-Weighted DPO, which targets the most problematic parts of an output by assigning different preference weights to the reasoning and final-answer segments. This produces finer-grained, targeted updates than vanilla DPO and improves robustness to diverse jailbreak strategies. Extensive experiments across multiple safety and utility benchmarks show that our method consistently improves alignment robustness while maintaining overall model utility.


Key Contributions

  • Causal intervention analysis demonstrating that jailbreak vulnerability stems from shallow alignment lacking genuine reasoning about why prompts are harmful
  • Novel Chain-of-Thought fine-tuning dataset with safety-critical prompts and step-by-step refusal rationales that outperforms standard SFT baselines
  • Alignment-Weighted DPO that assigns differential preference weights to reasoning vs. final-answer output segments for finer-grained safety training

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_timeblack_box
Applications
llm safety alignmentchatbot safety