AdvChain: Adversarial Chain-of-Thought Tuning for Robust Safety Alignment of Large Reasoning Models

Zihao Zhu 1, Xinyu Wu 1, Gehan Hu 2, Siwei Lyu 3, Ke Xu 1, Baoyuan Wu 1

2 citations · 33 references · arXiv

Published on arXiv: 2509.24269

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

AdvChain significantly enhances robustness against jailbreak attacks and CoT hijacking while reducing over-refusal, matching the performance of models trained on 15× more data and demonstrating high data efficiency.

AdvChain

Novel technique introduced


Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in complex problem-solving through Chain-of-Thought (CoT) reasoning. However, the multi-step nature of CoT introduces new safety challenges that extend beyond conventional language model alignment. We identify a failure mode in current safety CoT tuning methods: the "snowball effect", where minor reasoning deviations progressively amplify throughout the thought process, leading to either harmful compliance or excessive refusal. This effect stems from models being trained to imitate perfect reasoning scripts without learning to self-correct. To address this limitation, we propose AdvChain, an alignment paradigm that teaches models dynamic self-correction through adversarial CoT tuning. Our method involves constructing a dataset containing Temptation-Correction and Hesitation-Correction samples, where models learn to recover from harmful reasoning drifts and unnecessary cautions. Extensive experiments show that AdvChain significantly enhances robustness against jailbreak attacks and CoT hijacking while substantially reducing over-refusal on benign prompts, achieving a superior safety-utility balance without compromising reasoning capabilities. Our work establishes a new direction for building more robust and reliable reasoning models.


Key Contributions

  • Identifies and empirically validates the 'Snowball Effect' in CoT safety alignment, where minor reasoning deviations progressively amplify into harmful compliance or excessive over-refusal
  • Proposes AdvChain, an adversarial CoT tuning paradigm that trains models on intentionally flawed-then-corrected reasoning trajectories (Temptation-Correction and Hesitation-Correction samples)
  • Achieves superior jailbreak robustness and reduced over-refusal with data efficiency comparable to models trained on 15× more data, without degrading core reasoning capabilities
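The contributions above hinge on training samples whose reasoning traces deliberately drift and then self-correct. A minimal sketch of how such Temptation-Correction and Hesitation-Correction pairs might be assembled for supervised fine-tuning is shown below; the function and field names are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of AdvChain-style adversarial CoT sample construction.
# All names (CoTSample, build_temptation_sample, build_hesitation_sample)
# are hypothetical; the paper does not publish this exact API.
from dataclasses import dataclass


@dataclass
class CoTSample:
    prompt: str
    reasoning: str  # CoT trace containing an injected deviation plus a correction
    answer: str


def build_temptation_sample(harmful_prompt: str, drift: str, correction: str) -> CoTSample:
    """Trace that begins to comply with a harmful request, then self-corrects to refusal."""
    reasoning = (
        f"<think>{drift} "
        f"Wait, following through on this could cause real harm. {correction}</think>"
    )
    return CoTSample(harmful_prompt, reasoning, "I can't help with that request.")


def build_hesitation_sample(benign_prompt: str, hesitation: str, correction: str) -> CoTSample:
    """Trace that over-cautiously hesitates on a benign request, then recovers to comply."""
    reasoning = (
        f"<think>{hesitation} "
        f"On reflection, this is a legitimate request with no harmful intent. {correction}</think>"
    )
    return CoTSample(benign_prompt, reasoning, "Sure, here is how to approach it:")
```

Training on both sample types is what targets the snowball effect from both directions: the model practices recovering from harmful drift (reducing jailbreak success) and from unnecessary caution (reducing over-refusal).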

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, training_time
Applications
llm safety alignment, large reasoning models, chain-of-thought reasoning