defense 2025

Large Reasoning Models Learn Better Alignment from Flawed Thinking

ShengYun Peng 1,2, Eric Smith 1, Ivan Evtimov 1, Song Jiang 1, Pin-Yu Chen 3, Hongyuan Zhan 1, Haozhu Wang 1, Duen Horng Chau 2, Mahesh Pasupuleti 1, Jianfeng Chi 1

7 citations · 49 references · arXiv

α

Published on arXiv

2510.00938

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RECAP substantially improves jailbreak robustness and safety while reducing overrefusal and preserving core reasoning capability, with RECAP-trained models exhibiting more frequent self-reflection and remaining robust under adaptive attacks.

RECAP (Robust Safety Alignment via Counter-Aligned Prefilling)

Novel technique introduced


Large reasoning models (LRMs) "think" by generating structured chain-of-thought (CoT) before producing a final answer, yet they still lack the ability to reason critically about safety alignment and are easily biased when a flawed premise is injected into their thought process. We propose RECAP (Robust Safety Alignment via Counter-Aligned Prefilling), a principled reinforcement learning (RL) method for post-training that explicitly teaches models to override flawed reasoning trajectories and reroute to safe and helpful responses. RECAP trains on a mixture of synthetically generated counter-aligned CoT prefills and standard prompts, requires no additional training cost or modifications beyond vanilla reinforcement learning from human feedback (RLHF), and substantially improves safety and jailbreak robustness, reduces overrefusal, and preserves core reasoning capability -- all while maintaining inference token budget. Extensive analysis shows that RECAP-trained models engage in self-reflection more frequently and remain robust under adaptive attacks, preserving safety even after repeated attempts to override their reasoning.


Key Contributions

  • Identifies a novel vulnerability in large reasoning models where injected flawed CoT prefills can bias the model into unsafe outputs
  • Proposes RECAP, an RL post-training method that trains on synthetic counter-aligned CoT prefills to teach models to override flawed reasoning and reroute to safe responses
  • Demonstrates improved jailbreak robustness, reduced overrefusal, and preserved reasoning capability with no additional training cost beyond vanilla RLHF

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_timetraining_timegrey_box
Applications
large reasoning modelssafety alignmentchatbot