Contrastive Reasoning Alignment: Reinforcement Learning from Hidden Representations
Haozheng Luo 1, Yimin Wang 2, Jiahao Yu 1, Binghui Wang 3, Yan Chen 1
Published on arXiv
arXiv:2603.17305
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves 79.0% average improvement in reasoning safety and 87.7% improvement in final-response safety over base models, outperforming IPO and SafeKey defenses
CRAFT
Novel technique introduced
We propose CRAFT, a red-teaming alignment framework that leverages model reasoning capabilities and hidden representations to improve robustness against jailbreak attacks. Unlike prior defenses that operate primarily at the output level, CRAFT aligns large reasoning models to generate safety-aware reasoning traces by explicitly optimizing objectives defined over the hidden state space. Methodologically, CRAFT integrates contrastive representation learning with reinforcement learning to separate safe and unsafe reasoning trajectories, yielding a latent-space geometry that supports robust, reasoning-level safety alignment. Theoretically, we show that incorporating latent-textual consistency into GRPO eliminates superficially aligned policies by ruling them out as local optima. Empirically, we evaluate CRAFT on multiple safety benchmarks using two strong reasoning models, Qwen3-4B-Thinking and R1-Distill-Llama-8B, where it consistently outperforms state-of-the-art defenses such as IPO and SafeKey. Notably, CRAFT delivers an average 79.0% improvement in reasoning safety and 87.7% improvement in final-response safety over the base models, demonstrating the effectiveness of hidden-space reasoning alignment.
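The contrastive component described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual loss): it pulls hidden representations of safe reasoning trajectories toward a shared centroid while pushing unsafe representations at least a margin away, which is one standard way to induce the kind of separated latent-space geometry the abstract describes. The function name, margin value, and synthetic data are all assumptions for illustration.

```python
import numpy as np

def contrastive_separation_loss(safe, unsafe, margin=1.0):
    """Hinge-style contrastive loss over hidden states (illustrative sketch).

    safe, unsafe: (n, d) arrays of hidden representations of reasoning
    trajectories. Safe states are pulled toward their centroid; unsafe
    states closer than `margin` to that centroid are pushed away.
    """
    centroid = safe.mean(axis=0)
    # attraction term: mean squared distance of safe states to their centroid
    pull = np.mean(np.sum((safe - centroid) ** 2, axis=1))
    # repulsion term: penalize unsafe states that sit inside the margin
    dists = np.linalg.norm(unsafe - centroid, axis=1)
    push = np.mean(np.maximum(0.0, margin - dists) ** 2)
    return pull + push

# synthetic hidden states: a tight "safe" cluster and a distant "unsafe" one
rng = np.random.default_rng(0)
safe = rng.normal(0.0, 0.1, size=(8, 16))
unsafe = rng.normal(3.0, 0.1, size=(8, 16))
loss = contrastive_separation_loss(safe, unsafe)
```

When the unsafe cluster already lies well outside the margin, the repulsion term vanishes and only the small attraction term remains; unsafe states near the safe centroid raise the loss, which is the separation pressure the framework relies on.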
Key Contributions
- CRAFT framework that aligns reasoning models at the hidden representation level rather than output level
- Integration of contrastive representation learning with reinforcement learning (GRPO) to separate safe and unsafe reasoning trajectories
- Theoretical proof that latent-textual consistency eliminates superficially aligned policies as local optima
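To make the GRPO integration concrete, the sketch below shows the standard group-relative advantage computation used by GRPO, with a hypothetical shaped reward that mixes a textual safety score with a latent-textual consistency score (the `shaped_reward` combination and its weight `lam` are assumptions for illustration, not the paper's exact reward).

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: normalize each sampled
    completion's reward by the mean and std of its sampling group."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def shaped_reward(text_safety, latent_consistency, lam=0.5):
    """Hypothetical shaping: add a latent-textual consistency bonus
    (both scores assumed to lie in [0, 1]) to the textual safety score."""
    return text_safety + lam * latent_consistency

# one group of 4 sampled reasoning traces for the same prompt:
# (textual safety score, latent consistency score)
group = [(1.0, 0.9), (0.0, 0.1), (1.0, 0.2), (0.0, 0.8)]
rewards = [shaped_reward(s, c) for s, c in group]
adv = grpo_advantages(rewards)
```

Note how the consistency term breaks ties between traces with identical textual safety: a trace that merely *reads* safe but has inconsistent hidden states receives a smaller advantage, which is the mechanism that rules out superficially aligned policies.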