
RAPO: Risk-Aware Preference Optimization for Generalizable Safe Reasoning

Zeming Wei 1,2, Qiaosheng Zhang 1, Xia Hu 1, Xingcheng Xu 1

0 citations · 48 references · arXiv (Cornell University)


Published on arXiv: 2602.04224

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RAPO successfully generalizes safe reasoning across multiple LRMs against diverse jailbreak attack prompts while maintaining general model utility.

RAPO (Risk-Aware Preference Optimization)

Novel technique introduced


Large Reasoning Models (LRMs) have achieved tremendous success with their chain-of-thought (CoT) reasoning, yet they face safety issues similar to those of standard language models. In particular, while alignment algorithms are designed to guide them to deliberately refuse harmful prompts with safe reasoning, this process often fails to generalize against diverse and complex jailbreak attacks. In this work, we attribute these failures to the insufficiency of the safe reasoning process against complex attack prompts. We provide both theoretical and empirical evidence for the necessity of a more thorough safe reasoning process to defend against advanced attacks. Building on this insight, we propose a Risk-Aware Preference Optimization (RAPO) framework that enables LRMs to adaptively identify and address safety risks at appropriate granularity in their thinking content. Extensive experiments demonstrate that RAPO generalizes safe reasoning across multiple LRMs and diverse attack prompts while preserving general utility, contributing a robust alignment technique for LRM safety. Our code is available at https://github.com/weizeming/RAPO.


Key Contributions

  • Theoretical and empirical analysis showing that insufficient safe reasoning in LRMs is the root cause of generalization failure against complex jailbreak attacks
  • RAPO framework that enables LRMs to adaptively identify and address safety risks at appropriate granularity within chain-of-thought thinking content
  • Demonstrated generalization of safe reasoning across diverse attack prompts for multiple LRMs while preserving general utility
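The paper itself does not spell out the objective here, but a risk-aware preference optimization can be illustrated as a standard DPO-style preference loss scaled by a per-example risk weight, so that higher-risk (e.g., jailbreak-like) prompts contribute more to the update. The function name, arguments, and the scalar `risk_weight` below are illustrative assumptions, not RAPO's actual formulation:

```python
import math

def risk_weighted_dpo_loss(logp_chosen, logp_rejected,
                           ref_chosen, ref_rejected,
                           beta=0.1, risk_weight=1.0):
    """Hypothetical risk-aware preference loss (DPO-style sketch).

    logp_* : policy log-probabilities of the chosen/rejected response
    ref_*  : reference-model log-probabilities of the same responses
    beta   : inverse-temperature of the implicit reward
    risk_weight : per-example scale; higher for riskier prompts
    """
    # Implicit reward margin between chosen and rejected responses
    margin = beta * ((logp_chosen - ref_chosen)
                     - (logp_rejected - ref_rejected))
    # -log(sigmoid(margin)), computed stably as log1p(exp(-margin))
    loss = math.log1p(math.exp(-margin))
    return risk_weight * loss

# A larger preference margin yields a smaller loss; the risk weight
# scales the contribution of high-risk prompts linearly.
base = risk_weighted_dpo_loss(-1.0, -2.0, -1.5, -1.5)
risky = risk_weighted_dpo_loss(-1.0, -2.0, -1.5, -1.5, risk_weight=2.0)
```

In a real training loop this scalar function would be vectorized over a batch (e.g., in PyTorch), and the risk weight would come from a risk estimate of the prompt rather than being passed in by hand.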

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box, targeted
Applications
llm safety alignment, chatbot safety, harmful content prevention