
How Does the Thinking Step Influence Model Safety? An Entropy-based Safety Reminder for LRMs

Su-Hyeon Kim, Hyundong Jin, Yejin Lee, Yo-Sub Han

0 citations · 40 references · arXiv


Published on arXiv: 2601.03662

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SafeRemind improves LlamaGuard3 safety scores by up to 45.5 percentage points across five LRMs and six benchmarks, without any parameter updates.

SafeRemind

Novel technique introduced


Large Reasoning Models (LRMs) achieve remarkable success through explicit thinking steps, yet these steps introduce a new risk: they can amplify unsafe behaviors. Conventional defense mechanisms remain ineffective against this vulnerability because they overlook the unique reasoning dynamics of LRMs. In this work, we find that the emergence of safe-reminding phrases within thinking steps plays a pivotal role in ensuring LRM safety. Motivated by this finding, we propose SafeRemind, a decoding-time defense that dynamically injects safe-reminding phrases into thinking steps. By using entropy triggers to intervene at decision-locking points, SafeRemind redirects potentially harmful trajectories toward safer outcomes without requiring any parameter updates. Extensive evaluations across five LRMs and six benchmarks demonstrate that SafeRemind substantially enhances safety, achieving improvements of up to 45.5 percentage points while preserving core reasoning utility.
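The mechanism described above can be sketched as a minimal decoding loop: measure the entropy of each next-token distribution during the thinking phase and, when it spikes past a threshold (a candidate decision-locking point), splice a safe-reminding phrase into the token stream instead of letting the model commit to an unsafe trajectory. Everything concrete below is an illustrative assumption rather than the paper's actual configuration: the threshold value, the reminder text, and the inject-once policy are placeholders.

```python
import math

# Hypothetical reminder text and trigger value; the paper's actual
# phrases and entropy threshold are not specified in this summary.
SAFE_REMINDER = "Wait, I should check whether this request is safe to fulfill."
ENTROPY_THRESHOLD = 2.5  # nats

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decode_with_safe_remind(next_token_dist, sample_token, max_steps=50):
    """Decoding loop with an entropy-triggered safety reminder.

    next_token_dist(tokens) -> probability list over the vocabulary
    sample_token(probs)     -> the token chosen from that distribution

    Mirrors the SafeRemind idea at a high level: when entropy spikes
    inside the thinking step, inject a safe-reminding phrase rather
    than sampling the model's next token. For simplicity this sketch
    injects at most once per generation.
    """
    tokens, injected = [], False
    for _ in range(max_steps):
        probs = next_token_dist(tokens)
        if not injected and token_entropy(probs) > ENTROPY_THRESHOLD:
            tokens.extend(SAFE_REMINDER.split())  # splice the reminder in
            injected = True
            continue
        tokens.append(sample_token(probs))
    return tokens, injected
```

In a real LRM this check would run over the logits at each decoding step (e.g. inside a logits-processing hook) and only while the model is inside its thinking segment; the toy loop above just makes the trigger logic concrete.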


Key Contributions

  • Identifies that safe-reminding phrases within thinking steps are pivotal for LRM safety, and that entropy patterns predict 'decision-locking' points where unsafe trajectories solidify
  • Proposes SafeRemind, a training-free decoding-time defense that dynamically injects safe-reminding phrases at entropy-triggered intervention points during LRM thinking steps
  • Demonstrates up to 45.5%p safety improvement across five LRMs and six benchmarks without degrading reasoning utility or requiring parameter updates

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, white_box, inference_time
Datasets
LlamaGuard3 benchmark suite
Applications
large reasoning models, chatbot safety, jailbreak defense