
Published on arXiv

2602.00707

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Self-Guard bridges the awareness-compliance gap in LRMs, achieving robust safety performance without compromising model utility and generalizing across unseen risk categories and model scales

Self-Guard

Novel technique introduced


The emergence of Large Reasoning Models (LRMs) introduces a new paradigm of explicit reasoning, enabling remarkable advances yet posing unique risks such as reasoning manipulation and information leakage. To mitigate these risks, current alignment strategies predominantly rely on heavy post-training paradigms or external interventions. However, these approaches are often computationally intensive and fail to address the inherent awareness-compliance gap, a critical misalignment where models recognize potential risks yet prioritize following user instructions due to their sycophantic tendencies. To address these limitations, we propose Self-Guard, a lightweight safety defense framework that reinforces safety compliance at the representational level. Self-Guard operates through two principal stages: (1) safety-oriented prompting, which activates the model's latent safety awareness to evoke spontaneous reflection, and (2) safety activation steering, which extracts the resulting directional shift in the hidden state space and amplifies it to ensure that safety compliance prevails over sycophancy during inference. Experiments demonstrate that Self-Guard effectively bridges the awareness-compliance gap, achieving robust safety performance without compromising model utility. Furthermore, Self-Guard exhibits strong generalization across diverse unseen risks and varying model scales, offering a cost-efficient solution for LRM safety alignment.


Key Contributions

  • Identifies and formalizes the 'awareness-compliance gap' in Large Reasoning Models — where models recognize harmful requests but comply anyway due to sycophantic tendencies
  • Proposes safety-oriented prompting to activate latent safety awareness and elicit spontaneous reflective refusal behavior
  • Introduces safety activation steering, which extracts directional shifts in hidden states and amplifies them at inference time to enforce safety compliance without post-training
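The two-stage mechanism described above can be sketched in a few lines. The toy NumPy example below illustrates the activation-steering idea only: it fabricates hidden states for a batch of prompts with and without a safety-oriented prefix, extracts the mean directional shift between the two conditions, and amplifies that direction at inference time. All names (`h_plain`, `h_safety`, the dimensionality, and the scaling factor `alpha`) are illustrative assumptions, not the paper's actual implementation, which operates on a real LRM's hidden layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden-state dimensionality (assumption, not from the paper)

# Hypothetical hidden states collected at one layer:
#   h_plain  - states produced from the raw user prompts
#   h_safety - states from the same prompts with a safety-oriented prefix
h_plain = rng.normal(size=(32, d))
true_shift = np.full(d, 0.5)  # stand-in for the shift safety prompting induces
h_safety = h_plain + true_shift + 0.1 * rng.normal(size=(32, d))

# Stage 2: extract the directional shift in hidden-state space
# as the mean difference between the two conditions, then normalize.
direction = (h_safety - h_plain).mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Add the amplified safety direction to hidden states at inference time."""
    return hidden + alpha * direction

# After steering, hidden states project more strongly onto the safety direction,
# which is the sense in which compliance is pushed to "prevail over sycophancy".
gain = ((steer(h_plain) - h_plain) @ direction).mean()
print(round(float(gain), 4))
```

In a real deployment this addition would typically be applied inside the model, e.g. via a forward hook on the chosen transformer layer, so no weights are updated and no post-training is required.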

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time
Applications
llm safety alignment, large reasoning model jailbreak defense, chatbot safety