defense 2026

STAR-S: Improving Safety Alignment through Self-Taught Reasoning on Safety Rules

Di Wu , Yanyan Zhao , Xin Lu , Mingzhe Li , Bing Qin

1 citations · 71 references · arXiv

α

Published on arXiv

2601.03537

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

STAR-S outperforms safety alignment baselines on six jailbreak attack benchmarks while achieving better over-refusal balance and without significantly degrading general capabilities.

STAR-S

Novel technique introduced


Defending against jailbreak attacks is crucial for the safe deployment of Large Language Models (LLMs). Recent research has attempted to improve safety by training models to reason over safety rules before responding. However, a key issue lies in determining what form of safety reasoning effectively defends against jailbreak attacks, which is difficult to explicitly design or directly obtain. To address this, we propose \textbf{STAR-S} (\textbf{S}elf-\textbf{TA}ught \textbf{R}easoning based on \textbf{S}afety rules), a framework that integrates the learning of safety rule reasoning into a self-taught loop. The core of STAR-S involves eliciting reasoning and reflection guided by safety rules, then leveraging fine-tuning to enhance safety reasoning. Repeating this process creates a synergistic cycle. Improvements in the model's reasoning and interpretation of safety rules allow it to produce better reasoning data under safety rule prompts, which is then utilized for further training. Experiments show that STAR-S effectively defends against jailbreak attacks, outperforming baselines. Code is available at: https://github.com/pikepokenew/STAR_S.git.


Key Contributions

  • STAR-S: an iterative self-taught framework that bootstraps safety reasoning by prompting the model with safety rules, filtering successful reasoning, and fine-tuning — repeating to create a synergistic improvement cycle.
  • Introduction of a 'flawed reasoning prefix' technique that forces the model to detect and correct unsafe reasoning trajectories, improving reflection depth and safety rule interpretation.
  • Empirical demonstration that STAR-S outperforms baselines across six jailbreak attack benchmarks while maintaining general capability and reducing over-refusal.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_time
Datasets
jailbreak attack benchmarks (6)over-refusal benchmarks (2)
Applications
llm safety alignmentchatbot safetyjailbreak defense