Defense · 2025

Refusal Steering: Fine-grained Control over LLM Refusal Behaviour for Sensitive Topics

Iker García-Ferrero, David Montero, Roman Orus



Published on arXiv: 2512.16602

Threat Category
Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

Activation steering on Qwen3-Next-80B-A3B-Thinking nearly eliminates political topic refusals while maintaining safety alignment on JailbreakBench and preserving general benchmark performance across 4B and 80B model sizes.

Refusal Steering

Novel technique introduced


We introduce Refusal Steering, an inference-time method for fine-grained control over Large Language Models' refusal behaviour on politically sensitive topics without retraining. We replace fragile pattern-based refusal detection with an LLM-as-a-judge that assigns continuous refusal confidence scores, and we propose a ridge-regularized variant for computing steering vectors that better isolates the refusal–compliance direction. On Qwen3-Next-80B-A3B-Thinking, our method removes the model's refusal behaviour on politically sensitive topics while maintaining safety on JailbreakBench and near-baseline performance on general benchmarks. The approach generalizes across 4B and 80B models and can also induce targeted refusals when desired. We analyze the steering vectors and show that refusal signals concentrate in the deeper layers of the transformer and are distributed across many dimensions. Together, these results demonstrate that activation steering can remove political refusal behaviour while retaining safety alignment for harmful content, offering a practical path to controllable, transparent moderation at inference time.
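The two vector-computation ideas in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes hidden activations at one layer have already been collected, and it uses random toy data. The classic baseline is a difference of means between refusing and complying activations; the ridge-regularized variant regresses the judge's refusal confidence scores onto the activations, v = (HᵀH + λI)⁻¹Hᵀy, and the vector is then subtracted from activations at inference time to suppress refusal.

```python
import numpy as np

def mean_difference_vector(h_refusal, h_comply):
    """Baseline steering vector: difference of mean activations."""
    return h_refusal.mean(axis=0) - h_comply.mean(axis=0)

def ridge_steering_vector(H, scores, lam=1.0):
    """Ridge-regularized direction: regress refusal confidence scores
    onto hidden activations, solving (H^T H + lam*I) v = H^T y."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ scores)

# Toy data: 64 activations in a 16-dim hidden space (real models use
# thousands of dimensions; scores would come from the LLM judge).
rng = np.random.default_rng(0)
H = rng.normal(size=(64, 16))
scores = rng.uniform(size=64)  # refusal confidences in [0, 1]

v = ridge_steering_vector(H, scores, lam=10.0)

# Apply steering: shift every activation against the refusal direction.
alpha = 4.0
steered = H - alpha * v
```

In practice the steering strength `alpha` and the set of layers to steer are hyperparameters; the paper selects them automatically, which this sketch does not attempt.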


Key Contributions

  • LLM-as-a-judge framework for computing continuous refusal confidence scores, replacing brittle pattern-based refusal detection
  • Ridge-regularized steering vector variants that better isolate the refusal–compliance direction in hidden activation space
  • Multi-layer steering with automatic hyperparameter selection that removes political refusals while preserving safety alignment on harmful topics and near-baseline general benchmark performance
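To make the first contribution concrete: instead of matching refusal phrases like "I cannot help with that", a judge model is shown the response and asked for a confidence score, which is then parsed from its answer. The prompt template and score format below are illustrative assumptions, not the paper's actual judge prompt.

```python
import re

# Hypothetical judge prompt; the actual wording used in the paper may differ.
JUDGE_PROMPT = (
    "Rate how strongly the assistant's reply below refuses the request.\n"
    "Answer with a single line: score: <value between 0 and 1>.\n\n"
    "Reply:\n{reply}"
)

def parse_refusal_score(judge_output: str) -> float:
    """Extract a continuous refusal confidence from the judge's answer."""
    m = re.search(r"score:\s*([01](?:\.\d+)?)", judge_output, re.IGNORECASE)
    if m is None:
        raise ValueError("judge output did not contain a score")
    return float(m.group(1))
```

A continuous score lets the pipeline threshold or weight examples when fitting steering vectors, rather than forcing a brittle binary refused/complied label.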

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time
Datasets
JailbreakBench
Applications
llm safety moderation, politically sensitive topic handling, controllable refusal behavior