defense, 2025

CorrSteer: Generation-Time LLM Steering via Correlated Sparse Autoencoder Features

Seonglae Cho 1,2, Zekun Wu 1,2, Adriano Koshiyama 1,2


Published on arXiv (2508.12535)

Prompt Injection

OWASP LLM Top 10: LLM01

Key Finding

Achieves a +27.2% improvement on HarmBench jailbreak prevention with only 108 samples, and a +3.3% improvement on MMLU with 4000 samples, using correlation-based SAE feature steering at inference time.

CorrSteer

Novel technique introduced


Sparse Autoencoders (SAEs) can extract interpretable features from large language models (LLMs) without supervision. However, their effectiveness in downstream steering tasks is limited by the requirement for contrastive datasets or large activation storage. To address these limitations, we propose CorrSteer, which selects features by correlating sample correctness with SAE activations from generated tokens at inference time. This approach uses only inference-time activations to extract more relevant features, thereby reducing spurious correlations. It also obtains steering coefficients from average activations, automating the entire pipeline. Our method shows improved task performance on QA, bias mitigation, jailbreaking prevention, and reasoning benchmarks on Gemma-2 2B and LLaMA-3.1 8B, notably achieving a +3.3% improvement in MMLU performance with 4000 samples and a +27.2% improvement in HarmBench with only 108 samples. Selected features demonstrate semantically meaningful patterns aligned with each task's requirements, revealing the underlying capabilities that drive performance. Our work establishes correlation-based selection as an effective and scalable approach for automated SAE steering across language model applications.
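The selection step described in the abstract — correlating sample correctness with SAE activations and taking steering coefficients from average activations — could be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name, array shapes, and the use of Pearson correlation over per-sample mean activations are assumptions.

```python
import numpy as np

def select_features_by_correlation(activations, correctness, top_k=5):
    """Rank SAE features by the absolute Pearson correlation between
    each feature's per-sample mean activation and sample correctness.

    activations: (n_samples, n_features) mean SAE activations collected
                 from generated tokens at inference time (assumed shape).
    correctness: (n_samples,) binary 0/1 correctness labels.

    Returns (selected feature indices, per-feature correlations,
    steering coefficients taken as each selected feature's mean
    activation over the correct samples).
    """
    acts = np.asarray(activations, dtype=float)
    y = np.asarray(correctness, dtype=float)

    acts_c = acts - acts.mean(axis=0)   # center each feature column
    y_c = y - y.mean()                  # center the correctness labels

    # Pearson correlation per feature; guard against zero-variance features.
    denom = np.sqrt((acts_c ** 2).sum(axis=0) * (y_c ** 2).sum())
    safe_denom = np.where(denom > 0, denom, 1.0)
    corr = np.where(denom > 0, acts_c.T @ y_c / safe_denom, 0.0)

    # Keep the top-k features by absolute correlation.
    selected = np.argsort(-np.abs(corr))[:top_k]

    # Steering coefficient per feature: average activation on correct samples.
    coeffs = acts[y == 1][:, selected].mean(axis=0)
    return selected, corr, coeffs
```

With only a correctness label and the activations already gathered during generation, no contrastive dataset is needed — the correlation directly ranks which features co-occur with correct outputs.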


Key Contributions

  • Correlation-based SAE feature selection using inference-time activations only, eliminating the need for contrastive datasets or large activation storage
  • Automated steering pipeline that derives steering coefficients from average activations, requiring as few as 108 samples for jailbreak prevention
  • Demonstrated effectiveness across QA (MMLU +3.3%), bias mitigation, jailbreak prevention (HarmBench +27.2%), and reasoning on Gemma-2 2B and LLaMA-3.1 8B

🛡️ Threat Analysis


Details

Domains
NLP
Model Types
LLM, Transformer
Threat Tags
inference_time
Datasets
MMLU, HarmBench
Applications
jailbreak prevention, question answering, bias mitigation, reasoning