
The Rogue Scalpel: Activation Steering Compromises LLM Safety

Anton Korznikov, Andrey Galichin, Alexey Dontsov, Oleg Y. Rogov, Ivan Oseledets, Elena Tutubalina

4 citations · 60 references · arXiv


Published on arXiv: 2509.22067

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Combining 20 randomly sampled steering vectors, each of which jailbreaks a single prompt, yields a universal attack that substantially increases harmful compliance on unseen harmful requests across the Llama-3 and Qwen2.5 model families.

Activation Steering Jailbreak

Novel technique introduced


Abstract

Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model's hidden states during inference. It is often framed as a precise, interpretable, and potentially safer alternative to fine-tuning. We demonstrate the opposite: steering systematically breaks model alignment safeguards, making it comply with harmful requests. Through extensive experiments on different model families, we show that even steering in a random direction can increase the probability of harmful compliance from 0% to 1-13%. Alarmingly, steering benign features from a sparse autoencoder (SAE), a common source of interpretable directions, demonstrates a comparable harmful potential. Finally, we show that combining 20 randomly sampled vectors that jailbreak a single prompt creates a universal attack, significantly increasing harmful compliance on unseen requests. These results challenge the paradigm of safety through interpretability, showing that precise control over model internals does not guarantee precise control over model behavior.
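The core operation the abstract describes, adding a scaled direction vector to a model's hidden states, can be sketched in a few lines. This is a minimal NumPy illustration on toy arrays, not the paper's implementation: in practice the vector is injected into a transformer's residual stream via a forward hook, and the layer choice and scale `alpha` here are illustrative assumptions.

```python
import numpy as np

def steer(hidden_states: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Add a scaled, unit-norm steering vector to every position's hidden state."""
    v = direction / np.linalg.norm(direction)  # normalize so alpha alone sets strength
    return hidden_states + alpha * v           # broadcast over the sequence dimension

rng = np.random.default_rng(0)
d_model = 16                              # toy hidden size
h = rng.normal(size=(4, d_model))         # (seq_len, d_model) stand-in hidden states
v = rng.normal(size=d_model)              # a *random* direction, as in the paper's experiments
h_steered = steer(h, v, alpha=8.0)
```

The point of the paper is that even when `v` is random (carrying no intended meaning), this perturbation can push a safety-aligned model into harmful compliance.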


Key Contributions

  • Demonstrates that activation steering—even with random directions—systematically breaks LLM safety alignment, increasing harmful compliance from 0% to 1–13% across multiple model families.
  • Shows that steering benign interpretable features from sparse autoencoders (SAEs) poses comparable jailbreak risk to adversarial steering vectors.
  • Constructs a universal jailbreak attack by combining 20 randomly sampled single-prompt jailbreak vectors, achieving significantly increased harmful compliance on unseen requests.
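The universal attack in the last contribution aggregates 20 per-prompt jailbreak vectors into a single direction. The paper's exact combination rule is not reproduced here; this sketch assumes the simplest choice, the normalized mean of unit-normalized vectors, which is a common way to merge steering directions.

```python
import numpy as np

def combine_vectors(vectors: np.ndarray) -> np.ndarray:
    """Combine per-prompt steering vectors (rows) into one universal direction.

    Assumption: mean of unit-normalized vectors, re-normalized to unit length.
    """
    units = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    mean = units.mean(axis=0)
    return mean / np.linalg.norm(mean)

rng = np.random.default_rng(1)
per_prompt = rng.normal(size=(20, 16))   # 20 toy single-prompt jailbreak vectors
universal = combine_vectors(per_prompt)  # one direction applied to unseen requests
```

The resulting `universal` vector would then be injected with the same additive steering operation at inference time, transferring the jailbreak to requests none of the 20 source vectors were derived from.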

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Datasets
JailbreakBench
Applications
llm safety alignment, chat assistants