
Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models

Zhiyuan Xu, Stanislav Abaimov, Joseph Gardiner, Sana Belguith

0 citations · 50 references · arXiv


Published on arXiv: 2511.17194

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SSS induces strong behavioral shifts across four harmful axes on multiple open-weight LLMs while maintaining output coherence and general capabilities, outperforming prior activation steering baselines in both attack efficacy and stealth against conventional audits.

Sensitivity-Scaled Steering (SSS)

Novel technique introduced


Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an implementation detail. We show that intermediate activations in decoder-only LLMs form a vulnerable attack surface for behavioral control. Building on recent findings on attention sinks and compression valleys, we identify a high-gain region in the residual stream where small, well-aligned perturbations are causally amplified along the autoregressive trajectory, a phenomenon we term the Causal Amplification Effect (CAE). We exploit this region via Sensitivity-Scaled Steering (SSS), a progressive activation-level attack that combines beginning-of-sequence (BOS) anchoring with sensitivity-based reinforcement to focus a limited perturbation budget on the most vulnerable layers and tokens. Across multiple open-weight models and four behavioral axes, SSS induces large shifts in evil, hallucination, sycophancy, and sentiment while preserving high coherence and general capabilities, turning activation steering into a concrete security concern for white-box and supply-chain LLM deployments.
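The core mechanism, injecting a small direction-aligned perturbation into the residual stream at inference time, can be sketched with PyTorch forward hooks. This is a minimal toy, not the authors' implementation: `Block`, the steering direction `v`, and the per-layer weights `sens` are all assumptions standing in for a real decoder stack, a behaviorally extracted direction, and measured layer sensitivities.

```python
# Hypothetical sketch of activation-space steering (not the paper's code).
# A forward hook adds a small perturbation, aligned with an assumed
# behavioral direction, to each layer's residual-stream output; the
# perturbation budget is concentrated on assumed high-sensitivity layers.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Block(nn.Module):
    """Stand-in for one decoder block's residual-stream update."""
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, x):
        return x + self.proj(x)

d_model, n_layers = 16, 4
blocks = nn.ModuleList([Block(d_model) for _ in range(n_layers)])

# Assumed steering direction (in practice, extracted from the model's
# activations for the target behavior); unit-normalized.
v = torch.randn(d_model)
v = v / v.norm()

# Assumed per-layer sensitivity weights: most of the budget goes to the
# layer where downstream amplification is strongest.
sens = torch.tensor([0.1, 0.3, 1.0, 0.2])
budget = 0.5
scales = budget * sens / sens.sum()

def make_hook(i):
    def hook(module, inputs, output):
        # Nudge this layer's output along the behavioral direction.
        return output + scales[i] * v
    return hook

def run(x):
    h = x
    for blk in blocks:
        h = blk(h)
    return h

x = torch.randn(1, d_model)
with torch.no_grad():
    clean = run(x)
    handles = [b.register_forward_hook(make_hook(i))
               for i, b in enumerate(blocks)]
    steered = run(x)
    for hd in handles:
        hd.remove()

drift = (steered - clean).norm()
print(float(drift))  # nonzero: the forward pass has been nudged
```

Hooks make the attack weight- and prompt-free, matching the paper's threat model: nothing in the checkpoint or the input changes, only the intermediate activations.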


Key Contributions

  • Identifies a high-gain frontier in the LLM residual stream (BOS attention sinks + compression valleys) and formalizes the Causal Amplification Effect (CAE) as a mechanism for gradual autoregressive behavioral drift from small activation perturbations.
  • Proposes Sensitivity-Scaled Steering (SSS), a two-stage white-box activation attack combining BOS anchoring with adaptive sensitivity-based reinforcement to maximize behavioral shift under a constrained perturbation budget without modifying weights or prompts.
  • Demonstrates that SSS reliably steers four open-weight LLMs across four behavioral axes (evil outputs, hallucination, sycophancy, sentiment) while preserving coherence and outperforming existing activation steering baselines in both efficacy and stealth.
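The Causal Amplification Effect named in the first contribution can be illustrated with a toy linear recurrence (my own construction, not from the paper): if the autoregressive update has a direction with gain above one, a tiny perturbation aligned with that direction grows multiplicatively across steps, while an equally small misaligned perturbation decays.

```python
# Toy illustration of causal amplification along an autoregressive
# trajectory. The recurrence matrix A (an assumption for this sketch) has
# one amplifying eigendirection (eigenvalue 1.3) and otherwise contracts
# (eigenvalue 0.7), so alignment of the perturbation decides its fate.
import numpy as np

rng = np.random.default_rng(1)
d = 6
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # orthonormal basis
eigs = np.array([1.3] + [0.7] * (d - 1))
A = Q @ np.diag(eigs) @ Q.T  # symmetric, eigenvectors are columns of Q

def rollout(h0, steps=10):
    """Iterate the autoregressive update `steps` times."""
    h = h0
    for _ in range(steps):
        h = A @ h
    return h

h0 = rng.standard_normal(d)
delta = 1e-3
aligned = delta * Q[:, 0]     # along the high-gain direction
misaligned = delta * Q[:, 1]  # same size, contracting direction

base = rollout(h0)
gain_aligned = np.linalg.norm(rollout(h0 + aligned) - base) / delta
gain_mis = np.linalg.norm(rollout(h0 + misaligned) - base) / delta
print(round(gain_aligned, 2), round(gain_mis, 2))  # → 13.79 0.03
```

The gains are exactly 1.3¹⁰ and 0.7¹⁰ here because the dynamics are linear; in a real LLM the amplification is local and layer-dependent, which is why the attack must find well-aligned directions in the high-gain region.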

🛡️ Threat Analysis

Input Manipulation Attack

SSS is a white-box adversarial perturbation attack applied at inference time. It uses sensitivity- and gradient-based analysis to craft perturbations, analogous to adversarial examples but targeting intermediate activations rather than inputs, causing the model to produce incorrect or harmful outputs. The methodology mirrors standard adversarial-attack frameworks (sensitivity scaling, a perturbation budget, gradient alignment) transplanted into activation space.
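The sensitivity-scaling step described above can be sketched with finite differences: probe each layer with a tiny aligned nudge, measure how much the final state moves, and split a fixed budget in proportion to the measured gains. All names here (`forward`, `Ws`, the budget value) are hypothetical; the paper's actual sensitivity estimate may be gradient-based rather than finite-difference.

```python
# Hypothetical sketch of sensitivity-based budget allocation (not the
# paper's implementation). Each layer's "gain" is estimated by a small
# probe perturbation; the perturbation budget is then concentrated on the
# layers where the probe is amplified most.
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 8, 4
# Random linear layers standing in for residual-stream updates.
Ws = [np.eye(d) + 0.5 * rng.standard_normal((d, d)) / np.sqrt(d)
      for _ in range(n_layers)]

def forward(x, inject=None):
    """Run the toy stack; optionally add a perturbation after layer i."""
    h = x
    for i, W in enumerate(Ws):
        h = W @ h
        if inject is not None and inject[0] == i:
            h = h + inject[1]
    return h

x = rng.standard_normal(d)
v = rng.standard_normal(d)
v /= np.linalg.norm(v)  # probe direction, unit norm
eps = 1e-3

# Finite-difference sensitivity of the final state to a nudge per layer.
base = forward(x)
gains = np.array([
    np.linalg.norm(forward(x, inject=(i, eps * v)) - base) / eps
    for i in range(n_layers)
])

# Allocate a fixed total budget proportionally to the measured gains.
budget = 0.5
alloc = budget * gains / gains.sum()
print(np.round(gains, 3), np.round(alloc, 3))
```

This is the sense in which SSS is "sensitivity-scaled": the constrained budget is not spread uniformly but steered toward the layers and tokens where the forward pass amplifies it.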


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted, digital
Applications
llm serving infrastructure, chatbot, ai assistant, supply-chain llm deployment