
Understanding and Mitigating Dataset Corruption in LLM Steering

Cullen Anderson 1, Narmeen Oozeer 2, Foad Namjoo 3, Remy Ogasawara 3, Amirali Abdullah 4, Jeff M. Phillips 3


Published on arXiv (arXiv:2603.03206)

Data Poisoning Attack

OWASP ML Top 10 — ML02

Training Data Poisoning

OWASP LLM Top 10 — LLM03

Key Finding

Replacing difference-in-means with a robust mean estimator largely neutralizes the unwanted behavioral side effects induced by coordinated adversarial corruption of LLM steering datasets.

Robust Mean Estimation for Steering Vectors

Novel technique introduced


Contrastive steering has been shown to be a simple and effective method for adjusting the generative behavior of LLMs at inference time. It uses examples of prompt responses with and without a target trait to identify a direction in an intermediate activation layer, and then shifts activations along this one-dimensional subspace. However, despite its growing use in AI safety applications, the robustness of contrastive steering to noisy or adversarial data corruption is poorly understood. We initiate a study of the robustness of this process with respect to corruption of the dataset of examples used to train the steering direction. Our first observation is that contrastive steering is quite robust to a moderate amount of corruption, but unwanted side effects can be clearly and maliciously manifested when a non-trivial fraction of the training data is altered. Second, we analyze the geometry of various types of corruption and identify some safeguards. Notably, a key step in learning the steering direction involves high-dimensional mean computation, and we show that replacing this step with a recently developed robust mean estimator often mitigates most of the unwanted effects of malicious corruption.
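The difference-in-means construction described above can be sketched as follows. This is an illustrative implementation, not the paper's code: the function names, the steering coefficient `alpha`, and the synthetic activations are all assumptions for demonstration.

```python
import numpy as np

def steering_direction(pos_acts, neg_acts):
    """Difference-in-means steering direction (illustrative helper).

    pos_acts, neg_acts: (n, d) arrays of intermediate-layer activations
    from prompts with / without the target trait.
    """
    direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)  # unit vector

def steer(activation, direction, alpha=5.0):
    """Shift an activation along the 1-D steering subspace at inference time."""
    return activation + alpha * direction

# Synthetic stand-in activations: with-trait examples offset by +1 per coordinate.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(200, 16))
neg = rng.normal(0.0, 1.0, size=(200, 16))

v = steering_direction(pos, neg)
h_steered = steer(rng.normal(size=16), v)
```

Because the direction is estimated from two empirical means, any corruption that shifts either mean shifts the learned subspace, which is the attack surface the paper studies.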


Key Contributions

  • First systematic study of robustness of contrastive steering to random, mislabeling, and coordinated adversarial data corruption
  • Geometric analysis showing how corrupted outliers shift the learned steering direction and induce unwanted secondary behavioral effects
  • Demonstration that replacing standard mean computation with a robust mean estimator substantially mitigates the effects of malicious steering dataset corruption
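To illustrate the third contribution, the sketch below swaps the naive mean for a coordinate-wise trimmed mean. This is a deliberately simple stand-in: the paper uses a recently developed robust mean estimator whose details are not reproduced here, and the trim fraction and poison magnitude are assumptions chosen for demonstration.

```python
import numpy as np

def trimmed_mean(x, trim=0.15):
    """Coordinate-wise trimmed mean: clip each coordinate to its empirical
    [trim, 1-trim] quantile range before averaging. A simple robust stand-in,
    not the estimator used in the paper."""
    lo = np.quantile(x, trim, axis=0)
    hi = np.quantile(x, 1.0 - trim, axis=0)
    return np.clip(x, lo, hi).mean(axis=0)

# 10% coordinated poison: outliers placed far along every coordinate.
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(90, 8))
poison = np.full((10, 8), 50.0)
data = np.vstack([clean, poison])

naive = data.mean(axis=0)        # dragged toward the poison cluster
robust = trimmed_mean(data)      # stays close to the clean mean at 0
```

The same substitution applies to both class means before taking their difference, so the learned steering direction inherits the estimator's resistance to coordinated outliers.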

🛡️ Threat Analysis

Data Poisoning Attack

The paper's core subject is corruption of the dataset used to train LLM steering directions — label flipping (mislabeling corruption), random noise injection, and coordinated adversarial bias injection are all canonical data poisoning attacks that degrade or maliciously redirect the learned steering direction.
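Of the three corruption modes named above, mislabeling is the easiest to make concrete: a fraction of examples are swapped between the with-trait and without-trait pools, pulling the two class means toward each other and weakening (or, with coordinated choices, redirecting) the learned direction. The sketch below is a hypothetical protocol for illustration, not the paper's exact corruption procedure.

```python
import numpy as np

def flip_labels(pos, neg, frac, rng):
    """Mislabeling corruption: swap a fraction of examples between the
    with-trait (pos) and without-trait (neg) pools. Illustrative only."""
    k = int(frac * min(len(pos), len(neg)))
    ip = rng.choice(len(pos), size=k, replace=False)
    iq = rng.choice(len(neg), size=k, replace=False)
    pos_c, neg_c = pos.copy(), neg.copy()
    pos_c[ip], neg_c[iq] = neg[iq], pos[ip]  # cross-assign the flipped rows
    return pos_c, neg_c

rng = np.random.default_rng(2)
pos = rng.normal(1.0, 1.0, size=(200, 16))
neg = rng.normal(0.0, 1.0, size=(200, 16))

pos_c, neg_c = flip_labels(pos, neg, frac=0.25, rng=rng)

clean_gap = np.linalg.norm(pos.mean(axis=0) - neg.mean(axis=0))
poisoned_gap = np.linalg.norm(pos_c.mean(axis=0) - neg_c.mean(axis=0))
```

With a 25% flip rate, the mean gap shrinks roughly in proportion to the flipped fraction, which weakens steering strength even before any coordinated bias is injected.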


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted, grey_box
Applications
llm behavior steering, ai safety alignment, contrastive activation engineering