defense 2026

The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models

Christina Lu 1,2,3, Jack Gallagher, Jonathan Michala 4, Kyle Fish 3,4, Jack Lindsey 4

10 citations · 1 influential · arXiv


Published on arXiv · 2601.10387

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Clamping activations along the Assistant Axis to a safe range reduces harmful persona drift (including suicidal ideation elicitation) and resists adversarial persona-based jailbreaks across Gemma 2 27B, Qwen 3 32B, and Llama 3.3 70B

Activation Capping

Novel technique introduced


Large language models can represent a variety of personas but typically default to a helpful Assistant identity cultivated during post-training. We investigate the structure of the space of model personas by extracting activation directions corresponding to diverse character archetypes. Across several different models, we find that the leading component of this persona space is an "Assistant Axis," which captures the extent to which a model is operating in its default Assistant mode. Steering towards the Assistant direction reinforces helpful and harmless behavior; steering away increases the model's tendency to identify as other entities. Moreover, steering away with more extreme values often induces a mystical, theatrical speaking style. We find this axis is also present in pre-trained models, where it primarily promotes helpful human archetypes like consultants and coaches and inhibits spiritual ones. Measuring deviations along the Assistant Axis predicts "persona drift," a phenomenon where models slip into exhibiting harmful or bizarre behaviors that are uncharacteristic of their typical persona. We find that persona drift is often driven by conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users. We show that restricting activations to a fixed region along the Assistant Axis can stabilize model behavior in these scenarios -- and also in the face of adversarial persona-based jailbreaks. Our results suggest that post-training steers models toward a particular region of persona space but only loosely tethers them to it, motivating work on training and steering strategies that more deeply anchor models to a coherent persona.
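The abstract describes extracting activation directions for diverse character archetypes and taking the leading component of that persona space as the Assistant Axis. A minimal sketch of that idea, assuming one mean activation vector per persona has already been collected (the function name, data shapes, and extraction details here are illustrative, not the paper's exact pipeline):

```python
import numpy as np

def assistant_axis(persona_activations: np.ndarray) -> np.ndarray:
    """Leading principal component of a set of per-persona activations.

    persona_activations: (n_personas, d_model) array, where each row is a
    mean residual-stream activation collected while the model adopts one
    persona (hypothetical input; the paper's extraction setup may differ).
    Returns a unit vector along the first principal component.
    """
    # Mean-center across personas so PCA captures variation between them.
    centered = persona_activations - persona_activations.mean(axis=0)
    # SVD of the centered matrix: the top right-singular vector is PC1.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)
```

Projecting a conversation's activations onto this unit vector then gives a scalar "Assistant-ness" score whose drift can be monitored over turns.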


Key Contributions

  • Identifies an 'Assistant Axis' as the leading principal component of LLM persona space, measurable from model activations, that predicts susceptibility to persona drift and jailbreaks
  • Characterizes which conversation types (emotional distress, meta-reflection demands) reliably cause persona drift away from the safe Assistant identity
  • Proposes 'activation capping' — clamping activations along the Assistant Axis within a normal range — to stabilize behavior against harmful drift and adversarial persona-based jailbreaks without degrading capabilities
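The activation-capping contribution above amounts to clamping each activation's coordinate along the Assistant Axis into a fixed range. A minimal sketch under that reading (the function and range values are hypothetical; in practice this would run per layer at inference time via model hooks):

```python
import numpy as np

def cap_along_axis(h: np.ndarray, axis: np.ndarray,
                   lo: float, hi: float) -> np.ndarray:
    """Clamp the component of activation `h` along unit vector `axis` to [lo, hi].

    h: (d_model,) residual-stream activation at some layer.
    axis: unit-norm Assistant Axis direction.
    Components orthogonal to the axis are left untouched.
    """
    proj = float(h @ axis)            # current coordinate along the axis
    clamped = min(max(proj, lo), hi)  # cap into the "normal" range
    # Shift h just enough to move its axis coordinate to the clamped value.
    return h + (clamped - proj) * axis
```

Because only the axis-aligned component is modified, activations already inside the allowed range pass through unchanged, which is consistent with the claim that capping stabilizes drift without degrading ordinary capabilities.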

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
llm safety, ai assistants, chatbots