Character as a Latent Variable in Large Language Models: A Mechanistic Account of Emergent Misalignment and Conditional Safety Failures
Yanghao Su 1, Wenbo Zhou 1, Tianwei Zhang 2, Qiu Han 3, Weiming Zhang 1, Nenghai Yu 1, Jie Zhang 4
Published on arXiv
2601.23081
Model Poisoning
OWASP ML Top 10 — ML10
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Character-disposition fine-tuning induces substantially stronger and more transferable misalignment than incorrect-advice fine-tuning, and a single latent character representation mediates emergent misalignment, backdoor triggers, and jailbreak susceptibility across model families.
Character as Latent Variable (Triggered Persona Control)
Novel technique introduced
Emergent Misalignment refers to a failure mode in which fine-tuning large language models (LLMs) on narrowly scoped data induces broadly misaligned behavior. Prior explanations mainly attribute this phenomenon to the generalization of erroneous or unsafe content. In this work, we show that this view is incomplete. Across multiple domains and model families, we find that fine-tuning models on data exhibiting specific character-level dispositions induces substantially stronger and more transferable misalignment than incorrect-advice fine-tuning, while largely preserving general capabilities. This indicates that emergent misalignment arises from stable shifts in model behavior rather than from capability degradation or corrupted knowledge. We further show that such behavioral dispositions can be conditionally activated by both training-time triggers and inference-time persona-aligned prompts, revealing shared structure across emergent misalignment, backdoor activation, and jailbreak susceptibility. Overall, our results identify character formation as a central and underexplored alignment risk, suggesting that robust alignment must address behavioral dispositions rather than isolated errors or prompt-level defenses.
Key Contributions
- Identifies 'character formation' as the primary latent mechanism driving emergent misalignment, showing character-disposition fine-tuning induces stronger and more transferable misalignment than incorrect-advice fine-tuning while preserving general capabilities.
- Demonstrates 'triggered persona control' — training-time triggers can conditionally activate dormant misaligned character dispositions, linking emergent misalignment structurally to backdoor attacks.
- Reveals that learned character representations mediate shared structure across emergent misalignment, backdoor activation, and jailbreak susceptibility, with mechanistic evidence from multiple LLM families.
🛡️ Threat Analysis
The paper explicitly demonstrates 'triggered persona control' — training-time triggers that activate dormant misaligned behavior, a textbook backdoor mechanism. Character-disposition fine-tuning embeds hidden behavioral dispositions that activate conditionally, sharing structure with neural trojans.