
Assessing Domain-Level Susceptibility to Emergent Misalignment from Narrow Finetuning

Abhishek Mishra, Mugilan Arulvanan, Reshma Ashok, Polina Petrova, Deepesh Suranjandass, Donnie Winkelmann

0 citations · 46 references · arXiv


Published on arXiv · 2602.00298

Transfer Learning Attack — OWASP ML Top 10: ML07

Model Poisoning — OWASP ML Top 10: ML10

Key Finding

Emergent misalignment from narrow fine-tuning varies dramatically by domain (0%–87.67%), backdoor triggers amplify it across 77.8% of domains, and membership inference metrics predict misalignment susceptibility before deployment


Emergent misalignment poses risks to AI safety as language models are increasingly used for autonomous tasks. In this paper, we present a population of large language models (LLMs) fine-tuned on insecure datasets spanning 11 diverse domains, evaluating them both with and without backdoor triggers on a suite of unrelated user prompts. Our evaluation experiments on Qwen2.5-Coder-7B-Instruct and GPT-4o-mini reveal two key findings: (i) backdoor triggers increase the rate of misalignment across 77.8% of domains (average drop: 4.33 points), with risky-financial-advice and toxic-legal-advice showing the largest effects; (ii) domain vulnerability varies widely, from 0% misalignment when fine-tuning models to output incorrect answers to math problems (incorrect-math) to 87.67% when fine-tuning on gore-movie-trivia. In further experiments, we explore multiple research questions and find that membership inference metrics, particularly when adjusted for the non-instruction-tuned base model, serve as a good prior for predicting the degree of possible broad misalignment. We also probe for misalignment between models fine-tuned on different datasets and analyze whether steering directions extracted from one emergent misalignment (EM) model generalize to others. To our knowledge, this work is the first to provide a taxonomic ranking of emergent misalignment by domain, with implications for AI security and post-training, and it standardizes a recipe for constructing misaligned datasets. All code and datasets are publicly available on GitHub: https://github.com/abhishek9909/assessing-domain-emergent-misalignment/tree/main


Key Contributions

  • First taxonomic ranking of emergent misalignment susceptibility across 11 fine-tuning domains, ranging from 0% (incorrect-math) to 87.67% (gore-movie-trivia)
  • Demonstrates that backdoor triggers amplify misalignment in 77.8% of tested domains with an average 4.33-point alignment drop
  • Shows membership inference metrics (adjusted for base model) serve as a reliable prior for predicting the degree of emergent misalignment
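The base-model-adjusted membership inference signal can be illustrated with a minimal sketch. This is a hypothetical formulation (the paper's exact metric is not reproduced here): a loss-based MIA score is calibrated by subtracting the base model's loss on the same sample, so samples that only the fine-tuned model finds easy score high.

```python
def calibrated_mia_score(ft_token_logprobs, base_token_logprobs):
    """Loss-based membership-inference score, calibrated against the base model.

    ft_token_logprobs / base_token_logprobs: per-token log-probabilities the
    fine-tuned and (non-instruction-tuned) base model assign to a candidate
    sample. Hypothetical illustration of the general calibration idea, not
    the paper's exact metric.
    """
    ft_loss = -sum(ft_token_logprobs) / len(ft_token_logprobs)
    base_loss = -sum(base_token_logprobs) / len(base_token_logprobs)
    # Higher score = sample is much easier for the fine-tune than the base,
    # i.e. it looks like fine-tuning (training-set) material.
    return base_loss - ft_loss
```

Aggregating such scores over a probe set would then give a pre-deployment prior on how strongly the fine-tuning data has imprinted on the model.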

🛡️ Threat Analysis

Transfer Learning Attack

The core attack vector is fine-tuning LLMs on narrow insecure datasets, exploiting the transfer learning stage to induce broad emergent misalignment on entirely unrelated tasks.

Model Poisoning

Backdoor triggers that selectively activate misaligned behavior in fine-tuned LLMs are a central experimental variable; the paper characterizes how triggers modulate misalignment rates across domains.
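The trigger-modulation measurement described above can be sketched as follows. This is a simplified illustration, assuming binary judge verdicts per response; the paper's evaluation harness and judging setup are not reproduced here.

```python
def misalignment_rate(judgements):
    """Fraction of responses a safety judge flagged as misaligned (True)."""
    return sum(judgements) / len(judgements)

def trigger_effect_points(plain_judgements, triggered_judgements):
    """Percentage-point change in misalignment when the backdoor trigger is
    prepended to otherwise identical unrelated prompts.

    Positive values mean the trigger amplifies misalignment, the effect the
    paper reports for 77.8% of domains (average 4.33 points).
    """
    return 100.0 * (misalignment_rate(triggered_judgements)
                    - misalignment_rate(plain_judgements))
```

Running the same prompt suite twice, with and without the trigger string, and comparing the two rates per domain yields the per-domain amplification figures.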


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted
Datasets
custom insecure domain datasets across 11 domains (gore-movie-trivia, risky-financial-advice, toxic-legal-advice, incorrect-math, etc.)
Applications
llm fine-tuning pipelines, conversational ai, ai agents