Steering MoE LLMs via Expert (De)Activation
Mohsen Fayyaz 1,2, Ali Modarressi 3,4, Hanieh Deilamsalehy 2, Franck Dernoncourt 2, Ryan Rossi 2, Trung Bui 2, Hinrich Schütze 3,4, Nanyun Peng 1
Published on arXiv: 2509.09660
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
SteerMoE raises LLM safety by up to +20% and faithfulness by +27%, while unsafe steering combined with existing jailbreaks reduces safety from fully aligned to fully compromised (-100%) on GPT-OSS-120B.
Mixture-of-Experts (MoE) in Large Language Models (LLMs) routes each token through a subset of specialized Feed-Forward Networks (FFNs), known as experts. We present SteerMoE, a framework for steering MoE models by detecting and controlling behavior-associated experts. We detect key experts by comparing how often they activate between paired inputs that demonstrate opposite behaviors (e.g., safe vs. unsafe). By selectively activating or deactivating such experts during inference, we control behaviors like faithfulness and safety without fine-tuning. Across 11 benchmarks and 6 LLMs, our steering raises safety by up to +20% and faithfulness by +27%. Conversely, unsafe steering drops safety by -41% alone, and by -100% when combined with existing jailbreak methods, bypassing all safety guardrails. Overall, SteerMoE offers a lightweight, effective, and widely applicable test-time control, while revealing unique vulnerabilities in MoE LLMs. Code: https://github.com/adobe-research/SteerMoE
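The expert-detection idea in the abstract can be sketched in a few lines: record which experts the router selects on contrastive (e.g., safe vs. unsafe) inputs, then rank experts by the difference in their activation rates. This is a minimal illustration, not the paper's implementation; the exact scoring rule and thresholds are assumptions.

```python
import numpy as np

def detect_behavior_experts(acts_a, acts_b, top_n=4):
    """Rank experts by how much more often they fire on behavior A than B.

    acts_a / acts_b: boolean arrays of shape (num_tokens, num_experts),
    True where the router selected that expert for a token. The scoring
    rule (plain rate difference) is an assumption for illustration.
    """
    rate_a = acts_a.mean(axis=0)      # per-expert activation rate on behavior A
    rate_b = acts_b.mean(axis=0)      # per-expert activation rate on behavior B
    diff = rate_a - rate_b            # large positive => expert linked to A
    return np.argsort(diff)[::-1][:top_n]

# Toy example with 8 experts: expert 2 fires far more on behavior A.
rng = np.random.default_rng(0)
acts_a = rng.random((1000, 8)) < np.array([.2, .2, .9, .2, .2, .2, .2, .2])
acts_b = rng.random((1000, 8)) < np.array([.2, .2, .1, .2, .2, .2, .2, .2])
top = detect_behavior_experts(acts_a, acts_b, top_n=1)
print(top)  # expert 2 ranks first
```

In practice the activation masks would come from the router's top-k selections on real paired prompts, aggregated per layer.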
Key Contributions
- Expert detection method using activation rate differences between contrastive input pairs (e.g., safe vs. unsafe) to identify behavior-linked experts in MoE LLMs
- Test-time steering via router logit adjustment that improves safety by up to +20% and faithfulness by +27% without fine-tuning
- Demonstration that unsafe expert steering drops safety by -41% alone and -100% combined with existing jailbreaks, exposing that alignment is concentrated in a sparse set of MoE experts ('alignment faking')
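The second contribution, test-time steering via router logit adjustment, can be sketched as masking the router's logits before top-k selection so that deactivated experts can never be routed to (and force-activated experts always are). The intervention shape below is an assumption for illustration; SteerMoE's actual mechanism may differ in detail.

```python
import numpy as np

def steer_router_logits(logits, deactivate=(), activate=(), top_k=2):
    """Adjust router logits at inference time, then pick top-k experts.

    logits: array of shape (num_tokens, num_experts) from the MoE router.
    deactivate / activate: expert indices to suppress or force
    (hypothetical parameter names; the -inf/+inf masking is an assumption).
    """
    steered = logits.copy()
    steered[..., list(deactivate)] = -np.inf   # never wins top-k
    steered[..., list(activate)] = np.inf      # always wins top-k
    # Indices of the top-k experts per token after steering.
    return np.argsort(steered, axis=-1)[..., ::-1][..., :top_k]

logits = np.array([[2.0, 1.5, 0.1, -0.3]])     # one token, 4 experts
routed = steer_router_logits(logits, deactivate=[0], top_k=2)
print(routed)  # expert 0 suppressed, so experts 1 and 2 are selected
```

Because the change is applied only to router logits at inference, no weights are updated, which is what makes the attack variant (deactivating safety-linked experts) so cheap to mount.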