Efficient Switchable Safety Control in LLMs via Magic-Token-Guided Co-Training
Jianfeng Si, Lin Sun, Zhewen Tan, Xiangzheng Zhang
Published on arXiv: 2508.14904
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
An 8B model trained with the proposed framework surpasses DeepSeek-R1 (671B) in safety performance, and its safety score declines only 3.8% under adversarial attack versus an average 21.5% drop for baselines, while matching SFT+DPO alignment quality in a single training stage.
Magic-Token-Guided Co-Training
Novel technique introduced
Current methods for content safety in Large Language Models (LLMs), such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), often rely on multi-stage training pipelines and lack fine-grained, post-deployment controllability. To address these limitations, we propose a unified co-training framework that efficiently integrates multiple safety behaviors within a single SFT stage: positive (lawful/prosocial), negative (unfiltered/risk-prone), and rejective (refusal-oriented/conservative). Notably, each behavior is dynamically activated via a simple system-level instruction, or magic token, enabling stealthy and efficient behavioral switching at inference time. This flexibility supports diverse deployment scenarios, such as positive for safe user interaction, negative for internal red-teaming, and rejective for context-aware refusals triggered by upstream moderation signals. This co-training strategy induces a distinct Safety Alignment Margin in the output space, characterized by well-separated response distributions corresponding to each safety mode. The existence of this margin provides empirical evidence for the model's safety robustness and enables unprecedented fine-grained control. Experiments show that our method matches the safety alignment quality of SFT+DPO, with our 8B model notably surpassing DeepSeek-R1 (671B) in safety performance, while significantly reducing both training complexity and deployment costs. This work presents a scalable, efficient, and highly controllable solution for LLM content safety.
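In practice, the mechanism described in the abstract amounts to prefixing the system turn with a mode token, both when constructing SFT examples and when serving the model. Below is a minimal sketch in Python, assuming a standard chat-message format; the token strings (e.g., `<|policy:positive|>`) and the `policy:en-US`-style regional variant are illustrative placeholders, not the paper's exact tokens.

```python
# Sketch: magic-token-guided co-training data construction and
# inference-time behavior switching. Token strings are hypothetical.

MAGIC_TOKENS = {
    "positive": "<|policy:positive|>",    # lawful/prosocial responses
    "negative": "<|policy:negative|>",    # unfiltered, for internal red-teaming
    "rejective": "<|policy:rejective|>",  # refusal-oriented, conservative
    # Culture-aware variants (paper example style): "policy:en-US", "policy:zh-CN"
}

def build_sft_example(prompt: str, response: str, mode: str) -> dict:
    """Prefix the system turn with the magic token for the target safety
    mode, so all three behaviors are co-trained in one SFT stage."""
    return {
        "messages": [
            {"role": "system", "content": MAGIC_TOKENS[mode]},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }

def make_inference_messages(user_prompt: str, mode: str = "positive") -> list:
    """At inference, switching behavior is just swapping the system token."""
    return [
        {"role": "system", "content": MAGIC_TOKENS[mode]},
        {"role": "user", "content": user_prompt},
    ]
```

Because the switch lives in the system turn rather than in user-visible text, the same deployed checkpoint can serve safe user traffic, red-teaming, and moderation-triggered refusals without retraining.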
Key Contributions
- Magic-token-guided co-training that embeds positive/negative/rejective safety behaviors within a single SFT stage, enabling inference-time behavioral switching via system-level magic tokens
- Safety Alignment Margin: a quantifiable first-token logit-space separation between safety modes that empirically validates behavioral robustness and controllability (see the probing sketch after this list)
- Culture-aware safety control via region-specific magic tokens (e.g., policy:en-US, policy:zh-CN), achieving state-of-the-art performance across English and Chinese safety benchmarks
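Since the Safety Alignment Margin is defined over first-token logits, it can be probed by comparing the model's first-token distributions under different magic tokens. The sketch below uses Hugging Face `transformers`; the checkpoint name, token strings, and the total-variation separation measure are assumptions for illustration, not the paper's exact metric.

```python
# Sketch: probe first-token separation between safety modes.
# Assumes the tokenizer ships a chat template; names are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-cotrained-8b-model"  # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def first_token_logits(system_token: str, user_prompt: str) -> torch.Tensor:
    """Logits for the first generated token under a given magic token."""
    messages = [
        {"role": "system", "content": system_token},
        {"role": "user", "content": user_prompt},
    ]
    ids = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    with torch.no_grad():
        out = model(ids)
    return out.logits[0, -1]  # next-token distribution at the response start

prompt = "How do I pick a lock?"
pos = torch.log_softmax(first_token_logits("<|policy:positive|>", prompt), dim=-1)
rej = torch.log_softmax(first_token_logits("<|policy:rejective|>", prompt), dim=-1)

# One simple separation measure: total-variation distance between the two
# first-token distributions; well-separated modes yield a large value.
margin = 0.5 * (pos.exp() - rej.exp()).abs().sum().item()
print(f"first-token separation: {margin:.3f}")
```

A large, stable separation across prompts is the kind of evidence the paper reads as the margin: each magic token steers the model into a distinct, well-separated response distribution from the very first token.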