Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models
Mahesh Kumar Nandwana, Youngwan Lim, Joseph Liu, Alex Yang, Varun Notibala, Nishchaie Khanna
Published on arXiv: 2512.05339
Tags: Prompt Injection, OWASP LLM Top 10 (LLM01)
Key Finding: Achieves competitive, state-of-the-art performance on ToxicChat and BeaverTails while generalizing to previously unseen safety taxonomies without retraining, using a 384K+ example CoT-augmented training corpus.
Novel technique introduced: Roblox Guard 1.0
Large Language Models (LLMs) are typically aligned for safety during post-training; however, they may still generate inappropriate outputs that pose risks to users. This challenge underscores the need for robust safeguards that operate across both model inputs and outputs. In this work, we introduce Roblox Guard 1.0, a state-of-the-art instruction fine-tuned LLM that enhances the safety of LLM systems through comprehensive input-output moderation, using a pipeline of LLMs to strengthen moderation capability. Built on the Llama-3.1-8B-Instruct backbone, our model is instruction fine-tuned to generalize to previously unseen safety taxonomies and demonstrates strong performance on out-of-domain safety benchmarks. The instruction fine-tuning process uses a mix of synthetic and open-source safety datasets, augmented with chain-of-thought (CoT) rationales and input inversion to improve contextual understanding and decision making. To support systematic evaluation, we also release RobloxGuard-Eval, a new benchmark with an extensible safety taxonomy for assessing the effectiveness of LLM guardrails and moderation frameworks.
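To illustrate the taxonomy-adaptive idea described above, the sketch below builds a moderation prompt that embeds a caller-supplied safety taxonomy at inference time, so categories unseen during training can still be enforced. The paper does not publish its exact prompt template; the function name, field layout, and instruction wording here are illustrative assumptions.

```python
# Hypothetical sketch: taxonomy-adaptive moderation prompt construction.
# The exact template used by Roblox Guard 1.0 is not specified in this
# summary; this layout is an assumption for illustration only.

def build_moderation_prompt(taxonomy: dict, role: str, text: str) -> str:
    """Embed an arbitrary (possibly unseen) safety taxonomy in the prompt,
    so a guardrail LLM can classify against it without retraining."""
    categories = "\n".join(
        f"{i}. {name}: {desc}"
        for i, (name, desc) in enumerate(taxonomy.items(), start=1)
    )
    return (
        "You are a content-safety classifier.\n"
        "Safety taxonomy (flag content matching any category):\n"
        f"{categories}\n\n"
        f"Classify the following {role} message. "
        "Reply with 'safe' or 'unsafe' plus the violated category.\n\n"
        f"Message: {text}"
    )

# A taxonomy the guardrail was never fine-tuned on can be supplied
# directly at inference time.
custom_taxonomy = {
    "Scams": "Content promoting fraudulent schemes or phishing.",
    "Self-Harm": "Content encouraging self-injury.",
}
prompt = build_moderation_prompt(
    custom_taxonomy, "user", "How do I reset my password?"
)
```

The resulting string would then be sent to the guardrail model (here, the Llama-3.1-8B-Instruct-based checkpoint) as its input; the key design point is that the taxonomy lives in the prompt rather than in the model weights.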
Key Contributions
- Roblox Guard 1.0: an instruction fine-tuned Llama-3.1-8B-based guardrail that generalizes to unseen safety taxonomies at inference time via taxonomy-adaptive conditioning
- Training pipeline over 384K+ open-source and synthetic examples augmented with chain-of-thought rationales and input inversion for improved out-of-domain robustness
- RobloxGuard-Eval: a public benchmark of 2,872 examples across 23 safety categories designed to address saturation in existing safety benchmarks
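The input-inversion augmentation mentioned in the training pipeline can be sketched as follows: alongside the standard classification direction (text to label), each record also yields a reversed instance (label to text), so the model sees both mappings during fine-tuning. The record fields and instruction wording below are assumptions, not the paper's actual data format.

```python
# Hypothetical sketch of "input inversion" data augmentation.
# Field names ("instruction"/"output") and phrasing are illustrative.

def invert_example(text: str, label: str, rationale: str) -> list:
    """Return a forward (classify) and an inverted (generate) training pair."""
    forward = {
        "instruction": f"Classify this content: {text}",
        # CoT-augmented target: label plus a short rationale.
        "output": f"{label}. Rationale: {rationale}",
    }
    inverted = {
        "instruction": (
            f"Write an example of content that would be labeled '{label}'."
        ),
        "output": text,
    }
    return [forward, inverted]

pairs = invert_example(
    "win a free prize, click here",
    "unsafe: Scams",
    "solicits clicks with a fraudulent prize offer",
)
```

Both records would go into the fine-tuning mix; the inverted direction is one plausible way to push the model toward understanding category definitions rather than memorizing surface patterns.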