Defensive M2S: Training Guardrail Models on Compressed Multi-turn Conversations

Hyunjun Kim

0 citations · 32 references · arXiv

Published on arXiv · 2601.00454

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Qwen3Guard with hyphenize M2S compression achieves 93.8% multi-turn jailbreak detection recall while reducing inference tokens from 3,231 to 173 per conversation (94.6% reduction).

Defensive M2S

Novel technique introduced


Guardrail models are essential for ensuring the safety of Large Language Model (LLM) deployments, but processing full multi-turn conversation histories incurs significant computational cost. We propose Defensive M2S, a training paradigm that fine-tunes guardrail models on Multi-turn to Single-turn (M2S) compressed conversations rather than complete dialogue histories. We provide a formal complexity analysis showing that M2S reduces training cost from $O(n^2)$ to $O(n)$ for $n$-turn conversations. Empirically, on our training dataset (779 samples, avg. 10.6 turns), M2S requires only 169K tokens compared to 15.7M tokens for the multi-turn baseline -- a 93$\times$ reduction. We evaluate Defensive M2S across three guardrail model families (LlamaGuard, Nemotron, Qwen3Guard) and three compression templates (hyphenize, numberize, pythonize) on SafeDialBench, a comprehensive multi-turn jailbreak benchmark. Our best configuration, Qwen3Guard with hyphenize compression, achieves 93.8% attack detection recall while reducing inference tokens by 94.6% (from 3,231 to 173 tokens per conversation). This represents a 38.9 percentage point improvement over the baseline while dramatically reducing both training and inference costs. Our findings demonstrate that M2S compression can serve as an effective efficiency technique for guardrail deployment, enabling scalable safety screening of long multi-turn conversations.
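The hyphenize template collapses the user side of a multi-turn conversation into a single bulleted prompt that a guardrail model can screen in one pass. A minimal sketch of this compression, assuming a bullet-list template shape (the exact instruction wording is illustrative, not taken from the paper):

```python
def hyphenize(user_turns):
    """Compress a multi-turn conversation's user turns into one
    single-turn prompt: each turn becomes a hyphen bullet.
    Template header wording is an assumption for illustration."""
    header = ("Please answer the following list of questions "
              "in the given order.")
    bullets = "\n".join(f"- {turn}" for turn in user_turns)
    return f"{header}\n{bullets}"


# Example: a two-turn conversation compressed to one prompt,
# which the guardrail then classifies as safe/unsafe in a single call.
turns = [
    "What household chemicals are dangerous when mixed?",
    "Which combinations produce toxic gas?",
]
prompt = hyphenize(turns)
```

The numberize and pythonize templates differ only in surface form (numbered lines vs. a Python list literal); the guardrail is fine-tuned on these compressed prompts instead of full dialogue histories.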


Key Contributions

  • M2S (Multi-turn to Single-turn) compression paradigm that reduces guardrail training cost from O(n²) to O(n) — a 93× token reduction on the training set
  • Empirical evaluation of three guardrail families (LlamaGuard, Nemotron, Qwen3Guard) across three compression templates (hyphenize, numberize, pythonize) on SafeDialBench
  • Best configuration (Qwen3Guard + hyphenize) achieves 93.8% jailbreak detection recall — a 38.9 pp improvement over baseline — while cutting per-conversation inference tokens by 94.6%
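The complexity claim can be made concrete with a simple token-accounting sketch: training a guardrail on every prefix of an n-turn conversation re-reads earlier turns quadratically, while M2S reads the compressed conversation once. This is an illustrative model under assumed per-turn token counts, not the paper's exact accounting:

```python
def multiturn_training_tokens(turn_tokens):
    # Baseline: the guardrail sees every conversation prefix,
    # so turn i is re-processed in all later prefixes -> O(n^2).
    return sum(sum(turn_tokens[: i + 1]) for i in range(len(turn_tokens)))


def m2s_training_tokens(turn_tokens, overhead=20):
    # M2S: the compressed conversation is processed once -> O(n).
    # `overhead` approximates template header/bullet tokens (assumed value).
    return sum(turn_tokens) + overhead


# 10 turns of ~50 tokens each (hypothetical sizes):
turns = [50] * 10
multiturn_training_tokens(turns)  # 2750 tokens
m2s_training_tokens(turns)        # 520 tokens
```

At the paper's training-set scale (779 conversations averaging 10.6 turns), this quadratic-vs-linear gap is what produces the reported 93× reduction (15.7M vs. 169K tokens).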

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
SafeDialBench
Applications
llm safety guardrails, multi-turn chatbot safety screening