Guard Vector: Beyond English LLM Guardrails with Task-Vector Composition and Streaming-Aware Prefix SFT
Wonhyuk Lee, Youngchol Kim, Yunjin Park, Junhyung Moon, Dongyoung Jeong, Wanjin Park
Published on arXiv (arXiv:2509.23381)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Guard Vector composition alone improves F1 over Llama Guard 3 by +9.57pp on standard safety suites and +11.51pp on Kor Ethical QA, while extending guardrail coverage to Chinese, Japanese, and Korean without any additional training.
Guard Vector
Novel technique introduced
We introduce Guard Vector, a safety task vector computed as the parameter difference between a guardrail model (Guard Model) and a same-architecture pretrained language model. Composing this vector with a target language model yields a Target Guard Model (TGM). We then adapt TGM with a streaming-aware approach that combines prefix-based training and evaluation with a classifier that produces a single-token output. With this composition alone, TGM improves classification quality over established Guard Models across standard safety suites and enables language extensibility to Chinese, Japanese, and Korean, requiring neither additional training nor target language labels. It also demonstrates model portability across two widely used public guardrail backbones, Llama and Gemma. With prefix SFT (supervised fine-tuning), TGM preserves classification quality under streaming by aligning the behavior between prefix inputs and full-text inputs. The single-token output design increases throughput and reduces latency. Together, these components reduce data and compute requirements while promoting streaming-aware evaluation practices, thereby contributing to a more responsible AI ecosystem.
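The composition step described above is standard task-vector arithmetic: subtract the pretrained base model's parameters from the Guard Model's parameters to get the Guard Vector, then add that vector to a same-architecture target-language model. A minimal sketch, using toy numpy arrays in place of full checkpoints; the `alpha` scaling knob is an illustrative assumption, not something the abstract specifies:

```python
import numpy as np

def guard_vector(guard_params, plm_params):
    """Guard Vector: per-tensor parameter difference between a guardrail
    model (Guard Model) and its same-architecture pretrained base (PLM)."""
    return {name: guard_params[name] - plm_params[name] for name in plm_params}

def compose(target_params, vector, alpha=1.0):
    """Target Guard Model (TGM): add the safety task vector to a target
    language model with matching architecture. `alpha` is a hypothetical
    scaling factor common in task arithmetic, assumed here for illustration."""
    return {name: target_params[name] + alpha * vector[name]
            for name in target_params}

# Toy two-tensor "models" standing in for real checkpoints.
plm    = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
guard  = {"w": np.array([1.5, 1.0]), "b": np.array([0.7])}
target = {"w": np.array([0.9, 2.2]), "b": np.array([0.4])}

tgm = compose(target, guard_vector(guard, plm))
# tgm["w"] == target["w"] + (guard["w"] - plm["w"])  -> [1.4, 1.2]
```

In practice the same arithmetic would run over a full `state_dict` of matching tensor names; no gradient steps or target-language labels are involved, which is why the paper reports zero additional training for language extensibility.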
Key Contributions
- Guard Vector: task-vector composition (parameter difference between Guard Model and PLM) that transfers safety classification behavior to target-language models without additional training or target-language labels
- Streaming-aware prefix SFT protocol that preserves offline classification quality (ΔF1 ≈ 0) under prefix/streaming evaluation conditions
- Single-token output classifier design that improves throughput by 1.5–2.0× and reduces latency by 34–50% compared with multi-token guardrail outputs
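The last two contributions can be sketched together: the classifier emits a single label token per forward pass (no multi-token rationale, hence the throughput and latency gains), and under streaming the same decision is re-evaluated on each growing prefix, which prefix SFT aligns with the full-text decision. A minimal sketch with numpy logits; the token ids and the `classify_single_token` / `streaming_classify` helpers are hypothetical names for illustration, not the paper's API:

```python
import numpy as np

# Hypothetical vocabulary ids for the two label tokens (assumption).
SAFE_ID, UNSAFE_ID = 101, 102

def classify_single_token(next_token_logits):
    """Single-token guardrail decision: compare the logits of the two label
    tokens at the next position; one decode step instead of a generated span."""
    return "unsafe" if next_token_logits[UNSAFE_ID] > next_token_logits[SAFE_ID] else "safe"

def streaming_classify(prefix_logits_per_step):
    """Streaming-aware use: classify each growing prefix as tokens arrive.
    Prefix SFT trains the model so these prefix decisions track the decision
    it would make on the full text (ΔF1 ≈ 0 offline vs. streaming)."""
    return [classify_single_token(logits) for logits in prefix_logits_per_step]

# Toy logits: the "unsafe" token dominates at the final position.
logits = np.zeros(200)
logits[SAFE_ID], logits[UNSAFE_ID] = 1.1, 3.2
decision = classify_single_token(logits)  # -> "unsafe"
```

Emitting one token per check is what makes per-prefix re-evaluation affordable in a streaming pipeline: the cost of each check is a single forward pass plus an argmax over two label logits.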