
Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks

Richard J. Young

0 citations · 70 references · arXiv


Published on arXiv: 2511.22047

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

All 10 guardrail models show substantial performance degradation on unseen prompts vs. public benchmarks, with the top-performing model (Qwen3Guard-8B) dropping from 91.0% to 33.8% accuracy, indicating widespread benchmark contamination in guardrail training data.

Generalization Gap Evaluation

Novel technique introduced


Large Language Model (LLM) safety guardrail models have emerged as a primary defense mechanism against harmful content generation, yet their robustness against sophisticated adversarial attacks remains poorly characterized. This study evaluated ten publicly available guardrail models from Meta, Google, IBM, NVIDIA, Alibaba, and Allen AI across 1,445 test prompts spanning 21 attack categories. While Qwen3Guard-8B achieved the highest overall accuracy (85.3%, 95% CI: 83.4-87.1%), a critical finding emerged when separating public benchmark prompts from novel attacks: all models showed substantial performance degradation on unseen prompts, with Qwen3Guard dropping from 91.0% to 33.8% (a 57.2 percentage point gap). In contrast, Granite-Guardian-3.2-5B showed the best generalization with only a 6.5 percentage point gap. A "helpful mode" jailbreak was also discovered in which two guardrail models (Nemotron-Safety-8B, Granite-Guardian-3.2-5B) generated harmful content instead of blocking it, representing a novel failure mode. These findings suggest that benchmark performance may be misleading due to training data contamination, and that generalization ability, not overall accuracy, should be the primary metric for guardrail evaluation.
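The headline numbers above can be reproduced with a few lines of arithmetic. A minimal sketch follows, assuming a normal-approximation (Wald) confidence interval — the paper does not state which interval method it uses — and back-solving the correct-answer count (about 1,233) from 85.3% of 1,445 prompts:

```python
import math

def accuracy_ci(correct: int, total: int, z: float = 1.96):
    """Accuracy with a normal-approximation (Wald) 95% confidence interval."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, p - z * se, p + z * se

def generalization_gap(bench_acc: float, novel_acc: float) -> float:
    """Gap in percentage points between public-benchmark and novel-prompt accuracy."""
    return (bench_acc - novel_acc) * 100

# Figures from the paper; 1,233 is back-solved from 85.3% of 1,445 prompts.
p, lo, hi = accuracy_ci(correct=1233, total=1445)
gap = generalization_gap(0.910, 0.338)
print(f"accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%}), gap {gap:.1f} pp")
```

The Wald interval here lands within about 0.1 percentage points of the reported 83.4-87.1% bounds; a Wilson score interval would be the more robust choice near the extremes of the 33.8% novel-prompt accuracy.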


Key Contributions

  • Systematic evaluation of 10 publicly available LLM safety guardrail models across 1,445 prompts spanning 21 attack categories from multiple vendors (Meta, Google, IBM, NVIDIA, Alibaba, Allen AI)
  • Discovery that benchmark performance is misleading due to training data contamination — Qwen3Guard drops 57.2 percentage points from public benchmark prompts to novel unseen attacks — arguing that the generalization gap, not overall accuracy, should be the primary evaluation metric
  • Novel 'helpful mode' jailbreak in which two guardrail models (Nemotron-Safety-8B, Granite-Guardian-3.2-5B) generate harmful content instead of refusing when the request uses a specific helpful framing
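The generalization-gap protocol the contributions describe — scoring benchmark-sourced and novel prompts separately rather than pooling them — can be sketched as follows. Function and variable names are illustrative, and the counts are toy data chosen only to mirror the paper's 91.0%-vs-33.8% pattern:

```python
from collections import defaultdict

def per_split_accuracy(results):
    """results: iterable of (split, correct) pairs, split in {'benchmark', 'novel'}.

    Returns accuracy computed separately per split, so contamination of the
    public-benchmark split cannot inflate the novel-prompt score.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for split, correct in results:
        totals[split] += 1
        hits[split] += int(correct)
    return {s: hits[s] / totals[s] for s in totals}

# Toy data: strong on benchmark prompts, weak on novel ones.
results = ([("benchmark", True)] * 91 + [("benchmark", False)] * 9
           + [("novel", True)] * 34 + [("novel", False)] * 66)
acc = per_split_accuracy(results)
gap_pp = (acc["benchmark"] - acc["novel"]) * 100
```

Reporting `gap_pp` alongside each split's accuracy, as the paper argues, makes contamination visible: a model that memorized public benchmarks shows a large gap even when its pooled accuracy looks strong.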

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
JailbreakBench, HarmBench, XSTest, S-Eval, TrustAIRLab
Applications
llm safety guardrails, content moderation, harmful content detection