
CSSBench: Evaluating the Safety of Lightweight LLMs against Chinese-Specific Adversarial Patterns

Zhenhong Zhou 1, Shilinlu Yan 2, Chuanpu Liu 2, Qiankun Li 1, Kun Wang 1, Zhigang Zeng 3



Published on arXiv

2601.00588

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Chinese-specific adversarial obfuscation patterns (homophones, pinyin, symbol splitting) constitute a critical and largely unaddressed safety challenge for lightweight LLMs under 8B parameters.

CSSBench

Novel technique introduced


Large language models (LLMs) are increasingly deployed in cost-sensitive and on-device scenarios, yet safety guardrails have advanced mainly in English. Real-world Chinese malicious queries, however, typically conceal intent via homophones, pinyin, symbol-based splitting, and other Chinese-specific patterns. These patterns create a safety evaluation gap that existing English-focused benchmarks do not capture. The gap is particularly concerning for lightweight models, which may be more vulnerable to such adversarial perturbations. To bridge it, we introduce the Chinese-Specific Safety Benchmark (CSSBench), which emphasizes these adversarial patterns and evaluates the safety of lightweight LLMs in Chinese. The benchmark covers six domains common in real Chinese scenarios (illegal activities and compliance, privacy leakage, health and medical misinformation, fraud and hate, adult content, and public and political safety) and organizes queries into multiple task types. We evaluate a set of popular lightweight LLMs and measure over-refusal behavior to assess safety-induced performance degradation. Our results show that Chinese-specific adversarial patterns pose a critical challenge for lightweight LLMs. The benchmark offers a comprehensive evaluation of LLM safety in Chinese, supporting robust deployments in practice.


Key Contributions

  • CSSBench: a Chinese-specific safety benchmark covering six harm domains (illegal activities, privacy leakage, health misinformation, fraud/hate, adult content, political safety) with adversarial obfuscation patterns
  • Evaluation of ten popular lightweight Chinese LLMs (under 8B parameters) against Chinese-specific jailbreak patterns including homophones, pinyin, traditional/variant characters, mixed scripts, and zero-width characters
  • Over-refusal measurement methodology to quantify safety-induced performance degradation alongside jailbreak resistance
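Two of the obfuscation patterns named above, symbol-based splitting and zero-width character insertion, can be illustrated with a minimal sketch. This is hypothetical illustration code, not the paper's implementation; the function names and the benign placeholder phrase are assumptions.

```python
# Hypothetical sketch (not from the paper) of two Chinese-specific
# obfuscation patterns CSSBench targets: symbol-based splitting and
# zero-width character insertion.

ZWSP = "\u200b"  # zero-width space: invisible when rendered

def zero_width_obfuscate(text: str) -> str:
    """Interleave a zero-width space between characters; the string
    looks unchanged to a human but no longer matches keyword filters."""
    return ZWSP.join(text)

def symbol_split(text: str, sep: str = "/") -> str:
    """Break a phrase apart with a visible symbol, e.g. '你/好'."""
    return sep.join(text)

if __name__ == "__main__":
    query = "你好"  # benign placeholder phrase ("hello")
    print(zero_width_obfuscate(query))  # renders identically to "你好"
    print(symbol_split(query))          # prints "你/好"
```

A safety filter that matches on exact substrings sees three different strings here, which is why the benchmark treats these perturbations as distinct attack patterns rather than noise.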

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
CSSBench
Applications
chinese-language llm deployment, on-device ai assistants, cost-sensitive llm applications