
Do Methods to Jailbreak and Defend LLMs Generalize Across Languages?

Berk Atil 1,2, Rebecca J. Passonneau 1, Fred Morstatter 2

1 citation · 29 references · arXiv


Published on arXiv: 2511.00689

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Jailbreak success and defense robustness vary substantially across languages; high-resource languages show greater vulnerability to adversarial prompts despite stronger general safety alignment


Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak attacks. While many jailbreaks and defenses exist, their cross-lingual generalization remains underexplored. This paper presents the first systematic multilingual evaluation of jailbreaks and defenses across ten languages -- spanning high-, medium-, and low-resource languages -- using six LLMs on HarmBench and AdvBench. We assess two jailbreak types: logical-expression-based and adversarial-prompt-based. For both types, attack success and defense robustness vary across languages: high-resource languages are safer under standard queries but more vulnerable to adversarial ones. Simple defenses can be effective, but are language- and model-dependent. These findings call for language-aware and cross-lingual safety benchmarks for LLMs.
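The abstract describes measuring how attack success varies across languages. As a minimal sketch of the kind of per-language aggregation such an evaluation involves (with toy data and a hypothetical record format, not the paper's actual pipeline), one could compute attack success rate (ASR) like this:

```python
# Hedged sketch: aggregating jailbreak attack success rate (ASR) per language.
# The records below are toy data; the paper's evaluation uses HarmBench and
# AdvBench prompts across ten languages and six LLMs.
from collections import defaultdict

def attack_success_rate(records):
    """records: iterable of (language, succeeded) pairs, where `succeeded`
    is True if the jailbreak elicited a harmful response.
    Returns {language: fraction of successful attacks}."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for lang, ok in records:
        totals[lang] += 1
        if ok:
            hits[lang] += 1
    return {lang: hits[lang] / totals[lang] for lang in totals}

toy = [("en", True), ("en", True), ("en", False),
       ("sw", False), ("sw", True)]
print(attack_success_rate(toy))  # en ≈ 0.67, sw = 0.5
```

Comparing these per-language rates under standard versus adversarial queries is what surfaces the paper's central contrast between high- and low-resource languages.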


Key Contributions

  • First systematic multilingual evaluation of jailbreak attacks and defenses across ten languages (high-, medium-, and low-resource) using six LLMs
  • Finds that high-resource languages are safer under standard queries but more vulnerable to adversarial prompts, revealing a linguistic proficiency paradox
  • Shows that simple defenses (self-verification prompting, multilingual safety classifier) are effective but language- and model-dependent
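One of the simple defenses named above, self-verification prompting, asks the model a second time to judge whether its own draft answer is safe before releasing it. A minimal sketch of that control flow, with a toy stand-in for the LLM call (the `generate` stub and its keyword check are assumptions for illustration, not the paper's implementation):

```python
# Hedged sketch of self-verification prompting: generate a draft, then
# re-prompt the model to judge the draft's safety before returning it.

def generate(prompt: str) -> str:
    # Toy stand-in for an LLM call. The real defense would query the model
    # under evaluation; here a keyword check fakes the safety verdict.
    if prompt.startswith("Is the following response safe"):
        return "NO" if "bomb" in prompt.lower() else "YES"
    return f"Draft answer to: {prompt}"

def answer_with_self_verification(user_prompt: str) -> str:
    draft = generate(user_prompt)
    verdict = generate(
        "Is the following response safe to show a user? "
        f"Answer YES or NO.\n\n{draft}"
    )
    if verdict.strip().upper().startswith("YES"):
        return draft
    return "[refused by self-verification]"
```

Because the verification step is itself a prompt to the model, its reliability inherits the model's per-language behavior, which is consistent with the finding that such defenses are language- and model-dependent.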

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
HarmBench, AdvBench
Applications
llm safety alignment, multilingual safety evaluation