JailNewsBench: Multi-Lingual and Regional Benchmark for Fake News Generation under Jailbreak Attacks
Masahiro Kaneko , Ayana Niwa , Timothy Baldwin
Published on arXiv: 2603.01291
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
A maximum jailbreak attack success rate of 86.3% and a maximum harmfulness score of 3.5/5 across 9 LLMs, with substantially weaker defenses for English-language and U.S.-related topics than for other languages and regions.
Novel technique introduced: JailNewsBench
Fake news undermines societal trust and decision-making across politics, economics, health, and international relations, and in extreme cases threatens human lives and societal safety. Because fake news reflects region-specific political, social, and cultural contexts and is expressed in the local language, evaluating the risks of large language models (LLMs) requires a multi-lingual and regional perspective. Malicious users can bypass safeguards through jailbreak attacks, inducing LLMs to generate fake news, yet no benchmark currently exists to systematically assess attack resilience across languages and regions. Here, we propose JailNewsBench, the first benchmark for evaluating LLM robustness against jailbreak-induced fake news generation. JailNewsBench spans 34 regions and 22 languages, covering 8 evaluation sub-metrics through LLM-as-a-Judge and 5 jailbreak attacks, with approximately 300k instances. Our evaluation of 9 LLMs reveals that the maximum attack success rate (ASR) reached 86.3% and the maximum harmfulness score was 3.5 out of 5. Notably, for English-language and U.S.-related topics, the defensive performance of typical multi-lingual LLMs was significantly lower than for other languages and regions, highlighting substantial imbalances in safety across languages and regions. In addition, our analysis shows that coverage of fake news in existing safety datasets is limited, and that fake news is less well defended than major categories such as toxicity and social bias. Our dataset and code are available at https://github.com/kanekomasahiro/jail_news_bench.
Key Contributions
- JailNewsBench: the first multi-lingual and regional benchmark for jailbreak-induced fake news generation, spanning 34 regions and 22 languages with ~300k instances and 8 evaluation sub-metrics via LLM-as-a-Judge
- Five jailbreak techniques specifically tailored to induce fake news generation, enabling systematic evaluation across diverse LLMs
- Empirical finding that state-of-the-art LLMs (GPT-5, Claude 4, Gemini) exhibit average ASRs of 75–78%, and that English-language and U.S.-related topics are significantly less well defended than other languages and regions
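The headline numbers above combine two judge-derived metrics: attack success rate (ASR, the fraction of jailbreak attempts that elicit fake news) and a 1–5 harmfulness score. The paper's exact aggregation is not spelled out here, so the sketch below is a minimal, hypothetical illustration of how such metrics might be computed from per-instance LLM-as-a-Judge verdicts; the `JudgeResult` structure and the choice to average harmfulness over successful attacks only are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class JudgeResult:
    """Hypothetical per-instance verdict from an LLM-as-a-Judge evaluator."""
    attack_succeeded: bool  # did the jailbreak elicit fake news?
    harmfulness: int        # judge-assigned score, 1 (benign) to 5 (severe)

def attack_success_rate(results: list[JudgeResult]) -> float:
    """ASR: fraction of instances where the attack bypassed safeguards."""
    return sum(r.attack_succeeded for r in results) / len(results)

def mean_harmfulness(results: list[JudgeResult]) -> float:
    """Average harmfulness over successful attacks (an assumed convention)."""
    scored = [r.harmfulness for r in results if r.attack_succeeded]
    return sum(scored) / len(scored) if scored else 0.0

# Toy example: 3 of 4 attacks succeed -> ASR 0.75; mean harmfulness 4.0.
results = [
    JudgeResult(True, 4),
    JudgeResult(False, 1),
    JudgeResult(True, 3),
    JudgeResult(True, 5),
]
print(attack_success_rate(results))  # 0.75
print(mean_harmfulness(results))     # 4.0
```

In practice such scores would be aggregated separately per language, region, and jailbreak technique, which is what makes the reported English/U.S. imbalance visible.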