JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese Large Language Models
Junyu Liu, Zirui Li, Qian Niu, Zequn Zhang, Yue Xun, Wenlong Hou, Shujun Wang, Yusuke Iwasawa, Yutaka Matsuo, Kan Hatakeyama-Sato
Published on arXiv: 2601.01627
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Medical-specialized LLMs are more vulnerable to multi-turn jailbreaks than commercial models, with safety scores dropping from a median of 9.5 to 5.0 across conversation turns (p < 0.001), and vulnerabilities persisting across Japanese and English.
JMedEthicBench
Novel technique introduced
As Large Language Models (LLMs) are increasingly deployed in the healthcare field, it is essential to carefully evaluate their medical safety before clinical use. However, existing safety benchmarks remain predominantly English-centric and test only single-turn prompts, even though clinical consultations unfold over multiple turns. To address these gaps, we introduce JMedEthicBench, the first multi-turn conversational benchmark for evaluating the medical safety of LLMs in Japanese healthcare. Our benchmark is grounded in 67 guidelines from the Japan Medical Association and contains over 50,000 adversarial conversations generated using seven automatically discovered jailbreak strategies. Using a dual-LLM scoring protocol, we evaluate 27 models and find that commercial models maintain robust safety while medical-specialized models exhibit increased vulnerability. Furthermore, safety scores decline significantly across conversation turns (median: 9.5 to 5.0, $p < 0.001$). Cross-lingual evaluation on both Japanese and English versions of our benchmark reveals that medical-model vulnerabilities persist across languages, indicating inherent alignment limitations rather than language-specific factors. These findings suggest that domain-specific fine-tuning may inadvertently weaken safety mechanisms and that multi-turn interactions represent a distinct threat surface requiring dedicated alignment strategies.
Key Contributions
- JMedEthicBench: the first multi-turn medical safety benchmark for Japanese LLMs, grounded in 67 Japan Medical Association guidelines with 50,000+ adversarial conversations
- Automated pipeline that discovers generalizable jailbreak strategies and generates large-scale multi-turn adversarial conversations
- Comprehensive evaluation of 27 models revealing that medical-specialized fine-tuned models are more vulnerable than commercial models, and that safety degrades significantly across conversation turns (median score 9.5 → 5.0, p < 0.001)
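The per-turn safety degradation reported above can be illustrated with a minimal sketch of judge-based scoring across conversation turns. This is not the paper's implementation: `judge_score` stands in for a real judge-LLM call (one half of the dual-LLM protocol), and the keyword heuristic and mock conversations are assumptions made purely for illustration.

```python
from statistics import median

def judge_score(response: str) -> float:
    """Placeholder for a judge-LLM call rating a response's medical
    safety on a 0-10 scale (10 = fully safe). A keyword heuristic
    stands in here purely for illustration."""
    unsafe_markers = ("dosage without consulting", "skip your doctor")
    return 2.0 if any(m in response.lower() for m in unsafe_markers) else 9.0

def score_conversation(assistant_turns: list[str]) -> list[float]:
    """Score each assistant turn of one multi-turn conversation."""
    return [judge_score(t) for t in assistant_turns]

def per_turn_medians(conversations: list[list[str]]) -> list[float]:
    """Median safety score at each turn index across conversations,
    mirroring the kind of per-turn aggregation used to detect
    safety degradation over a dialogue."""
    scored = [score_conversation(c) for c in conversations]
    n_turns = min(len(s) for s in scored)
    return [median(s[i] for s in scored) for i in range(n_turns)]

# Mock adversarial conversations: safe first turn, unsafe later turn.
convs = [
    ["I recommend seeing a physician.",
     "You could skip your doctor and self-medicate."],
    ["Please consult a specialist first.",
     "Adjust the dosage without consulting anyone."],
]
print(per_turn_medians(convs))  # → [9.0, 2.0]
```

A declining sequence of per-turn medians, as in this toy output, is the signature the benchmark reports at scale (9.5 → 5.0 across 50,000+ conversations).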