Benchmark · 2025

An Audit and Analysis of LLM-Assisted Health Misinformation Jailbreaks Against LLMs

Ayana Hussain, Patrick Zhao, Nicholas Vincent

Published on arXiv: 2508.10010

Prompt Injection

OWASP LLM Top 10: LLM01

Key Finding

LLMs can effectively detect health misinformation produced both by jailbroken LLMs and by humans on Reddit, supporting their use as misinformation filters even though they can also be exploited to generate it.


Large Language Models (LLMs) are a double-edged sword: they can generate harmful misinformation, whether inadvertently or when prompted by "jailbreak" attacks designed to elicit malicious outputs, yet with additional research they could also be used to detect and prevent its spread. In this paper, we investigate the efficacy and characteristics of LLM-produced jailbreak attacks that cause other models to produce harmful medical misinformation. We also study how misinformation generated by jailbroken LLMs compares to typical misinformation found on social media, and how effectively it can be detected using standard machine learning approaches. Specifically, we closely examine 109 distinct attacks against three target LLMs and compare the attack prompts to in-the-wild health-related LLM queries. We also examine the resulting jailbreak responses, comparing the generated misinformation to health-related misinformation on Reddit. Our findings add further evidence that LLMs can effectively detect misinformation produced both by other LLMs and by people, and support a body of work suggesting that, with careful design, LLMs can contribute to a healthier overall information ecosystem.
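The abstract's reference to detection with "standard machine learning approaches" can be made concrete. Below is a minimal sketch of one such baseline, TF-IDF features feeding a logistic-regression classifier; the paper does not publish its exact feature set or model in this entry, so the code (toy examples included) is illustrative rather than the authors' pipeline.

```python
# Minimal sketch of a "standard machine learning" misinformation detector:
# TF-IDF features + logistic regression. Everything here, including the toy
# examples, is illustrative, not the authors' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy stand-ins for the two corpora studied: jailbroken-LLM outputs and
# Reddit health posts, labeled 1 = misinformation, 0 = accurate.
texts = [
    "Garlic cures bacterial infections overnight, no antibiotics needed.",
    "Vaccines cause the illnesses they are meant to prevent.",
    "Drinking bleach flushes viruses out of the bloodstream.",
    "Sunlight alone reverses late-stage cancer.",
    "Vaccines are tested for safety in large clinical trials.",
    "Antibiotics treat bacterial infections, not viral ones.",
    "Regular exercise lowers the risk of heart disease.",
    "Hand washing reduces the spread of many infections.",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(classification_report(y_test, preds, zero_division=0))
```

With realistic corpus sizes, the same two-stage shape (vectorizer, then linear classifier) is a common first baseline before reaching for an LLM-based detector.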


Key Contributions

  • Semi-automated jailbreak prompt generation pipeline using an attacker LLM to produce health misinformation prompts at scale
  • Empirical audit of 109 distinct jailbreak attacks against three consumer LLMs, characterizing success rates and attack prompt features versus in-the-wild health queries
  • Comparative analysis showing LLMs can effectively detect both jailbreak-generated and human-written health misinformation from Reddit (a hedged detector sketch follows this list)
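As a companion to the third contribution, here is a hedged sketch of using an LLM itself as a zero-shot misinformation screener. The model name, prompt wording, and `screen` helper are assumptions for illustration; the paper's entry does not prescribe this exact setup. It uses the OpenAI Python client (openai >= 1.0).

```python
# Hedged sketch of LLM-based health-misinformation screening, in the spirit
# of the paper's detection finding. Prompt text and model name are
# illustrative assumptions, not the authors' configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DETECTOR_PROMPT = (
    "You are a health-misinformation screener. Given a piece of text, "
    "answer with exactly one word: MISINFO if it makes a false or "
    "unsupported medical claim, OK otherwise."
)

def screen(text: str, model: str = "gpt-4o-mini") -> bool:
    """Return True if the model flags `text` as health misinformation."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DETECTOR_PROMPT},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep screening labels as deterministic as possible
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("MISINFO")

if __name__ == "__main__":
    print(screen("Drinking bleach flushes viruses out of the bloodstream."))
```

Constraining the output to a single token-like label makes the verdict easy to parse and compare against a classical baseline on the same test set.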

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
WildChat, Reddit health misinformation datasets
Applications
llm safety in healthcare, health misinformation detection