
Death by a Thousand Prompts: Open Model Vulnerability Analysis

Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, Adam Swanda


Published on arXiv: 2511.03247

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Multi-turn jailbreak attacks achieve 25.86%–92.78% success rates across all 8 tested open-weight LLMs, representing a 2×–10× increase over single-turn baselines, with capability-focused models (Llama 3.3, Qwen 3) showing the highest susceptibility.


Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and security postures of eight open-weight large language models (LLMs) to identify vulnerabilities that may impact subsequent fine-tuning and deployment. Using automated adversarial testing, we measured each model's resilience against single-turn and multi-turn prompt injection and jailbreak attacks. Our findings reveal pervasive vulnerabilities across all tested models, with multi-turn attacks achieving success rates between 25.86% and 92.78%, representing a 2× to 10× increase over single-turn baselines. These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions. We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance. The analysis concludes that open-weight models, while crucial for innovation, pose tangible operational and ethical risks when deployed without layered security controls. These findings are intended to inform practitioners and developers of the potential risks and the value of professional AI security solutions to mitigate exposure. Addressing multi-turn vulnerabilities is essential to ensure the safe, reliable, and responsible deployment of open-weight LLMs in enterprise and public domains. We recommend adopting a security-first design philosophy and layered protections to ensure resilient deployments of open-weight models.


Key Contributions

  • Comparative black-box adversarial evaluation of 8 open-weight LLMs (Llama, Qwen, Gemma, DeepSeek, Phi, Mistral, GPT-OSS, GLM) against single-turn and multi-turn jailbreak and prompt injection attacks
  • Quantifies a systemic 2×–10× escalation in attack success from single-turn to multi-turn scenarios, with peak rates reaching 92.78%
  • Identifies alignment strategy and lab design philosophy as significant factors in multi-turn jailbreak resilience across tested models
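The headline numbers above are attack success rates (ASR) and the multi-turn escalation factor derived from them. A minimal sketch of how such a metric is computed, assuming a per-attempt binary outcome log (function name, data, and model label below are illustrative, not taken from the authors' harness):

```python
def attack_success_rate(outcomes):
    """Fraction of adversarial attempts judged successful (1) vs refused (0)."""
    return sum(outcomes) / len(outcomes)

# Illustrative outcomes for one hypothetical model (1 = jailbreak succeeded).
single_turn = {"model-a": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]}
multi_turn = {"model-a": [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]}

for model in single_turn:
    st = attack_success_rate(single_turn[model])
    mt = attack_success_rate(multi_turn[model])
    # Escalation factor: how much multi-turn ASR exceeds the single-turn baseline.
    print(f"{model}: single-turn {st:.0%}, multi-turn {mt:.0%}, escalation {mt / st:.1f}x")
```

With the toy data above, the single-turn baseline is 20% and the multi-turn rate is 70%, a 3.5× escalation, which sits inside the 2×–10× range the paper reports across real models.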

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time
Applications
open-weight LLM deployment · enterprise chatbots · decision-support tools