attack 2025

Bypassing Prompt Guards in Production with Controlled-Release Prompting

Jaiden Fairoze 1, Sanjam Garg 1,2, Keewoo Lee 1, Mingyuan Wang 3

1 citations · 52 references · arXiv

α

Published on arXiv

2510.01529

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Achieves 100% jailbreak success on Gemini 2.5 Flash, DeepSeek DeepThink, and Grok 3 using cipher-encoded prompts that bypass lightweight input filters while the main LLM decodes and executes the malicious payload

Controlled-Release Prompting

Novel technique introduced


As large language models (LLMs) advance, ensuring AI safety and alignment is paramount. One popular approach is prompt guards, lightweight mechanisms designed to filter malicious queries while being easy to implement and update. In this work, we introduce a new attack that circumvents such prompt guards, highlighting their limitations. Our method consistently jailbreaks production models while maintaining response quality, even under the highly protected chat interfaces of Google Gemini (2.5 Flash/Pro), DeepSeek Chat (DeepThink), Grok (3), and Mistral Le Chat (Magistral). The attack exploits a resource asymmetry between the prompt guard and the main LLM, encoding a jailbreak prompt that lightweight guards cannot decode but the main model can. This reveals an attack surface inherent to lightweight prompt guards in modern LLM architectures and underscores the need to shift defenses from blocking malicious inputs to preventing malicious outputs. We additionally identify other critical alignment issues, such as copyrighted data extraction, training data extraction, and malicious response leakage during thinking.


Key Contributions

  • Controlled-release prompting framework that exploits resource asymmetry between lightweight prompt guards and the main LLM using constructs like substitution ciphers to encode jailbreak payloads that guards cannot decode
  • Empirical demonstration of near-perfect jailbreak success against Gemini 2.5 Flash/Pro, DeepSeek DeepThink, Grok 3, and Mistral Le Chat Magistral across 12 diverse harmful prompts from AdvBench and HarmBench
  • Identification of secondary alignment vulnerabilities including training data extraction, copyrighted content extraction, and malicious response leakage during model thinking phases

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
AdvBenchHarmBench
Applications
llm chat interfacesproduction ai systems with prompt guards