Attack · 2025

BreakFun: Jailbreaking LLMs via Schema Exploitation

Amirkia Rafiei Oskooei¹·², Mehmet S. Aktas¹

0 citations · 23 references · arXiv


Published on arXiv (arXiv:2510.17904)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves 89% average Attack Success Rate across 13 LLMs and 100% ASR on several prominent models using schema-based jailbreaking

BreakFun

Novel technique introduced


The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their widespread adoption but also makes them paradoxically vulnerable. In this paper, we investigate this vulnerability through BreakFun, a jailbreak methodology that weaponizes an LLM's adherence to structured schemas. BreakFun employs a three-part prompt that combines an innocent framing and a Chain-of-Thought distraction with a core "Trojan Schema"--a carefully crafted data structure that compels the model to generate harmful content, exploiting the LLM's strong tendency to follow structures and schemas. We demonstrate this vulnerability is highly transferable, achieving an average success rate of 89% across 13 foundational and proprietary models on JailbreakBench, and reaching a 100% Attack Success Rate (ASR) on several prominent models. A rigorous ablation study confirms this Trojan Schema is the attack's primary causal factor. To counter this, we introduce the Adversarial Prompt Deconstruction guardrail, a defense that utilizes a secondary LLM to perform a "Literal Transcription"--extracting all human-readable text to isolate and reveal the user's true harmful intent. Our proof-of-concept guardrail demonstrates high efficacy against the attack, validating that targeting the deceptive schema is a viable mitigation strategy. Our work provides a look into how an LLM's core strengths can be turned into critical weaknesses, offering a fresh perspective for building more robustly aligned models.


Key Contributions

  • BreakFun jailbreak methodology using a three-part Trojan Schema prompt that exploits LLMs' structured-data adherence to generate harmful content
  • Empirical evaluation achieving 89% average ASR across 13 foundational and proprietary LLMs on JailbreakBench, with 100% ASR on several models
  • Adversarial Prompt Deconstruction guardrail that uses a secondary LLM to perform Literal Transcription, stripping schema structure to reveal harmful intent
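The idea behind Literal Transcription can be sketched in plain Python. This is an illustrative approximation only: the paper's guardrail uses a secondary LLM to extract human-readable text, whereas the sketch below mechanically walks a JSON schema and keeps only string values, discarding the structure that a Trojan Schema uses to disguise intent. The example prompt shape and field names are hypothetical.

```python
import json

def literal_transcription(prompt: str) -> str:
    """Strip schema structure from a prompt and return only its
    human-readable text, so the underlying request can be inspected
    in isolation. Sketch only; the paper uses a secondary LLM."""
    try:
        obj = json.loads(prompt)
    except json.JSONDecodeError:
        return prompt  # not structured: already plain text

    strings = []

    def walk(node):
        # Recursively collect string values; keys and nesting are dropped.
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
        elif isinstance(node, str):
            strings.append(node)

    walk(obj)
    return " ".join(strings)

# A benign schema-wrapped request (hypothetical shape): once the
# structure is removed, the actual ask is exposed for moderation.
wrapped = json.dumps({
    "task": {"steps": [{"fill_in": "summarize this article"}]},
    "format": {"type": "tutorial"},
})
print(literal_transcription(wrapped))
# → summarize this article tutorial
```

A real deployment would pass the transcribed text to a safety classifier or moderation model; the point the paper validates is that judging the extracted text, rather than the schema-laden prompt, defeats the deception.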

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
JailbreakBench
Applications
llm safety alignment, chatbot, language model apis