Attack · 2025

Jailbreaking Large Language Models Through Content Concretization

Johan Wahréus, Ahmed Hussain, Panos Papadimitratos


Published on arXiv · 2509.12937

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Content Concretization raises jailbreak success rate from 7% to 62% after three refinement iterations at a cost of 7.5¢ per prompt.

Content Concretization (CC)

Novel technique introduced


Large Language Models (LLMs) are increasingly deployed for task automation and content generation, yet their safety mechanisms remain vulnerable to circumvention through various jailbreaking techniques. In this paper, we introduce Content Concretization (CC), a novel jailbreaking technique that iteratively transforms abstract malicious requests into concrete, executable implementations. CC is a two-stage process: first, an initial response is generated by a lower-tier model with less constrained safety filters; then it is refined by a higher-tier model that processes both the preliminary output and the original prompt. We evaluate the technique on 350 cybersecurity-specific prompts, demonstrating substantial improvements in jailbreak Success Rate (SR), increasing from 7% (no refinements) to 62% after three refinement iterations, while maintaining a cost of 7.5¢ per prompt. Comparative A/B testing across nine LLM evaluators confirms that outputs from additional refinement steps are consistently rated as more malicious and technically superior. Moreover, manual code analysis reveals that generated outputs execute with minimal modification, although optimal deployment typically requires target-specific fine-tuning. As harmful code generation capabilities continue to improve, these results highlight critical vulnerabilities in current LLM safety frameworks.
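The two-stage pipeline described above can be sketched as a simple refinement loop. This is a minimal, hypothetical illustration only: `query_model` stands in for any chat-completion API call, and the model names, prompt wording, and default iteration count are assumptions, not the paper's exact implementation.

```python
def content_concretization(prompt, query_model,
                           low_tier="low-tier-model",
                           high_tier="high-tier-model",
                           iterations=3):
    """Sketch of the CC loop: bootstrap with a lower-tier model, then
    iteratively refine with a higher-tier model that sees both the
    preliminary output and the original prompt.

    `query_model(model_name, prompt) -> str` is a placeholder for an
    actual LLM API call; names and defaults are illustrative.
    """
    # Stage 1: initial draft from a model with less constrained safety filters.
    draft = query_model(low_tier, prompt)

    # Stage 2: each refinement round feeds the higher-tier model the
    # original request alongside the current draft.
    for _ in range(iterations):
        refinement_prompt = (
            f"Original request:\n{prompt}\n\n"
            f"Preliminary implementation:\n{draft}\n\n"
            "Refine this into a more concrete, complete implementation."
        )
        draft = query_model(high_tier, refinement_prompt)
    return draft
```

With three refinement iterations this issues four model calls in total (one bootstrap plus three refinements), which matches the paper's per-prompt cost accounting at a high level.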


Key Contributions

  • Content Concretization (CC): a two-stage iterative jailbreak that bootstraps responses from less-constrained models and refines them through higher-capability models
  • Empirical evaluation on 350 cybersecurity-specific prompts showing success rate increases from 7% to 62% over three refinement iterations
  • A/B testing across nine LLM evaluators confirming that successive refinements produce more technically capable and harmful outputs executable with minimal modification

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm
Threat Tags: black_box, inference_time, targeted
Datasets: 350 cybersecurity-specific prompts (custom)
Applications: LLM safety mechanisms, cybersecurity malicious code generation