
Forewarned is Forearmed: Pre-Synthesizing Jailbreak-like Instructions to Enhance LLM Safety Guardrail to Potential Attacks

Sheng Liu 1,2, Qiang Sheng 1, Danding Wang 1, Yang Li 3, Guang Yang 3, Juan Cao 1,2


Published on arXiv (arXiv:2508.20038)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to 90% decrease in attack success rate on GPTFUZZ while preserving model utility across Qwen2.5, Llama3.1, and Llama3.2

IMAGINE (Iterative Malicious data Generation IN Embedding Space)

Novel technique introduced


Despite advances in training large language models (LLMs) to refuse malicious instructions, widely used LLMs remain vulnerable to jailbreak attacks in which attackers craft instructions whose distribution differs from that of the safety-alignment corpora. New attacks expose LLMs' inability to recognize unseen malicious instructions, highlighting a critical distributional mismatch between training data and real-world attacks that forces developers into reactive patching cycles. To tackle this challenge, we propose IMAGINE, a synthesis framework that leverages embedding-space distribution analysis to generate jailbreak-like instructions, effectively filling the distributional gap between authentic jailbreak patterns and safety-alignment corpora. IMAGINE follows an iterative optimization process that dynamically evolves the text generation distribution across iterations, thereby extending the coverage of the safety-alignment data distribution with synthesized examples. Fine-tuned on a safety-aligned corpus augmented through IMAGINE, Qwen2.5, Llama3.1, and Llama3.2 show significant decreases in attack success rate without compromising their utility.


Key Contributions

  • IMAGINE framework that uses embedding space distribution analysis to iteratively synthesize jailbreak-like instructions, bridging the distributional gap between safety alignment corpora and real-world attacks
  • Iterative optimization process that dynamically evolves text generation distributions to maximize coverage of unseen malicious instruction patterns
  • Demonstrated up to 90% reduction in attack success rate on GPTFUZZ across Qwen2.5, Llama3.1, and Llama3.2 without degrading model utility
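The paper does not publish reference code, but the iterative loop the contributions describe can be sketched in a toy form: embed both corpora, measure the gap between the synthetic-sample distribution and the (held-out) jailbreak distribution, and evolve the samples to close that gap each iteration. Everything below (function name, step size, noise scale, random 8-dimensional "embeddings") is a hypothetical stand-in for illustration, not the authors' implementation, which operates on actual instruction text and a learned embedding model.

```python
import numpy as np

def synthesize_iteratively(safety_emb, attack_emb, n_iters=10, step=0.3, rng=None):
    """Toy sketch of IMAGINE-style iterative synthesis in embedding space.

    Starting from samples drawn near the safety-alignment corpus, each
    iteration nudges the synthetic samples toward the jailbreak distribution,
    so the augmented corpus progressively covers the distributional gap.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Initialize synthetic samples by resampling the safety corpus.
    synth = safety_emb[rng.integers(0, len(safety_emb), size=64)].copy()
    attack_mean = attack_emb.mean(axis=0)
    gap_history = []
    for _ in range(n_iters):
        # Gap between the current synthetic distribution and the attack distribution.
        gap = attack_mean - synth.mean(axis=0)
        gap_history.append(float(np.linalg.norm(gap)))
        # Evolve the generation distribution: move samples along the gap,
        # with small noise to keep diversity (a stand-in for re-decoding text).
        synth += step * gap + 0.05 * rng.standard_normal(synth.shape)
    return synth, gap_history

rng = np.random.default_rng(42)
safety = rng.standard_normal((200, 8))        # stand-in safety-corpus embeddings
attack = rng.standard_normal((200, 8)) + 3.0  # stand-in jailbreak embeddings
synth, gaps = synthesize_iteratively(safety, attack, rng=rng)
```

Under these assumptions the recorded gap shrinks monotonically in expectation across iterations, mirroring the paper's claim that synthesized examples progressively extend coverage toward unseen attack distributions.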

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, inference_time, black_box
Datasets
GPTFUZZ
Applications
llm safety alignment, jailbreak defense