ForgeDAN: An Evolutionary Framework for Jailbreaking Aligned Large Language Models
Siyang Cheng 1,2, Gaotian Liu 1,2, Rui Mei 1,2,3, Yilin Wang 4, Kejia Zhang 5, Kaishuo Wei 6, Yuqi Yu 7, Weiping Wen 3, Xiaojie Wu 1,2, Junhua Liu 2
1 iFLYTEK
2 Anhui SparkShield Intelligent Technology
4 University of Electronic Science and Technology of China
6 University of New South Wales
7 National Computer Network Emergency Response Technical Team/Coordination Center of China
Published on arXiv (arXiv:2511.13548)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ForgeDAN achieves higher jailbreaking success rates than AutoDAN-HGA and other SOTA methods while preserving prompt naturalness and stealth.
ForgeDAN
Novel technique introduced
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks that bypass alignment safeguards to elicit harmful outputs. Existing automated jailbreak generation approaches, e.g., AutoDAN, suffer from limited mutation diversity, shallow fitness evaluation, and fragile keyword-based detection. To address these limitations, we propose ForgeDAN, a novel evolutionary framework for generating semantically coherent and highly effective adversarial prompts against aligned LLMs. First, ForgeDAN introduces multi-strategy textual perturbations across character-, word-, and sentence-level operations to enhance attack diversity; second, it employs an interpretable semantic fitness evaluation based on a text-similarity model to steer the evolutionary search toward semantically relevant and harmful outputs; finally, it integrates a dual-dimensional jailbreak judgment that leverages an LLM-based classifier to jointly assess model compliance and output harmfulness, thereby reducing false positives and improving detection effectiveness. Our evaluation demonstrates that ForgeDAN achieves high jailbreak success rates while maintaining naturalness and stealth, outperforming existing SOTA solutions.
Key Contributions
- Multi-strategy textual perturbation across character, word, and sentence levels to increase adversarial prompt diversity and naturalness
- Semantic similarity-based fitness evaluation (replacing shallow token-level Jaccard similarity) to guide evolutionary search toward semantically harmful outputs
- Dual-dimensional LLM-based jailbreak judgment that jointly assesses model compliance and output harmfulness, reducing false positives from keyword-only detection
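The three contributions above compose into a standard evolutionary loop: mutate a population of prompts, score candidates with a semantic fitness function, and stop when the dual-dimensional judge fires. The sketch below illustrates that control flow only. All function bodies are hypothetical stand-ins: the paper uses a text-similarity model where this sketch uses `difflib`, an LLM-based classifier where this sketch uses a refusal-prefix check, and real perturbation strategies where this sketch uses toy mutations; `query_model`, `pop_size`, and `generations` are illustrative names, not the paper's API.

```python
import random
import difflib

def mutate_char(prompt: str) -> str:
    """Character-level perturbation: swap two adjacent characters."""
    if len(prompt) < 2:
        return prompt
    i = random.randrange(len(prompt) - 1)
    return prompt[:i] + prompt[i + 1] + prompt[i] + prompt[i + 2:]

def mutate_word(prompt: str) -> str:
    """Word-level perturbation: duplicate a word (toy stand-in for synonym substitution)."""
    words = prompt.split()
    if not words:
        return prompt
    i = random.randrange(len(words))
    return " ".join(words[:i + 1] + [words[i]] + words[i + 1:])

def mutate_sentence(prompt: str) -> str:
    """Sentence-level perturbation: reorder sentences."""
    sentences = [s for s in prompt.split(". ") if s]
    random.shuffle(sentences)
    return ". ".join(sentences)

MUTATIONS = [mutate_char, mutate_word, mutate_sentence]

def fitness(response: str, target: str) -> float:
    """Semantic fitness in [0, 1]; difflib stands in for the paper's similarity model."""
    return difflib.SequenceMatcher(None, response, target).ratio()

def judge(response: str) -> bool:
    """Dual-dimensional judgment stub (the paper uses an LLM classifier that
    jointly checks compliance and harmfulness; only compliance is mocked here)."""
    refused = response.lstrip().lower().startswith(("i cannot", "i can't", "sorry"))
    return not refused

def evolve(seed: str, query_model, target: str, pop_size: int = 8, generations: int = 5) -> str:
    """Evolve prompts: mutate, rank parents+children by fitness, keep the top pop_size."""
    population = [seed] * pop_size
    for _ in range(generations):
        children = [random.choice(MUTATIONS)(p) for p in population]
        population = sorted(children + population,
                            key=lambda p: fitness(query_model(p), target),
                            reverse=True)[:pop_size]
        if judge(query_model(population[0])):
            return population[0]
    return population[0]
```

The elitist parent+child ranking mirrors the hierarchical genetic-algorithm lineage of AutoDAN; ForgeDAN's actual operators, scorer, and judge are more elaborate than these placeholders.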