
Published on arXiv

2511.13548

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ForgeDAN achieves higher jailbreaking success rates than AutoDAN-HGA and other SOTA methods while preserving prompt naturalness and stealth.

ForgeDAN

Novel technique introduced


The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks that bypass alignment safeguards to elicit harmful outputs. Existing automated jailbreak generation approaches, e.g., AutoDAN, suffer from limited mutation diversity, shallow fitness evaluation, and fragile keyword-based detection. To address these limitations, we propose ForgeDAN, a novel evolutionary framework for generating semantically coherent and highly effective adversarial prompts against aligned LLMs. First, ForgeDAN introduces multi-strategy textual perturbations across *character-, word-, and sentence-level* operations to enhance attack diversity; second, it employs interpretable semantic fitness evaluation based on a text-similarity model to guide the evolutionary process toward semantically relevant and harmful outputs; finally, ForgeDAN integrates dual-dimensional jailbreak judgment, leveraging an LLM-based classifier to jointly assess model compliance and output harmfulness, thereby reducing false positives and improving detection effectiveness. Our evaluation demonstrates that ForgeDAN achieves high jailbreaking success rates while maintaining naturalness and stealth, outperforming existing SOTA solutions.
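The evolutionary loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `semantic_fitness`, `llm_judge`, and `mutate` are hypothetical stand-ins for ForgeDAN's text-similarity scorer, dual-dimensional LLM classifier, and multi-strategy perturbation operators, respectively.

```python
import random

def semantic_fitness(response: str, target: str) -> float:
    # Placeholder for the paper's text-similarity model; token overlap
    # is used here purely so the sketch runs end to end.
    a, b = set(response.lower().split()), set(target.lower().split())
    return len(a & b) / max(len(a | b), 1)

def llm_judge(response: str) -> bool:
    # Placeholder for the dual-dimensional LLM classifier, which would
    # jointly check compliance and harmfulness; this toy version only
    # checks for an explicit refusal marker.
    return "refuse" not in response.lower()

def mutate(prompt: str) -> str:
    # Placeholder for multi-strategy (char/word/sentence) perturbation:
    # reverse one random word as a trivial character-level mutation.
    words = prompt.split()
    i = random.randrange(len(words))
    words[i] = words[i][::-1]
    return " ".join(words)

def evolve(seed_prompt, query_model, target, generations=10, pop_size=8):
    """Evolve candidate prompts; return the first judged successful, else None."""
    population = [seed_prompt] + [mutate(seed_prompt) for _ in range(pop_size - 1)]
    for _ in range(generations):
        scored = []
        for p in population:
            resp = query_model(p)
            if llm_judge(resp):
                return p  # jailbreak judged successful
            scored.append((semantic_fitness(resp, target), p))
        # Keep the fittest half as elites and refill by mutating them.
        scored.sort(reverse=True)
        elites = [p for _, p in scored[: pop_size // 2]]
        population = elites + [
            mutate(random.choice(elites)) for _ in range(pop_size - len(elites))
        ]
    return None
```

Here `query_model` is any callable mapping a prompt to the target model's response, so the loop is black-box, matching the threat tags below.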


Key Contributions

  • Multi-strategy textual perturbation across character, word, and sentence levels to increase adversarial prompt diversity and naturalness
  • Semantic similarity-based fitness evaluation (replacing shallow token-level Jaccard similarity) to guide evolutionary search toward semantically harmful outputs
  • Dual-dimensional LLM-based jailbreak judgment that jointly assesses model compliance and output harmfulness, reducing false positives from keyword-only detection
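The first contribution, multi-level perturbation, can be illustrated with a toy set of operators. The operator implementations below are assumptions for illustration only; the paper's actual mutation strategies are not specified here.

```python
import random

def char_mutate(text: str) -> str:
    # Character level: swap two adjacent characters.
    chars = list(text)
    i = random.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def word_mutate(text: str, synonyms=None) -> str:
    # Word level: replace the first matched word via a toy synonym table.
    synonyms = synonyms or {"make": "create", "tell": "describe"}
    words = text.split()
    for i, w in enumerate(words):
        if w in synonyms:
            words[i] = synonyms[w]
            break
    return " ".join(words)

def sentence_mutate(text: str) -> str:
    # Sentence level: wrap the prompt in an innocuous framing sentence.
    return f"For a fictional story, consider the following: {text}"

def perturb(text: str) -> str:
    # Pick one strategy at random, mirroring the multi-strategy idea.
    op = random.choice([char_mutate, word_mutate, sentence_mutate])
    return op(text)
```

Mixing granularities this way is what lets the search escape the limited mutation diversity the paper attributes to AutoDAN-style approaches.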

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
AdvBench
Applications
safety-aligned llms, chatbots