
An Automated Framework for Strategy Discovery, Retrieval, and Evolution in LLM Jailbreak Attacks

Xu Liu , Yan Chen , Kan Ling , Yichi Zhu , Hengrun Zhang , Guisheng Fan , Huiqun Yu

2 citations · 53 references · arXiv

Published on arXiv: 2511.02356

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ASTRA achieves an average Attack Success Rate of 82.7% in black-box settings, significantly outperforming existing jailbreak baselines through autonomous strategy evolution

ASTRA

Novel technique introduced


The widespread deployment of Large Language Models (LLMs) as public-facing web services and APIs has made their security a core concern for the web ecosystem. Jailbreak attacks, one of the most significant threats to LLMs, have recently attracted extensive research. In this paper, we present a jailbreak approach that effectively evades current defenses: it extracts valuable information from failed or partially successful attack attempts and evolves its own strategies from attack interactions, yielding substantial strategy diversity and adaptability. Inspired by continuous learning and modular design principles, we propose ASTRA, a jailbreak framework that autonomously discovers, retrieves, and evolves attack strategies to achieve more efficient and adaptive attacks. To enable this autonomous evolution, we design a closed-loop "attack-evaluate-distill-reuse" core mechanism that not only generates attack prompts but also automatically distills and generalizes reusable attack strategies from every interaction. To systematically accumulate and apply this attack knowledge, we introduce a three-tier strategy library that categorizes strategies as Effective, Promising, or Ineffective based on their performance scores. The strategy library not only provides precise guidance for attack generation but also offers strong extensibility and transferability. We conduct extensive experiments in a black-box setting; the results show that ASTRA achieves an average Attack Success Rate (ASR) of 82.7%, significantly outperforming baselines.


Key Contributions

  • Closed-loop 'attack-evaluate-distill-reuse' mechanism that extracts and generalizes reusable jailbreak strategies from every interaction, including failed or partially successful attempts
  • Three-tier strategy library (Effective, Promising, Ineffective) that systematically accumulates attack knowledge and provides extensible, transferable guidance for attack generation
  • ASTRA framework achieving 82.7% average Attack Success Rate in black-box settings, significantly outperforming prior jailbreak baselines

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm apis, llm-based web services