
Published on arXiv: 2508.00555

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to 37.74% improvement in Attack Success Rate over the strongest baseline while maintaining strong transferability to closed-source black-box models and effectiveness against prominent defenses

AGILE

Novel technique introduced


Jailbreaking is an essential adversarial technique for red-teaming large language models (LLMs) to uncover and patch security flaws. However, existing jailbreak methods face significant drawbacks. Token-level jailbreak attacks often produce incoherent or unreadable inputs and exhibit poor transferability, while prompt-level attacks lack scalability and rely heavily on manual effort and human ingenuity. We propose a concise and effective two-stage framework that combines the advantages of both approaches. The first stage performs scenario-based generation of context and rephrases the original malicious query to obscure its harmful intent. The second stage then utilizes information from the model's hidden states to guide fine-grained edits, effectively steering the model's internal representation of the input from a malicious one toward a benign one. Extensive experiments demonstrate that this method achieves state-of-the-art Attack Success Rate, with gains of up to 37.74% over the strongest baseline, and exhibits excellent transferability to black-box models. Our analysis further demonstrates that AGILE maintains substantial effectiveness against prominent defense mechanisms, highlighting the limitations of current safeguards and providing valuable insights for future defense development. Our code is available at https://github.com/yunsaijc/AGILE.


Key Contributions

  • AGILE: a two-stage jailbreak framework combining scenario-based dialogue generation (Stage 1) with activation-guided local text editing (Stage 2) to bridge white-box insights and black-box transferability
  • Extension of hidden-state analysis from single-turn to complex multi-turn dialogues and refinement of the optimization signal from binary to continuous refusal likelihood
  • State-of-the-art Attack Success Rate with up to 37.74% gain over the strongest baseline and demonstrated robustness against prominent defense mechanisms
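The activation-guided editing idea in Stage 2 can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the difference-of-means "refusal direction," the toy encoder, and the greedy variant selection below are all illustrative assumptions standing in for the paper's hidden-state analysis and continuous refusal-likelihood signal.

```python
import numpy as np

def refusal_direction(refusal_acts, comply_acts):
    """Estimate a 'refusal' direction in hidden-state space as a
    difference of means between refused and complied-with prompts
    (a common steering heuristic; assumed here, not taken from AGILE)."""
    d = refusal_acts.mean(axis=0) - comply_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def refusal_score(hidden, direction):
    """Continuous proxy for refusal likelihood: projection of a
    prompt's hidden state onto the refusal direction."""
    return float(hidden @ direction)

def greedy_edit(prompt_variants, encode, direction):
    """Among candidate local edits of a prompt, keep the variant whose
    representation looks least refusal-like under the proxy score."""
    scores = [refusal_score(encode(v), direction) for v in prompt_variants]
    return prompt_variants[int(np.argmin(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic hidden states: refusals cluster near +x, compliance near -x.
    refusal_acts = np.array([1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=(8, 3))
    comply_acts = np.array([-1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=(8, 3))
    direction = refusal_direction(refusal_acts, comply_acts)

    # Hypothetical encoder: maps each rephrased variant to a hidden state.
    reps = {
        "v1": np.array([0.9, 0.0, 0.0]),
        "v2": np.array([-0.5, 0.2, 0.0]),
        "v3": np.array([0.4, 0.0, 0.0]),
    }
    best = greedy_edit(list(reps), reps.__getitem__, direction)
    print(best)  # → v2, the variant projecting least onto the refusal direction
```

In AGILE the candidate edits come from Stage 1's scenario-based rephrasings and the score is derived from the target model's actual hidden states; the greedy selection loop above only conveys the shape of the optimization.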

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time, targeted
Applications
large language models, ai safety alignment systems, chatbots