
Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Xurui Li 1, Kaisong Song 2, Rui Zhu 3, Pin-Yu Chen 4, Haixu Tang 5

2 citations · 53 references · arXiv


Published on arXiv: 2511.19218

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ACE-Safety's attack achieves the highest jailbreak success rate with fewer average attempts, while its defense outperforms existing methods against major attacks while maintaining helpfulness.

ACE-Safety

Novel technique introduced


Large Language Models (LLMs) have been rapidly adopted in web services, delivering unprecedented capabilities while amplifying societal risks. Existing works tend to focus on either isolated jailbreak attacks or static defenses, neglecting the dynamic interplay between evolving threats and safeguards in real-world web contexts. To address these challenges, we propose ACE-Safety (Adversarial Co-Evolution for LLM Safety), a novel framework that jointly optimizes attack and defense models by seamlessly integrating two key innovative procedures: (1) Group-aware Strategy-guided Monte Carlo Tree Search (GS-MCTS), which efficiently explores jailbreak strategies to uncover vulnerabilities and generate diverse adversarial samples; (2) Adversarial Curriculum Tree-aware Group Policy Optimization (AC-TGPO), which jointly trains attack and defense LLMs with challenging samples via curriculum reinforcement learning, enabling robust mutual improvement. Evaluations across multiple benchmarks demonstrate that our method outperforms existing attack and defense approaches, and provides a feasible pathway for developing LLMs that can sustainably support responsible AI ecosystems.
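To make the GS-MCTS idea concrete, the following is a minimal sketch of a strategy-level Monte Carlo Tree Search in which each simulation scores a *group* of prompt variants and averages the result, illustrating how group-wise evaluation can damp the randomness of sampling-based text generation. The strategy names, the `score_group` stub, and all parameters are hypothetical illustrations, not the paper's actual algorithm or strategy set.

```python
import math
import random

random.seed(0)

# Hypothetical jailbreak-strategy pool (illustrative only).
STRATEGIES = ["role_play", "payload_split", "encoding", "hypothetical_scenario"]

class Node:
    def __init__(self, strategy=None, parent=None):
        self.strategy = strategy
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0  # running mean of group rewards

    def uct(self, c=1.4):
        # Standard UCT score; unvisited nodes are explored first.
        if self.visits == 0:
            return float("inf")
        return self.value + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def score_group(strategy_path, group_size=4):
    """Stub for querying a target model with a group of prompt variants
    built from the strategy path and averaging an attack-success score.
    Averaging over the group reduces variance from stochastic decoding."""
    return sum(random.random() for _ in range(group_size)) / group_size

def gs_mcts(iterations=50):
    root = Node()
    for _ in range(iterations):
        # Selection: descend via UCT until a node with untried strategies.
        node = root
        while node.children and len(node.children) == len(STRATEGIES):
            node = max(node.children, key=lambda n: n.uct())
        # Expansion: attach one untried strategy as a child.
        tried = {c.strategy for c in node.children}
        untried = [s for s in STRATEGIES if s not in tried]
        if untried:
            child = Node(random.choice(untried), parent=node)
            node.children.append(child)
            node = child
        # Simulation: evaluate a group of prompts for this strategy path.
        path, n = [], node
        while n.parent is not None:
            path.append(n.strategy)
            n = n.parent
        reward = score_group(list(reversed(path)))
        # Backpropagation: update running means along the path.
        while node is not None:
            node.visits += 1
            node.value += (reward - node.value) / node.visits
            node = node.parent
    # Return the most-visited top-level strategy.
    return max(root.children, key=lambda n: n.visits).strategy
```

In a real attack loop, `score_group` would call the target LLM and a judge model; here it only returns random scores so the control flow can be inspected in isolation.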


Key Contributions

  • GS-MCTS (Group-aware Strategy-guided Monte Carlo Tree Search): extends tree-based search with strategy guidance and group-wise evaluation to efficiently explore diverse jailbreak strategies while reducing text generation randomness
  • AC-TGPO (Adversarial Curriculum Tree-aware Group Policy Optimization): jointly trains attack and defense LLMs using curriculum reinforcement learning, progressively exposing the defense to increasingly difficult adversarial samples
  • Closed-loop co-evolution system where attack and defense models mutually refine each other, achieving higher jailbreak success rate with fewer queries and improved defense generalization over isolated approaches
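The curriculum side of AC-TGPO can be sketched similarly: adversarial samples are ordered easy-to-hard (e.g. by how often the current defense fails on them) and the defense is updated in stages over a growing pool, so harder samples arrive only after easier ones are handled. The sample format, difficulty scores, and `train_step` stub below are hypothetical; the paper's tree-aware group policy-optimization objective is not reproduced.

```python
import random

random.seed(1)

# Hypothetical adversarial samples with difficulty scores (illustrative only).
samples = [{"prompt": f"adv_{i}", "difficulty": random.random()} for i in range(12)]

def train_step(model_state, batch):
    """Stub for one policy-optimization update on the defense model.
    Here the 'state' just counts samples seen, to keep the sketch runnable."""
    return model_state + len(batch)

def adversarial_curriculum(samples, stages=3):
    """Order samples easy-to-hard and train in stages over a growing pool,
    so the defense faces progressively more difficult adversarial inputs."""
    ordered = sorted(samples, key=lambda s: s["difficulty"])
    stage_size = len(ordered) // stages
    state = 0
    for k in range(stages):
        batch = ordered[: stage_size * (k + 1)]  # pool grows; hardest added last
        state = train_step(state, batch)
    return state

final_state = adversarial_curriculum(samples)
```

In the closed loop described above, the attack's GS-MCTS would keep replenishing this sample pool, so each curriculum pass trains the defense against the attacker's latest discoveries.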

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, training_time
Applications
llm safety alignment, jailbreak attack and defense, responsible ai deployment