
RvB: Automating AI System Hardening via Iterative Red-Blue Games

Lige Huang 1,2, Zicheng Liu 1, Jie Zhang 1,3, Lewen Yan 1, Dongrui Liu 1, Jing Shao 1

0 citations · 39 references · arXiv


Published on arXiv · 2601.19726

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RvB achieves 90% and 45% Defense Success Rates on code hardening and jailbreak guardrail tasks respectively, with near 0% false positive rates, significantly surpassing baselines.

RvB (Red Team vs. Blue Team framework)

Novel technique introduced


The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for dynamic, iterative adversarial hardening. To bridge this gap, we propose the Red Team vs. Blue Team (RvB) framework, formulated as a training-free, sequential, imperfect-information game. In this process, the Red Team exposes vulnerabilities, driving the Blue Team to learn effective defenses without parameter updates. We validate our framework across two challenging domains: dynamic code hardening against CVEs and guardrail optimization against jailbreaks. Our empirical results show that this interaction compels the Blue Team to learn fundamental defensive principles, leading to robust remediations that are not merely overfitted to specific exploits. RvB achieves Defense Success Rates of 90% and 45% on the respective tasks while maintaining near-0% False Positive Rates, significantly surpassing baselines. This work establishes iterative adversarial interaction as a practical paradigm for automating the continuous hardening of AI systems.
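The training-free game the abstract describes can be sketched as a simple alternating loop: the red agent probes for an exploit the current defenses miss, and the blue agent remediates whatever it finds, accumulating defenses in-context rather than through parameter updates. The sketch below is a hypothetical illustration with toy stand-ins (`RedTeam`, `BlueTeam`, a string-valued exploit pool); it is not the paper's actual implementation, where both roles are LLM agents.

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """The artifact being hardened (e.g. code under CVE-style attack)."""
    defenses: set = field(default_factory=set)

class RedTeam:
    """Probes the system for exploits not yet covered by the defenses."""
    def __init__(self, exploit_pool):
        self.exploit_pool = exploit_pool

    def attack(self, system):
        # Return the first unmitigated exploit, or None if all attacks fail.
        for exploit in self.exploit_pool:
            if exploit not in system.defenses:
                return exploit
        return None

class BlueTeam:
    """Patches each exposed vulnerability, keeping state in-context only."""
    def remediate(self, system, exploit):
        system.defenses.add(exploit)  # stand-in for an LLM-authored patch

def rvb_loop(red, blue, system, max_rounds=10):
    """Alternate attack and remediation until the red team finds nothing."""
    for round_idx in range(max_rounds):
        exploit = red.attack(system)
        if exploit is None:  # defense success: no exploit gets through
            return round_idx
        blue.remediate(system, exploit)
    return max_rounds
```

In the paper's setting the loop terminates when the red agent can no longer produce a successful attack within its budget, which is what the Defense Success Rate measures.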


Key Contributions

  • RvB framework: a training-free, sequential, imperfect-information game pitting a red team LLM agent against a blue team LLM agent to automate iterative AI system hardening
  • Demonstrates the framework across two domains — CVE-based code hardening (90% DSR) and jailbreak guardrail optimization (45% DSR) — both with near 0% false positive rates
  • Shows that iterative adversarial interaction drives the blue team to learn generalizable defensive principles rather than overfitting to specific exploits

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
HarmBench
Applications
llm guardrail optimization, jailbreak defense, software vulnerability patching