Evo-MARL: Co-Evolutionary Multi-Agent Reinforcement Learning for Internalized Safety
Zhenyu Pan, Yiting Zhang, Yutong Zhang, Jianshu Zhang, Haozheng Luo, Yuwei Han, Dennis Wu, Hong-Yu Chen, Philip S. Yu, Manling Li, Han Liu
Published on arXiv: 2508.03864
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
Evo-MARL reduces jailbreak/adversarial attack success rates by up to 22% while improving task accuracy by up to 5% on reasoning benchmarks compared to external guard-module baselines.
Evo-MARL
Novel technique introduced
Multi-agent systems (MAS) built on multimodal large language models exhibit strong collaboration and performance. However, their growing openness and interaction complexity pose serious risks, notably jailbreak and adversarial attacks. Existing defenses typically rely on external guard modules, such as dedicated safety agents, to handle unsafe behaviors. Unfortunately, this paradigm faces two challenges: (1) standalone agents offer limited protection, and (2) their independence creates a single point of failure: if the guard is compromised, system-wide safety collapses. Naively increasing the number of guard agents further raises cost and complexity. To address these challenges, we propose Evo-MARL, a novel multi-agent reinforcement learning (MARL) framework that enables all task agents to jointly acquire defensive capabilities. Rather than relying on external safety modules, Evo-MARL trains each agent to simultaneously perform its primary function and resist adversarial threats, ensuring robustness without increasing system overhead or introducing a single point of failure. Furthermore, Evo-MARL integrates evolutionary search with parameter-sharing reinforcement learning to co-evolve attackers and defenders. This adversarial training paradigm internalizes safety mechanisms and continually enhances MAS performance under co-evolving threats. Experiments show that Evo-MARL reduces attack success rates by up to 22% while boosting accuracy by up to 5% on reasoning tasks, demonstrating that safety and utility can be jointly improved.
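The co-evolution loop described above can be illustrated with a deliberately minimal sketch. This is not the paper's implementation: the 1-D "attack strength" and "robustness" scalars, the sigmoid success model, the quadratic over-refusal cost, and all function names are illustrative assumptions. It only shows the shape of the training loop: an attacker population improved by evolutionary search, alternating with a finite-difference policy update on a single defender parameter shared by all task agents.

```python
import math
import random

def attack_success(a, theta):
    # Assumed toy model: probability an attack of strength a bypasses
    # a defender whose shared robustness parameter is theta.
    return 1.0 / (1.0 + math.exp(-(a - theta)))

def defender_reward(theta, attackers, refusal_cost=0.02):
    # Shared reward: block attacks, but penalize over-refusal (large theta),
    # standing in for the task-utility term the paper jointly optimizes.
    asr = sum(attack_success(a, theta) for a in attackers) / len(attackers)
    return -asr - refusal_cost * theta * theta

def evolve_attackers(attackers, theta, rng, keep=4, sigma=0.3):
    # Evolutionary search: keep the most successful attackers against the
    # current defender, then refill the population with mutated copies.
    ranked = sorted(attackers, key=lambda a: attack_success(a, theta), reverse=True)
    elite = ranked[:keep]
    children = [min(3.0, max(-3.0, a + rng.gauss(0.0, sigma)))  # bounded attack space
                for a in elite for _ in range(len(attackers) // keep - 1)]
    return (elite + children)[:len(attackers)]

def update_defender(theta, attackers, lr=0.5, eps=1e-3):
    # One shared-parameter policy step, using a finite-difference estimate
    # of the reward gradient in place of a full RL update.
    grad = (defender_reward(theta + eps, attackers)
            - defender_reward(theta - eps, attackers)) / (2 * eps)
    return theta + lr * grad

def co_evolve(rounds=50, pop=8, seed=0):
    rng = random.Random(seed)
    attackers = [rng.gauss(0.0, 1.0) for _ in range(pop)]
    theta = 0.0
    for _ in range(rounds):
        attackers = evolve_attackers(attackers, theta, rng)
        theta = update_defender(theta, attackers)
    return theta, attackers
```

Under this toy model, the attacker population drifts toward the strongest attacks while the defender's shared parameter hardens against them, so the trained defender ends up with a lower attack success rate against the evolved attackers than an untrained one; the real framework plays out the same alternation over LLM agents and adversarial prompts rather than scalars.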
Key Contributions
- Evo-MARL framework that trains every task agent to simultaneously perform its primary function and resist adversarial/jailbreak threats, eliminating reliance on external safety guard modules.
- Integration of evolutionary search with parameter-sharing reinforcement learning to co-evolve attackers and defenders in a MARL setting, continuously strengthening multi-agent safety.
- Empirical demonstration that safety and utility can be jointly improved, reducing attack success rates by up to 22% while boosting reasoning accuracy by up to 5%.