attack 2026

Zombie Agents: Persistent Control of Self-Evolving LLM Agents via Self-Reinforcing Injections

Xianglin Yang , Yufei He , Shuo Ji , Bryan Hooi , Jin Song Dong

0 citations · 30 references

α

Published on arXiv

2602.15654

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Memory evolution converts a one-time indirect injection into persistent cross-session compromise, demonstrating that per-session prompt filtering alone is insufficient to defend self-evolving agents.

Zombie Agent

Novel technique introduced


Self-evolving LLM agents update their internal state across sessions, often by writing and reusing long-term memory. This design improves performance on long-horizon tasks but creates a security risk: untrusted external content observed during a benign session can be stored as memory and later treated as instruction. We study this risk and formalize a persistent attack we call a Zombie Agent, where an attacker covertly implants a payload that survives across sessions, effectively turning the agent into a puppet of the attacker. We present a black-box attack framework that uses only indirect exposure through attacker-controlled web content. The attack has two phases. During infection, the agent reads a poisoned source while completing a benign task and writes the payload into long-term memory through its normal update process. During trigger, the payload is retrieved or carried forward and causes unauthorized tool behavior. We design mechanism-specific persistence strategies for common memory implementations, including sliding-window and retrieval-augmented memory, to resist truncation and relevance filtering. We evaluate the attack on representative agent setups and tasks, measuring both persistence over time and the ability to induce unauthorized actions while preserving benign task quality. Our results show that memory evolution can convert one-time indirect injection into persistent compromise, which suggests that defenses focused only on per-session prompt filtering are not sufficient for self-evolving agents.


Key Contributions

  • Formalizes the 'Zombie Agent' attack: a persistent, cross-session compromise of self-evolving LLM agents via indirect memory poisoning
  • Black-box attack framework using attacker-controlled web content as the sole infection vector, requiring no direct access to the agent or its memory
  • Mechanism-specific persistence strategies designed to survive sliding-window truncation and RAG relevance filtering in common agent memory implementations

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Applications
llm agents with long-term memoryweb-browsing agentsmulti-session autonomous agents