From static to adaptive: immune memory-based jailbreak detection for large language models
Jun Leng 1, Yu Liu 2, Litian Zhang 1, Ruihan Hu 1, Zhuting Fang 3, Xi Zhang 1
1 Beijing University of Posts and Telecommunications
2 Hunan Branch of National Computer Network Emergency Response
Published on arXiv
2512.03356
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
IMAG achieves 94% average jailbreak detection accuracy across five open-source LLMs and multiple attack types (GCG, AutoDAN, PAIR), outperforming static SOTA baselines while enabling continuous self-evolution.
IMAG (Immune Memory Adaptive Guard)
Novel technique introduced
Large Language Models (LLMs) serve as the backbone of modern AI systems, yet they remain susceptible to adversarial jailbreak attacks. Consequently, robust detection of such malicious inputs is paramount for ensuring model safety. Traditional detection methods typically rely on external models trained on fixed, large-scale datasets, which often incur significant computational overhead. Recent methods instead leverage models' internal safety signals to enable more lightweight and efficient detection; however, they remain inherently static and struggle to adapt to the evolving nature of jailbreak attacks. Drawing inspiration from the biological immune mechanism, we introduce the Immune Memory Adaptive Guard (IMAG) framework. By distilling and encoding safety patterns into a persistent, evolvable memory bank, IMAG enables adaptive generalization to emerging threats. Specifically, the framework orchestrates three synergistic components: Immune Detection, which employs retrieval for high-efficiency interception of known jailbreak attacks; Active Immunity, which performs proactive behavioral simulation to resolve ambiguous unknown queries; and Memory Updating, which integrates validated attack patterns back into the memory bank. This closed-loop architecture transitions LLM defense from rigid filtering to autonomous adaptive mitigation. Extensive evaluations across five representative open-source LLMs demonstrate that our method surpasses state-of-the-art (SOTA) baselines, achieving a superior average detection accuracy of 94% across diverse and complex attack types.
Key Contributions
- First integration of biological immune memory into LLM jailbreak detection, enabling a closed-loop adaptive defense paradigm instead of static filtering.
- Three-component framework: Immune Detection (retrieval-based similarity matching over a memory bank), Active Immunity (dual-agent simulation-reflection for stealthy/novel attacks), and Memory Updating (autonomous integration of new attack patterns).
- Achieves 94% average detection accuracy across diverse jailbreak attack types on five open-source LLMs, surpassing SOTA baselines with lower computational overhead.
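The closed loop described above (retrieval over a memory bank, a fallback check for unknown queries, and memory updating) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the character-frequency embedding, the similarity threshold, and the keyword-based `simulate` stub are all placeholder assumptions standing in for a real encoder and the dual-agent simulation-reflection step.

```python
import math

def embed(text):
    # Toy character-frequency embedding standing in for a real sentence encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class ImmuneMemoryGuard:
    def __init__(self, match_threshold=0.9):
        self.memory = []  # (embedding, raw pattern) pairs of validated attacks
        self.match_threshold = match_threshold

    def detect(self, query):
        # Immune Detection: retrieve the nearest known attack pattern.
        q = embed(query)
        best = max((cosine(q, e) for e, _ in self.memory), default=0.0)
        if best >= self.match_threshold:
            return "block"              # known jailbreak intercepted cheaply
        verdict = self.simulate(query)  # Active Immunity for ambiguous unknowns
        if verdict == "attack":
            self.update(query)          # Memory Updating closes the loop
            return "block"
        return "allow"

    def simulate(self, query):
        # Placeholder for the dual-agent simulation-reflection step;
        # a trivial keyword heuristic stands in here.
        return "attack" if "ignore previous instructions" in query.lower() else "benign"

    def update(self, query):
        # Persist the validated pattern so future variants are caught by retrieval.
        self.memory.append((embed(query), query))
```

A novel attack first pays the cost of the simulation path, but once its pattern is stored, near-duplicate attempts are intercepted by the cheap retrieval step, which is the self-evolving behavior the framework aims for.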