
MemPot: Defending Against Memory Extraction Attack with Optimized Honeypots

Yuhao Wang 1, Shengfang Zhai 1, Guanghao Jin 2, Yinpeng Dong 3, Linyi Yang 2, Jiaheng Zhang 1

0 citations · 42 references · arXiv (Cornell University)


Published on arXiv: 2602.07517

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Achieves a 50% improvement in detection AUROC and an 80% increase in TPR under low-FPR constraints over state-of-the-art baselines, with zero additional online inference latency

MemPot

Novel technique introduced


Large Language Model (LLM)-based agents employ external and internal memory systems to handle complex, goal-oriented tasks, yet this exposes them to severe extraction attacks, and effective defenses remain lacking. In this paper, we propose MemPot, the first theoretically verified defense framework against memory extraction attacks by injecting optimized honeypots into the memory. Through a two-stage optimization process, MemPot generates trap documents that maximize the retrieval probability for attackers while remaining inconspicuous to benign users. We model the detection process as Wald's Sequential Probability Ratio Test (SPRT) and theoretically prove that MemPot achieves a lower average number of sampling rounds compared to optimal static detectors. Empirically, MemPot significantly outperforms state-of-the-art baselines, achieving a 50% improvement in detection AUROC and an 80% increase in True Positive Rate under low False Positive Rate constraints. Furthermore, our experiments confirm that MemPot incurs zero additional online inference latency and preserves the agent's utility on standard tasks, verifying its superiority in safety, harmlessness, and efficiency.
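The abstract models detection as Wald's Sequential Probability Ratio Test over repeated observations. A minimal sketch of that idea, assuming each query yields a binary indicator of whether a honeypot document was retrieved, and assuming hypothetical retrieval rates `p_benign` and `p_attack` (neither the rates nor this function come from the paper):

```python
import math

def sprt_detector(honeypot_hits, p_benign=0.05, p_attack=0.6,
                  alpha=0.01, beta=0.05):
    """Wald's SPRT over a stream of per-query honeypot-retrieval
    indicators (1 = a trap document was retrieved by the query).

    p_benign / p_attack are hypothetical per-query retrieval rates
    under the benign and attacker hypotheses; alpha / beta are the
    target false-positive and false-negative rates.
    Returns (decision, number_of_rounds_used)."""
    # Wald's decision thresholds derived from the target error rates
    upper = math.log((1 - beta) / alpha)   # cross above -> flag attacker
    lower = math.log(beta / (1 - alpha))   # cross below -> declare benign
    llr = 0.0                              # accumulated log-likelihood ratio
    for t, hit in enumerate(honeypot_hits, start=1):
        if hit:
            llr += math.log(p_attack / p_benign)
        else:
            llr += math.log((1 - p_attack) / (1 - p_benign))
        if llr >= upper:
            return "attacker", t
        if llr <= lower:
            return "benign", t
    return "undecided", len(honeypot_hits)
```

Because honeypots are optimized so that `p_attack` far exceeds `p_benign`, the log-likelihood ratio drifts toward a threshold quickly, which is the intuition behind the paper's claim of fewer expected sampling rounds than static detectors.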


Key Contributions

  • MemPot: first honeypot-based defense against LLM agent memory extraction, injecting optimized trap documents that preferentially attract adversarial queries while remaining invisible to benign users
  • Two-stage optimization pipeline that maximizes honeypot retrieval probability for attackers and minimizes disruption to normal agent utility
  • Theoretical analysis modeling detection as Wald's SPRT, proving MemPot requires fewer expected sampling rounds than optimal static detectors
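The first two contributions hinge on trap documents sitting close to attacker-style queries in embedding space while staying far from benign ones. A toy illustration with hand-picked 3-d vectors and a nearest-neighbor retriever (the vectors, names, and retriever are illustrative assumptions, not the paper's optimized pipeline):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, store, k=1):
    """Toy dense retriever: top-k document ids by cosine similarity."""
    ranked = sorted(store, key=lambda d: cosine(query_vec, store[d]),
                    reverse=True)
    return ranked[:k]

# Hypothetical memory store: two real documents plus one honeypot whose
# embedding is placed near the (assumed) extraction-query direction.
store = {
    "doc_a":    [1.0, 0.0, 0.0],
    "doc_b":    [0.0, 1.0, 0.0],
    "honeypot": [0.2, 0.2, 0.95],
}

attack_query = [0.0, 0.0, 1.0]   # probes the extraction direction
benign_query = [0.9, 0.1, 0.0]   # ordinary task query
```

An extraction-style query retrieves the honeypot first, while a benign query never surfaces it, so honeypot retrievals become the informative events the detector accumulates.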

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Applications
LLM agents, RAG systems, knowledge base protection, agent internal memory protection