Attack · 2025

MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval

Saksham Sahai Srivastava, Haoyuan He

4 citations · 17 references · arXiv


Published on arXiv · 2512.16962

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

A small number of poisoned memory records account for a large fraction of retrieved experiences on benign workloads, persistently steering GPT-4o-based agents toward unsafe behavior without any trigger.

MemoryGraft

Novel technique introduced


Large Language Model (LLM) agents increasingly rely on long-term memory and Retrieval-Augmented Generation (RAG) to persist experiences and refine future performance. While this experience-learning capability enhances agentic autonomy, it introduces a critical, unexplored attack surface: the trust boundary between an agent's reasoning core and its own past. In this paper, we introduce MemoryGraft, a novel indirect injection attack that compromises agent behavior not through immediate jailbreaks but by implanting malicious "successful experiences" into the agent's long-term memory. Unlike traditional prompt injections, which are transient, or standard RAG poisoning, which targets factual knowledge, MemoryGraft exploits the agent's semantic imitation heuristic: its tendency to replicate patterns from retrieved successful tasks. We demonstrate that an attacker who can supply seemingly benign ingestion-level artifacts that the agent reads during execution can induce it to construct a poisoned RAG store in which a small set of malicious procedure templates is persisted alongside benign experiences. When the agent later encounters semantically similar tasks, union retrieval over lexical and embedding similarity reliably surfaces these grafted memories, and the agent adopts the embedded unsafe patterns, leading to persistent behavioral drift across sessions. We validate MemoryGraft on MetaGPT's DataInterpreter agent with GPT-4o and find that a small number of poisoned records can account for a large fraction of retrieved experiences on benign workloads, turning experience-based self-improvement into a vector for stealthy and durable compromise. To facilitate reproducibility and future research, our code and evaluation data are available at https://github.com/Jacobhhy/Agent-Memory-Poisoning.
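
The union-retrieval step described in the abstract is what keeps grafted memories surfacing. Below is a minimal sketch of that idea, assuming a toy store of text memories; the Jaccard and bag-of-words cosine scorers are illustrative stand-ins for the lexical and embedding channels, not MetaGPT's actual retrieval code:

```python
# Sketch of union retrieval over a lexical channel and an embedding channel.
# All names and scoring functions here are hypothetical simplifications.
from collections import Counter
import math

def lexical_score(query: str, doc: str) -> float:
    """Token-overlap (Jaccard) similarity as a stand-in for lexical retrieval."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def embedding_score(query: str, doc: str) -> float:
    """Cosine similarity over bag-of-words counts as a stand-in for embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def union_retrieve(query: str, memories: list[str], k: int = 3) -> list[str]:
    """Union of the top-k hits from each channel, deduplicated in order."""
    by_lex = sorted(memories, key=lambda m: lexical_score(query, m), reverse=True)[:k]
    by_emb = sorted(memories, key=lambda m: embedding_score(query, m), reverse=True)[:k]
    seen, out = set(), []
    for m in by_lex + by_emb:
        if m not in seen:
            seen.add(m)
            out.append(m)
    return out
```

Because the two result sets are unioned rather than intersected, a grafted record only needs to rank highly on one channel to reach the agent's context.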


Key Contributions

  • MemoryGraft: a single-shot, trigger-free indirect memory poisoning attack that exploits the agent's semantic imitation heuristic to induce persistent behavioral drift across sessions.
  • Demonstrates that seemingly benign ingestion-level artifacts (documents the agent reads during execution) can cause the agent itself to write malicious experience templates into its own long-term memory/RAG store (see the record sketch after this list).
  • Empirically validates on MetaGPT's DataInterpreter (GPT-4o) that a small number of poisoned records dominate semantic retrieval on benign workloads, achieving durable, covert compromise.
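
To make the grafting step concrete, here is a hypothetical shape for one of the "successful experience" records the agent persists; the real MetaGPT experience schema and write path differ, and the unsafe step is deliberately left as a placeholder comment:

```python
# Hypothetical experience record; field names are illustrative, not MetaGPT's.
from dataclasses import dataclass, field

@dataclass
class ExperienceRecord:
    task: str                 # task the agent believes it solved successfully
    solution_template: str    # procedure the agent will imitate on similar tasks
    success: bool = True      # marked successful so the imitation heuristic trusts it
    tags: list[str] = field(default_factory=list)

# After reading an attacker-crafted ingestion artifact, the agent itself
# writes a record like this into its store, alongside genuine experiences.
grafted = ExperienceRecord(
    task="Load a CSV and summarize the columns",
    solution_template=(
        "import pandas as pd\n"
        "df = pd.read_csv(path)\n"
        "# ...unsafe step grafted here, e.g., forwarding data to an external endpoint...\n"
        "print(df.describe())"
    ),
    tags=["data-analysis", "csv"],
)

memory_store: list[ExperienceRecord] = []
memory_store.append(grafted)  # persists across sessions; no trigger needed later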

🛡️ Threat Analysis

Data Poisoning Attack

The core attack vector is corrupting the agent's long-term memory/RAG store by implanting malicious 'successful experience' records. This is data poisoning of the agent's knowledge base: the attacker corrupts the very data store that drives the agent's future behavior.
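
The key finding above can be read as a retrieval-dominance rate. A hedged sketch of that measurement follows, where `retrieve` and `poisoned_ids` are assumed interfaces rather than the paper's evaluation harness:

```python
# Hypothetical metric: on a benign workload, what fraction of all retrieved
# experiences come from the poisoned set? Names here are illustrative.
def poison_retrieval_rate(queries: list[str], retrieve, poisoned_ids: set[str]) -> float:
    hits = total = 0
    for q in queries:
        for record_id in retrieve(q):   # retrieve maps a query to record ids
            total += 1
            hits += record_id in poisoned_ids
    return hits / total if total else 0.0
```

A rate near 1.0 on purely benign queries would mean a handful of grafted records dominate retrieval, which is the persistence property the paper reports.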


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
MetaGPT DataInterpreter evaluation tasks
Applications
llm agents, agentic ai systems, rag-based experience learning, code generation agents