RAG Security and Privacy: Formalizing the Threat Model and Attack Surface
Atousa Arzanipour , Rouzbeh Behnia , Reza Ebrahimi , Kaushik Dutta
Published on arXiv
2509.20324
Membership Inference Attack
OWASP ML Top 10 — ML04
Data Poisoning Attack
OWASP ML Top 10 — ML02
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Establishes the first structured taxonomy of RAG-specific threats, formally distinguishing document-level membership inference and knowledge base poisoning from vulnerabilities inherited from standard LLMs.
Retrieval-Augmented Generation (RAG) is an emerging approach in natural language processing that combines large language models (LLMs) with external document retrieval to produce more accurate and grounded responses. While RAG has shown strong potential in reducing hallucinations and improving factual consistency, it also introduces new privacy and security challenges that differ from those faced by traditional LLMs. Existing research has demonstrated that LLMs can leak sensitive information through training data memorization or adversarial prompts, and RAG systems inherit many of these vulnerabilities. At the same time, reliance of RAG on an external knowledge base opens new attack surfaces, including the potential for leaking information about the presence or content of retrieved documents, or for injecting malicious content to manipulate model behavior. Despite these risks, there is currently no formal framework that defines the threat landscape for RAG systems. In this paper, we address a critical gap in the literature by proposing, to the best of our knowledge, the first formal threat model for retrieval-RAG systems. We introduce a structured taxonomy of adversary types based on their access to model components and data, and we formally define key threat vectors such as document-level membership inference and data poisoning, which pose serious privacy and integrity risks in real-world deployments. By establishing formal definitions and attack models, our work lays the foundation for a more rigorous and principled understanding of privacy and security in RAG systems.
Key Contributions
- First formal threat model and adversary taxonomy for RAG systems, classifying adversaries by access to model components and data.
- Formal definitions of document-level membership inference attacks specific to the RAG retrieval corpus.
- Formal definitions of data poisoning threats targeting RAG knowledge bases, distinct from traditional LLM training-time poisoning.
🛡️ Threat Analysis
Formally defines data poisoning as a core RAG threat — injecting malicious content into the external knowledge base to corrupt retrieval and manipulate model outputs.
Explicitly defines document-level membership inference as a primary threat vector — an adversary determines whether a specific document is in the RAG knowledge base, a direct membership inference attack on the retrieval corpus.