Secure Retrieval-Augmented Generation against Poisoning Attacks
Zirui Cheng 1, Jikai Sun 1, Anjun Gao 2, Yueyang Quan 3, Zhuqing Liu 3, Xiaohua Hu 4, Minghong Fang 2
Published on arXiv (arXiv:2510.25025)
Data Poisoning Attack
OWASP ML Top 10 — ML02
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
RAGuard detects and mitigates data poisoning attacks on RAG knowledge bases, including strong adaptive attacks, across large-scale datasets without requiring model fine-tuning.
RAGuard
Novel technique introduced
Large language models (LLMs) have transformed natural language processing (NLP), enabling applications from content generation to decision support. Retrieval-Augmented Generation (RAG) improves LLMs by incorporating external knowledge, but it also introduces security risks, particularly data poisoning, in which an attacker injects poisoned texts into the knowledge database to manipulate system outputs. While various defenses have been proposed, they often struggle against advanced attacks. To address this, we introduce RAGuard, a detection framework designed to identify poisoned texts. RAGuard first expands the retrieval scope to increase the proportion of clean texts, reducing the likelihood that poisoned content is retrieved. It then applies chunk-wise perplexity filtering to detect abnormal perplexity variations and text similarity filtering to flag suspiciously similar texts. This non-parametric approach enhances RAG security, and experiments on large-scale datasets demonstrate its effectiveness in detecting and mitigating poisoning attacks, including strong adaptive attacks.
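The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `retrieve_fn`, `is_anomalous`, and `is_near_duplicate` are hypothetical caller-supplied callbacks, and the expansion factor is an assumed default.

```python
def raguard_retrieve(query, retrieve_fn, is_anomalous, is_near_duplicate,
                     k=5, expand=4):
    """Sketch of a RAGuard-style retrieval pipeline (illustrative only).

    Retrieve k * expand candidates instead of k, so clean texts dilute
    any poisoned ones; then drop texts flagged by the perplexity and
    similarity filters and keep the top-k survivors.
    """
    candidates = retrieve_fn(query, k * expand)          # widened retrieval scope
    kept = [t for t in candidates
            if not is_anomalous(t)                       # perplexity-based flag
            and not is_near_duplicate(t, candidates)]    # similarity-based flag
    return kept[:k]
```

Because the filters run on retrieved candidates rather than on model weights, the pipeline stays non-parametric and needs no fine-tuning of the underlying LLM or retriever.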
Key Contributions
- RAGuard: a non-parametric detection framework that expands retrieval scope to dilute poisoned content and reduce its retrieval probability
- Chunk-wise perplexity filtering to identify statistically anomalous injected texts
- Text similarity filtering to flag near-duplicate or suspiciously similar poisoned documents, with demonstrated effectiveness against adaptive attacks
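The chunk-wise perplexity contribution can be illustrated as below, assuming per-chunk perplexities have already been computed by some scoring language model. The scoring model, the z-score rule, and the threshold are assumptions made for this sketch; the paper's exact statistic is not reproduced here.

```python
def chunk_perplexity_flags(chunk_ppls, z_thresh=2.0):
    """Flag chunks whose perplexity deviates sharply from the rest of
    the document (illustrative z-score rule, not the paper's exact test).

    chunk_ppls: per-chunk perplexities for one retrieved text, e.g. from
    a small causal LM (stand-in assumption).
    """
    n = len(chunk_ppls)
    mean = sum(chunk_ppls) / n
    var = sum((p - mean) ** 2 for p in chunk_ppls) / n
    std = var ** 0.5 or 1e-9  # avoid division by zero for uniform chunks
    return [abs(p - mean) / std > z_thresh for p in chunk_ppls]
```

The intuition matches the abstract: an injected passage tends to read differently from its surrounding document, so its chunks show abnormal perplexity relative to the document-level statistics.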
🛡️ Threat Analysis
The core attack defended against is data poisoning of the RAG external knowledge base: an adversary injects poisoned texts to corrupt the retrieval corpus and manipulate system outputs. RAGuard detects and mitigates this poisoning with non-parametric filtering, requiring no fine-tuning of the underlying model.
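Since poisoned texts crafted for the same target query often resemble one another, the similarity filter can be illustrated with Jaccard similarity over word sets. Both the measure and the 0.8 threshold are assumptions for this sketch, not the paper's exact choices.

```python
def similarity_flags(texts, threshold=0.8):
    """Flag retrieved texts that are near-duplicates of another text
    (illustrative Jaccard-over-word-sets stand-in)."""
    def jaccard(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    flagged = [False] * len(texts)
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            if jaccard(texts[i], texts[j]) >= threshold:
                flagged[i] = flagged[j] = True  # flag both members of the pair
    return flagged
```

A production filter would more likely use embedding cosine similarity, but the pairwise flag-both-members structure is the same.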