defense · arXiv · Jan 2025
Huichi Zhou, Kin-Hei Lee, Zhonghao Zhan et al. · Imperial College London · Peking University · University of North Carolina at Chapel Hill +1 more
Defends RAG systems against corpus poisoning via two-stage cluster filtering and LLM self-assessment to block malicious retrieved documents
Data Poisoning Attack · Prompt Injection · NLP
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by integrating external knowledge sources, enabling more accurate and contextually relevant responses tailored to user queries. These systems, however, remain susceptible to corpus poisoning attacks, which can severely impair the performance of LLMs. To address this challenge, we propose TrustRAG, a robust framework that systematically filters malicious and irrelevant content from retrieved documents before generation. Our approach employs a two-stage defense mechanism. The first stage implements a cluster filtering strategy to detect potential attack patterns. The second stage employs a self-assessment process that harnesses the internal capabilities of LLMs to detect malicious documents and resolve inconsistencies. TrustRAG provides a plug-and-play, training-free module that integrates seamlessly with any open- or closed-source language model. Extensive experiments demonstrate that TrustRAG delivers substantial improvements in retrieval accuracy, efficiency, and attack resistance.
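The first-stage cluster filtering can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact algorithm: it relies on the heuristic that poisoned documents crafted for the same query tend to have near-duplicate embeddings, so a group of mutually high-similarity retrieved documents is treated as suspicious and dropped. The function name `filter_suspicious_cluster`, the similarity threshold, and the toy embeddings are all hypothetical.

```python
import numpy as np

def filter_suspicious_cluster(embeddings, sim_threshold=0.9):
    """Flag retrieved documents that form an unusually tight cluster.

    Hypothetical stage-1 heuristic: documents whose pairwise cosine
    similarity exceeds `sim_threshold` are treated as a suspicious
    near-duplicate cluster (a common signature of corpus poisoning).
    Returns (indices to keep, indices flagged as suspicious).
    """
    X = np.asarray(embeddings, dtype=float)
    # Normalize rows so dot products equal cosine similarities.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    n = len(X)
    suspicious = set()
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= sim_threshold:
                suspicious.update((i, j))
    keep = [i for i in range(n) if i not in suspicious]
    return keep, sorted(suspicious)

# Toy embeddings: docs 0 and 1 are near-duplicates (simulated poison),
# doc 2 is distinct benign content.
docs = [[1.0, 0.0, 0.05],
        [0.99, 0.02, 0.04],
        [0.1, 1.0, 0.0]]
keep, flagged = filter_suspicious_cluster(docs)
# keep == [2], flagged == [0, 1]
```

In the full pipeline described above, the surviving documents would then pass to the second-stage LLM self-assessment; this sketch covers only the embedding-space filter.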
llm · transformer