Confundo: Learning to Generate Robust Poison for Practical RAG Systems
Haoyang Hu 1, Zhejun Jiang 1, Yueming Lyu 2, Junyuan Zhang 1, Yi Liu 3, Ka-Ho Chow 1
Published on arXiv
2602.06616
Data Poisoning Attack
OWASP ML Top 10 — ML02
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Confundo consistently outperforms existing purpose-built RAG poisoning attacks by large margins across datasets and RAG configurations, maintaining high attack success even under chunking, query variation, and active defenses.
Confundo
Novel technique introduced
Retrieval-augmented generation (RAG) is increasingly deployed in real-world applications, where its reference-grounded design makes outputs appear trustworthy. This trust has spurred research on poisoning attacks that craft malicious content, inject it into knowledge sources, and manipulate RAG responses. However, when evaluated in practical RAG systems, existing attacks suffer from severely degraded effectiveness. This gap stems from two overlooked realities: (i) content is often processed before use, which can fragment the poison and weaken its effect, and (ii) users often do not issue the exact queries anticipated during attack design. These factors can lead practitioners to underestimate risks and develop a false sense of security. To better characterize the threat to practical systems, we present Confundo, a learning-to-poison framework that fine-tunes a large language model as a poison generator to achieve high effectiveness, robustness, and stealthiness. Confundo provides a unified framework supporting multiple attack objectives, demonstrated by manipulating factual correctness, inducing biased opinions, and triggering hallucinations. By addressing these overlooked challenges, Confundo consistently outperforms a wide range of purpose-built attacks across datasets and RAG configurations by large margins, even in the presence of defenses. Beyond exposing vulnerabilities, we also present a defensive use case that protects web content from unauthorized incorporation into RAG systems via scraping, with no impact on user experience.
Key Contributions
- Identifies two overlooked real-world failure modes of existing RAG poisoning attacks: document chunking fragmentation and query variation, exposing a false sense of security in prior evaluations.
- Proposes Confundo, a learning-to-poison framework that fine-tunes an LLM as a poison generator to produce chunk-robust, query-generalizable, stealthy poison text supporting multiple attack objectives.
- Demonstrates a defensive dual-use case that injects imperceptible poison into web content to prevent unauthorized RAG ingestion via scraping, with no impact on human readers.
🛡️ Threat Analysis
Core attack mechanism is injecting adversarially crafted documents into RAG knowledge bases (data injection/poisoning of the retrieval corpus) to manipulate downstream system behavior.