attack 2026

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Haozhen Wang 1, Haoyue Liu 1, Jionghao Zhu 1, Zhichao Wang 1, Yongxin Guo 2, Xiaoying Tang 1

0 citations

α

Published on arXiv

2603.25164

Input Manipulation Attack

OWASP ML Top 10 — ML01

Data Poisoning Attack

OWASP ML Top 10 — ML02

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves 98.125% average attack success rate across benchmarks, improving 4-16% over PoisonedRAG while maintaining high retrieval precision

PIDP-Attack

Novel technique introduced


Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is often hindered by issues such as outdated knowledge and the tendency to generate hallucinations. To address these limitations, Retrieval-Augmented Generation (RAG) systems have been introduced, enhancing LLMs with external, up-to-date knowledge sources. Despite their advantages, RAG systems remain vulnerable to adversarial attacks, with data poisoning emerging as a prominent threat. Existing poisoning-based attacks typically require prior knowledge of the user's specific queries, limiting their flexibility and real-world applicability. In this work, we propose PIDP-Attack, a novel compound attack that integrates prompt injection with database poisoning in RAG. By appending malicious characters to queries at inference time and injecting a limited number of poisoned passages into the retrieval database, our method can effectively manipulate LLM response to arbitrary query without prior knowledge of the user's actual query. Experimental evaluations across three benchmark datasets (Natural Questions, HotpotQA, MS-MARCO) and eight LLMs demonstrate that PIDP-Attack consistently outperforms the original PoisonedRAG. Specifically, our method improves attack success rates by 4% to 16% on open-domain QA tasks while maintaining high retrieval precision, proving that the compound attack strategy is both necessary and highly effective.


Key Contributions

  • Novel compound attack combining prompt injection and database poisoning in RAG systems
  • Query-agnostic attack that manipulates LLM responses without prior knowledge of user queries
  • Achieves 4-16% improvement in attack success rate over PoisonedRAG baseline across multiple datasets

🛡️ Threat Analysis

Input Manipulation Attack

Includes query-path manipulation via prompt injection at inference time — appending malicious characters to user queries to hijack LLM behavior.

Data Poisoning Attack

Includes corpus-path manipulation via database poisoning — injecting malicious passages into the RAG retrieval database to corrupt training/knowledge data.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
black_boxinference_timetargeted
Datasets
Natural QuestionsHotpotQAMS-MARCO
Applications
question answeringretrieval-augmented generation