defense 2026

Towards Unsupervised Adversarial Document Detection in Retrieval Augmented Generation Systems

Patrick Levi

0 citations


Published on arXiv (2603.17176)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Statistical outlier detection on generator internals can identify adversarial RAG contexts without knowing the attacker's target prompt

ACD

Novel technique introduced


Retrieval-augmented generation (RAG) systems have become an integral part of everyday life. Whether in internet search engines, email systems, or service chatbots, these systems combine context retrieval with answer generation by large language models. As they spread, so do their security vulnerabilities: attackers increasingly target these systems, and a variety of attack techniques have been developed. Manipulating the context documents is one way to make an attack persistent and affect all users. Detecting compromised, adversarial context documents early is therefore crucial for security. Whereas supervised approaches require a large amount of labeled adversarial contexts, we propose an unsupervised approach that can also detect zero-day attacks. We conduct a preliminary study to identify suitable indicators for adversarial contexts; generator activations, output embeddings, and an entropy-based uncertainty measure turn out to be suitable, complementary quantities. Using elementary statistical outlier detection, we propose and compare detectors based on these indicators. Furthermore, we show that the target prompt the attacker aims to manipulate is not required for successful detection. Moreover, our results indicate that generating a simple context summary may even be superior for finding manipulated contexts.


Key Contributions

  • Unsupervised adversarial context detection using generator activations, output embeddings, and entropy-based uncertainty measures
  • Shows that target prompt knowledge is not required for successful detection of adversarial RAG contexts
  • Demonstrates that simple context summarization may outperform direct analysis for detecting manipulated documents
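The paper itself does not spell out its detectors here, but the abstract names an entropy-based uncertainty measure scored with elementary statistical outlier detection. The sketch below illustrates that general idea, assuming per-step next-token probability distributions from the generator are available; the function names, z-score test, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def token_entropy(probs):
    """Shannon entropy of one next-token probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # ignore zero-probability tokens (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

def answer_uncertainty(step_distributions):
    """Average per-step entropy over a generated answer: one scalar
    uncertainty score per retrieved context."""
    return float(np.mean([token_entropy(p) for p in step_distributions]))

def flag_outliers(scores, z_thresh=3.0):
    """Unsupervised outlier test: flag contexts whose score deviates
    from the corpus mean by more than z_thresh standard deviations."""
    s = np.asarray(scores, dtype=float)
    mu, sigma = s.mean(), s.std()
    if sigma == 0:
        return np.zeros(len(s), dtype=bool)
    return np.abs(s - mu) / sigma > z_thresh

# Example: twenty benign contexts with similar scores and one anomaly.
scores = [1.0] * 20 + [10.0]
print(flag_outliers(scores)[-1])  # the anomalous context is flagged
```

The same z-score test would apply unchanged to scores derived from generator activations or output embeddings (e.g. distances to a mean vector), which is what makes the three indicators easy to compare under one detection scheme.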

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Applications
rag systems, chatbots, email assistants, search engines