Defense · 2025

Knowing When Not to Answer: Lightweight KB-Aligned OOD Detection for Safe RAG

Ilias Triantafyllopoulos 1, Renyi Qu 2, Salvatore Giorgi 3, Brenda Curtis 3, Lyle H. Ungar 4, João Sedoc 1



Published on arXiv (arXiv:2508.02296)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Low-dimensional PCA-based detectors achieve competitive OOD detection performance across 16 domains while being faster, cheaper, and more interpretable than prompted LLM-based domain judges.

KB-aligned PCA OOD Gate (EVR / t-test ranking)

Novel technique introduced


Retrieval-Augmented Generation (RAG) systems are increasingly deployed in high-stakes domains, where safety depends not only on how a system answers, but also on whether a query should be answered given a knowledge base (KB). Out-of-domain (OOD) queries can cause dense retrieval to surface weakly related context and lead the generator to produce fluent but unjustified responses. We study lightweight, KB-aligned OOD detection as an always-on gate for RAG systems. Our approach applies PCA to KB embeddings and scores queries in a compact subspace selected either by explained-variance retention (EVR) or by a separability-driven t-test ranking. We evaluate geometric semantic-search rules and lightweight classifiers across 16 domains, including high-stakes COVID-19 and Substance Use KBs, and stress-test robustness using both LLM-generated attacks and an in-the-wild 4chan attack. We find that low-dimensional detectors achieve competitive OOD performance while being faster, cheaper, and more interpretable than prompted LLM-based judges. Finally, human and LLM-based evaluations show that OOD queries primarily degrade the relevance of RAG outputs, underscoring the need for efficient external OOD detection to maintain safe, in-scope behavior.
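The EVR variant of the gate described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes reconstruction error off the retained subspace as the OOD score (the paper's exact scoring rules and classifiers may differ), and the class name and threshold handling are hypothetical.

```python
import numpy as np


class PCAOODGate:
    """Sketch of a KB-aligned PCA OOD gate with EVR component selection.

    Fit PCA on KB document embeddings, keep the fewest leading components
    whose cumulative explained-variance ratio reaches `evr_target`, and
    score a query by its reconstruction error outside that subspace
    (an assumed scoring rule; higher = more out-of-domain).
    """

    def __init__(self, evr_target=0.9):
        self.evr_target = evr_target

    def fit(self, kb_embeddings):
        X = np.asarray(kb_embeddings, dtype=float)
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # SVD of the centered matrix gives principal directions (rows of Vt)
        # and singular values, whose squares are proportional to variance.
        _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        evr = (S ** 2) / (S ** 2).sum()
        k = int(np.searchsorted(np.cumsum(evr), self.evr_target)) + 1
        self.components_ = Vt[:k]  # (k, d) orthonormal subspace basis
        return self

    def score(self, query_embedding):
        """Energy of the query embedding outside the KB subspace."""
        q = np.asarray(query_embedding, dtype=float) - self.mean_
        proj = self.components_.T @ (self.components_ @ q)
        return float(np.linalg.norm(q - proj))

    def is_ood(self, query_embedding, threshold):
        # Threshold would be calibrated on held-out in-domain queries.
        return self.score(query_embedding) > threshold
```

In use, queries flagged by `is_ood` would be refused or deflected before retrieval, keeping the gate always-on and independent of the generator.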


Key Contributions

  • KB-aligned PCA-based OOD detector that projects queries into a compact subspace derived from KB document embeddings, with two component selection strategies: explained-variance retention (EVR) and separability-driven t-test ranking
  • Systematic evaluation across 16 domains and four datasets — including COVID-19 and Substance Use high-stakes KBs — stress-tested against both LLM-generated adversarial queries and an in-the-wild 4chan attack dataset
  • End-to-end RAG evaluation showing that OOD queries primarily degrade output relevance and that lightweight external OOD detection outperforms prompted LLM judges in speed, cost, and interpretability

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
COVID-19 KB, Substance Use KB, 4chan adversarial queries, LLM-generated adversarial queries
Applications
retrieval-augmented generation, clinical decision support, high-stakes domain assistants