Adversarial Hubness Detector: Detecting Hubness Poisoning in Retrieval-Augmented Generation Systems
Idan Habler 1,2, Vineeth Sai Narajala 1,2, Stav Koren 3, Amy Chang 1, Tiffany Saade 1
Published on arXiv
2602.22427
Data Poisoning Attack
OWASP ML Top 10 — ML02
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves 90% recall at a 0.2% alert budget and 100% recall at 0.4%, with domain-scoped scanning recovering 100% of targeted attacks that evaded global detection on adversarial hub benchmarks.
hubscan
Novel technique introduced
Retrieval-Augmented Generation (RAG) systems are central to contemporary AI applications, allowing large language models to obtain external knowledge via vector similarity search. These systems, however, face a significant security vulnerability: hubness, where certain items appear in the top-$k$ retrieval results for a disproportionately large number of diverse queries. Such hubs can be exploited to introduce harmful content, manipulate search rankings, bypass content filtering, and degrade system performance. We introduce hubscan, an open-source security scanner that analyzes vector indices and embeddings to identify hubs in RAG systems. Hubscan uses a multi-detector architecture that combines: (1) robust statistical hubness detection using median/Median Absolute Deviation (MAD)-based z-scores, (2) cluster spread analysis to assess cross-cluster retrieval patterns, (3) stability testing under query perturbations, and (4) domain-aware and modality-aware detection for category-specific and cross-modal attacks. Our tool supports several vector databases (FAISS, Pinecone, Qdrant, Weaviate) and offers flexible retrieval techniques, including vector similarity, hybrid search, and lexical matching with reranking. We evaluate hubscan on Food-101, MS-COCO, and FiQA adversarial hubness benchmarks constructed with state-of-the-art gradient-optimized and centroid-based hub generation methods. Hubscan achieves 90% recall at a 0.2% alert budget and 100% recall at 0.4%, with adversarial hubs ranking above the 99.8th percentile. In testing, domain-scoped scanning recovered 100% of targeted attacks that evaded global detection. Production validation on 1M real web documents from MS MARCO demonstrates significant score separation between clean documents and adversarial content.
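hubscan's source is not reproduced in this summary, but detector (1) can be sketched from first principles: count how often each item lands in the other items' top-$k$ neighbor lists (its $k$-occurrence, $N_k$), then score those counts with a median/MAD z-score. A minimal brute-force sketch; all names and parameters here are illustrative, not hubscan's actual API:

```python
import numpy as np

def hubness_scores(embeddings: np.ndarray, k: int = 10) -> np.ndarray:
    """Robust hubness z-score per item, from k-occurrence counts.

    N_k(i) counts how often item i appears in the top-k cosine
    neighbor lists of the other items; hubs have unusually large N_k.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)              # an item never retrieves itself
    topk = np.argsort(-sims, axis=1)[:, :k]      # each item's k nearest items
    n_k = np.bincount(topk.ravel(), minlength=len(embeddings))

    # Median/MAD-based z-score: robust, because the very hubs being hunted
    # would inflate a mean/std-based score and mask themselves.
    med = np.median(n_k)
    mad = np.median(np.abs(n_k - med)) or 1.0    # guard against MAD == 0
    return (n_k - med) / (1.4826 * mad)          # 1.4826: normal-consistency factor
```

Flagging the items above a z-threshold, or simply taking the top 0.2% of scores, mirrors the alert-budget framing used in the evaluation.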
Key Contributions
- hubscan: an open-source multi-detector security scanner for identifying adversarial hubs in RAG vector indices, supporting FAISS, Pinecone, Qdrant, and Weaviate
- A multi-detector architecture combining MAD-based statistical hubness scoring, cluster spread analysis, perturbation stability testing, and domain- and modality-aware detection
- Evaluation on adversarial benchmarks (Food-101, MS-COCO, FiQA) achieving 90% recall at 0.2% alert budget and 100% recall at 0.4%, validated on 1M MS MARCO documents
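The cluster spread detector in the second bullet is easy to illustrate: a legitimate document is typically retrieved by queries from one semantic cluster, while an adversarial hub is retrieved from many. A hedged sketch, assuming precomputed top-$k$ lists and query cluster labels (e.g. from k-means over query embeddings); this is an illustration of the idea, not hubscan's actual interface:

```python
import numpy as np

def cluster_spread(topk: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Normalized entropy of the cluster labels of the queries that
    retrieve each item: ~0 = retrieved by one cluster, ~1 = retrieved
    uniformly across clusters (hub-like cross-cluster reach).

    topk[q] holds the item ids returned for query q; labels[q] is
    query q's cluster id (assumes at least two clusters exist).
    """
    n_items = topk.max() + 1                  # items never retrieved score 0
    n_clusters = labels.max() + 1
    counts = np.zeros((n_items, n_clusters))
    for q, items in enumerate(topk):          # tally which clusters hit each item
        for item in items:
            counts[item, labels[q]] += 1
    totals = counts.sum(axis=1, keepdims=True)
    p = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    return ent / np.log(n_clusters)
```

Combining this with the statistical z-score cuts false positives: a popular but legitimate document has high $N_k$ from one cluster, while an adversarial hub has high $N_k$ *and* high spread.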
🛡️ Threat Analysis
The attack injects adversarially crafted documents into the RAG retrieval corpus (the vector database), corrupting the data the system relies on to produce outputs: a form of retrieval data poisoning. The paper explicitly uses the term 'hubness poisoning' and defends against gradient-optimized and centroid-based injection of adversarial content into the knowledge base.
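To see why a centroid-based attack works, consider a toy embedding space: a single injected vector aimed at the query centroid becomes similar to almost everything at once. A synthetic illustration with made-up Gaussian data (no relation to the paper's benchmarks; gradient-optimized hubs are out of scope for a short sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
dim, n_docs, n_queries = 64, 1000, 200

# Legitimate documents and user queries drawn from the same shifted Gaussian.
docs = rng.normal(loc=0.5, size=(n_docs, dim))
queries = rng.normal(loc=0.5, size=(n_queries, dim))

# Centroid-based hub: one injected "document" aimed at the query centroid.
hub = queries.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

doc_sims = cosine(queries, docs)                 # (n_queries, n_docs)
hub_sims = cosine(queries, hub[None, :])[:, 0]   # (n_queries,)

# How often does the single injected vector crack a query's top-10 against
# 1000 legitimate candidates? A clean document manages ~1% on average.
top10_cut = np.sort(doc_sims, axis=1)[:, -10]
in_top10 = (hub_sims > top10_cut).mean()
```

On typical seeds the injected centroid lands in a disproportionate share of top-10 lists, which is exactly the $k$-occurrence anomaly that hubscan's statistical detector scores.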