"Someone Hid It": Query-Agnostic Black-Box Attacks on LLM-Based Retrieval
Jiate Li 1, Defu Cao 1, Li Li 1, Wei Yang 1, Yuehan Qin 1, Chenxiao Yu 1, Tiannuo Yang 1, Ryan A. Rossi 2, Yan Liu 1, Xiyang Hu 3, Yue Zhao 1
Published on arXiv
2602.00364
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
A query-agnostic, black-box adversarial token injection attack successfully boosts or diminishes target documents in LLM-based retrieval systems by transferring adversarial tokens learned via surrogate LLMs without access to victim queries or model parameters.
Large language models (LLMs) have been serving as effective backbones for retrieval systems, including Retrieval-Augmented Generation (RAG), dense information retrieval (IR), and agent memory retrieval. Recent studies have demonstrated that such LLM-based Retrieval (LLMR) is vulnerable to adversarial attacks that manipulate documents via token-level injections, enabling adversaries to either boost or diminish these documents in retrieval tasks. However, existing attack studies mainly (1) presume that a known query is given to the attacker, and (2) rely heavily on access to the victim model's parameters or interactions, which are hardly available in real-world scenarios, leading to limited validity. To further explore the security risks of LLMR, we propose a practical black-box attack method that generates transferable injection tokens based on zero-shot surrogate LLMs, without needing victim queries or knowledge of the victim model. The effectiveness of our attack raises the robustness concern that similar effects may arise from benign or unintended document edits in the real world. To mount our attack, we first establish a theoretical framework of LLMR and verify it empirically. Under this framework, we formulate the transferable attack as a min-max problem and propose an adversarial learning mechanism that finds optimal adversarial tokens with learnable query samples. Our attack is validated to be effective on benchmark datasets across popular LLM retrievers.
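The min-max idea in the abstract can be sketched in a few lines: the inner minimization scores an injected document on the *worst* of a set of sampled queries (standing in for the unknown victim queries), and the outer maximization searches for injection tokens that raise that worst-case score. The sketch below is a toy illustration, not the paper's method: the "surrogate LLM" is replaced by a random mean-of-token-embeddings encoder, and the outer maximization uses simple greedy coordinate ascent rather than the paper's adversarial learning mechanism; all names (`embed`, `worst_case`, `attack`) and constants are hypothetical.

```python
import math
import random

random.seed(0)
VOCAB, DIM, K = 200, 16, 4  # toy vocabulary, embedding size, injected slots

# Toy stand-in for the surrogate embedder: mean of per-token rows, L2-normalized.
emb = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(VOCAB)]

def embed(tokens):
    v = [sum(emb[t][d] for t in tokens) / len(tokens) for d in range(DIM)]
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Learnable query samples stand in for the victim queries the attacker never sees.
queries = [[random.randrange(VOCAB) for _ in range(8)] for _ in range(5)]
q_embs = [embed(q) for q in queries]

doc = [random.randrange(VOCAB) for _ in range(20)]

def worst_case(tokens):
    # Inner minimization: judge the document on its worst sampled query.
    return min(cosine(embed(tokens), q) for q in q_embs)

def attack(doc_tokens, k=K):
    # Outer maximization: greedy coordinate ascent over k injected token slots,
    # only ever accepting changes that raise the worst-case similarity.
    adv = [0] * k
    for slot in range(k):
        best_tok, best_s = adv[slot], worst_case(doc_tokens + adv)
        for cand in range(VOCAB):
            adv[slot] = cand
            s = worst_case(doc_tokens + adv)
            if s > best_s:
                best_tok, best_s = cand, s
        adv[slot] = best_tok
    return adv

adv_tokens = attack(doc)
base = worst_case(doc)
boosted = worst_case(doc + adv_tokens)
print(f"worst-case similarity: {base:.3f} -> {boosted:.3f}")
```

Flipping the sign of the objective (minimizing instead of maximizing the worst-case score) gives the "diminish" variant of the same attack.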
Key Contributions
- Query-agnostic, black-box attack on LLM-based retrieval requiring no knowledge of victim queries or victim model parameters, using zero-shot surrogate LLMs for transferable adversarial tokens
- Theoretical framework for LLM-based retrieval (LLMR) that formalizes the attack as a min-max optimization problem with learnable query samples to find optimal adversarial injection tokens
- Empirical validation of transferability across popular LLM-based retrieval benchmarks, demonstrating practical real-world security risk
🛡️ Threat Analysis
Proposes optimized token-level adversarial injections into documents to manipulate retrieval outputs at inference time: a direct adversarial input manipulation attack on LLM-based retrieval, framed as min-max optimization against surrogate models.