attack · arXiv · Oct 13, 2025
Vasilije Stambolic, Aritra Dhar, Lukas Cavigelli · EPFL · Huawei Technologies Switzerland AG
Inserts hidden UTF characters into RAG queries and code repositories to redirect retrieval toward attacker-controlled, vulnerable code snippets.
Input Manipulation Attack Prompt Injection nlp
Retrieval-Augmented Generation (RAG) improves the reliability and trustworthiness of LLM responses and reduces hallucination by grounding them in external data added to the LLM's context, without requiring model retraining. We develop RAG-Pull, a new class of black-box attacks that inserts hidden UTF characters into queries or external code repositories, redirecting retrieval toward malicious code and thereby breaking the model's safety alignment. We observe that query or code perturbations alone can shift retrieval toward attacker-controlled snippets, while combined query-and-target perturbations achieve near-perfect success. Once retrieved, these snippets introduce exploitable vulnerabilities such as remote code execution and SQL injection. RAG-Pull's minimal perturbations can alter the model's safety alignment and increase its preference for unsafe code, opening up a new class of attacks on LLMs.
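A minimal sketch of the kind of perturbation described above, assuming zero-width characters (e.g. U+200B) stand in for the "hidden UTF characters" the attack inserts; the paper's exact character set and placement strategy are not specified here and may differ:

```python
# Illustration of a hidden-character query perturbation.
# Assumption: zero-width spaces model the invisible UTF characters;
# the real attack's characters and positions may differ.

ZWSP = "\u200b"  # zero-width space: renders as nothing in most UIs

def perturb(query: str, every: int = 4) -> str:
    """Insert a zero-width space after every `every` characters."""
    out = []
    for i, ch in enumerate(query, 1):
        out.append(ch)
        if i % every == 0:
            out.append(ZWSP)
    return "".join(out)

original = "how to sanitize SQL input in python"
perturbed = perturb(original)

# Visually identical to a human, but a different byte sequence --
# so an embedding model may map it to a different point in vector
# space, shifting which snippets the retriever ranks highest.
print(perturbed == original)                    # False
print(perturbed.replace(ZWSP, "") == original)  # True
```

The point of the sketch is only that an invisible edit changes the string the embedding model sees; how far that moves the query in embedding space, and toward which snippets, is what the attack optimizes.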
llm transformer