attack 2025

ImportSnare: Directed "Code Manual" Hijacking in Retrieval-Augmented Code Generation

Kai Ye , Liangcai Su , Chenxiong Qian

0 citations

α

Published on arXiv

2509.07941

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ImportSnare achieves over 50% attack success rate on popular libraries (matplotlib, seaborn) and remains effective even at a 0.01% poisoning ratio in the RAG corpus.

ImportSnare

Novel technique introduced


Code generation has emerged as a pivotal capability of Large Language Models(LLMs), revolutionizing development efficiency for programmers of all skill levels. However, the complexity of data structures and algorithmic logic often results in functional deficiencies and security vulnerabilities in generated code, reducing it to a prototype requiring extensive manual debugging. While Retrieval-Augmented Generation (RAG) can enhance correctness and security by leveraging external code manuals, it simultaneously introduces new attack surfaces. In this paper, we pioneer the exploration of attack surfaces in Retrieval-Augmented Code Generation (RACG), focusing on malicious dependency hijacking. We demonstrate how poisoned documentation containing hidden malicious dependencies (e.g., matplotlib_safe) can subvert RACG, exploiting dual trust chains: LLM reliance on RAG and developers' blind trust in LLM suggestions. To construct poisoned documents, we propose ImportSnare, a novel attack framework employing two synergistic strategies: 1)Position-aware beam search optimizes hidden ranking sequences to elevate poisoned documents in retrieval results, and 2)Multilingual inductive suggestions generate jailbreaking sequences to manipulate LLMs into recommending malicious dependencies. Through extensive experiments across Python, Rust, and JavaScript, ImportSnare achieves significant attack success rates (over 50% for popular libraries such as matplotlib and seaborn) in general, and is also able to succeed even when the poisoning ratio is as low as 0.01%, targeting both custom and real-world malicious packages. Our findings reveal critical supply chain risks in LLM-powered development, highlighting inadequate security alignment for code generation tasks. To support future research, we will release the multilingual benchmark suite and datasets. The project homepage is https://importsnare.github.io.


Key Contributions

  • ImportSnare attack framework combining position-aware beam search (to rank poisoned docs higher in retrieval) with jailbreaking sequences (to override LLM safety alignment) for malicious dependency hijacking in RACG systems
  • Demonstrates effective attacks at poisoning ratios as low as 0.01% across Python, Rust, and JavaScript, targeting both custom and real-world malicious packages
  • Multilingual benchmark suite and datasets for evaluating RACG security, exposing inadequate LLM alignment for code generation tasks

🛡️ Threat Analysis

Input Manipulation Attack

The 'position-aware beam search' component crafts and optimizes poisoned documents to rank higher in RAG retrieval results — this is adversarial document injection for RAG systems, explicitly matching the ML01 guideline on adversarial content manipulation of LLM-integrated systems (adversarial SEO poisoning for RAG).


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
black_boxinference_timetargeted
Datasets
custom multilingual RACG benchmark (Python, Rust, JavaScript)matplotlibseaborn
Applications
rag-based code generationllm code assistantsai-powered developer tools