
ObliInjection: Order-Oblivious Prompt Injection Attack to LLM Agents with Multi-source Data

Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong

2 citations · 55 references · arXiv

Published on arXiv: 2512.09321

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

ObliInjection successfully hijacks LLM task completion even when only 1 out of 6–100 input segments is attacker-controlled, evaluated across 12 LLMs

ObliInjection (orderGCG)

Novel technique introduced


Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended task. In many applications and agents, the input data originates from multiple sources, with each source contributing a segment of the overall input. In these multi-source scenarios, an attacker may control only a subset of the sources and contaminate the corresponding segments, but typically does not know the order in which the segments are arranged within the input. Existing prompt injection attacks either assume that the entire input data comes from a single source under the attacker's control or ignore the uncertainty in the ordering of segments from different sources. As a result, their success is limited in domains involving multi-source data. In this work, we propose ObliInjection, the first prompt injection attack targeting LLM applications and agents with multi-source input data. ObliInjection introduces two key technical innovations: the order-oblivious loss, which quantifies the likelihood that the LLM will complete the attacker-chosen task regardless of how the clean and contaminated segments are ordered; and the orderGCG algorithm, which is tailored to minimize the order-oblivious loss and optimize the contaminated segments. Comprehensive experiments across three datasets spanning diverse application domains and twelve LLMs demonstrate that ObliInjection is highly effective, even when only one out of 6-100 segments in the input data is contaminated. Our code and data are available at: https://github.com/ReachalWang/ObliInjection.


Key Contributions

  • Order-oblivious loss function that quantifies attack success probability across all possible orderings of clean and contaminated input segments
  • orderGCG algorithm, a GCG-variant optimizer tailored to minimize the order-oblivious loss and craft effective injection payloads
  • First prompt injection attack explicitly designed for multi-source LLM inputs, shown effective when only 1 of 6–100 segments is attacker-controlled across 12 LLMs
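Conceptually, the order-oblivious loss averages an attack objective over possible arrangements of the clean and contaminated segments, so that the optimized payload works no matter where the attacker's segment lands. A minimal sketch, assuming a toy scalar `loss_fn` in place of the LLM's token log-likelihood (the function and parameter names here are illustrative, not from the paper):

```python
import itertools
import random

def order_oblivious_loss(segments, injected, loss_fn, max_orders=24):
    """Average an attack loss over orderings of the clean segments plus
    one injected segment. Conceptual sketch only: the paper's loss is
    computed from LLM token log-probabilities, not this toy loss_fn."""
    items = list(segments) + [injected]
    orders = list(itertools.permutations(items))
    if len(orders) > max_orders:  # subsample when the factorial blows up
        orders = random.sample(orders, max_orders)
    return sum(loss_fn(order) for order in orders) / len(orders)
```

Minimizing this quantity, rather than the loss under a single fixed ordering, is what makes the resulting payload robust to the attacker not knowing the segment order.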

🛡️ Threat Analysis

Input Manipulation Attack

The orderGCG algorithm is a gradient-based adversarial token-optimization method (a variant of GCG) that crafts adversarial input segments to hijack LLM behavior. This is classic token-level adversarial perturbation at inference time, warranting the ML01 label alongside LLM01.
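orderGCG itself uses token-gradient information to propose substitutions, as in GCG. The gradient-free greedy loop below sketches only the search structure (iteratively mutate one token position, keep the swap if the loss drops) under a toy scalar loss; `VOCAB`, `greedy_swap_attack`, and the loss are illustrative assumptions, not the paper's implementation:

```python
import random

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")

def greedy_swap_attack(init, loss, steps=2000, seed=0):
    """Greedy coordinate search over an injected string: a simplified,
    gradient-free stand-in for a GCG-style token-substitution loop."""
    rng = random.Random(seed)
    cur = list(init)
    best = loss("".join(cur))
    for _ in range(steps):
        pos = rng.randrange(len(cur))      # pick a position to mutate
        cand = cur[:]
        cand[pos] = rng.choice(VOCAB)      # propose a token swap
        val = loss("".join(cand))
        if val < best:                     # keep only improving swaps
            cur, best = cand, val
    return "".join(cur), best
```

In the actual attack, the loss being minimized would be the order-oblivious loss over segment orderings, and candidate swaps would be ranked by token gradients rather than sampled uniformly.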


Details

Domains
nlp
Model Types
llm
Threat Tags
white_box · inference_time · targeted · digital
Datasets
three application-domain datasets (unspecified in abstract)
Applications
llm agents · multi-source llm applications