attack 2025

Prompt-in-Content Attacks: Exploiting Uploaded Inputs to Hijack LLM Behavior

Zhuotao Lian 1, Weiyu Wang 2, Qingkui Zeng 3, Toru Nakanishi 1, Teruaki Kitasuka 1, Chunhua Su 4

0 citations

α

Published on arXiv

2508.19287

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Most tested LLM platforms fail to block prompt-in-content injection attacks, with Grok 3 and DeepSeek R1 executing every embedded adversarial instruction without resistance.

Prompt-in-Content Injection

Novel technique introduced


Large Language Models (LLMs) are widely deployed in applications that accept user-submitted content, such as uploaded documents or pasted text, for tasks like summarization and question answering. In this paper, we identify a new class of attacks, prompt in content injection, where adversarial instructions are embedded in seemingly benign inputs. When processed by the LLM, these hidden prompts can manipulate outputs without user awareness or system compromise, leading to biased summaries, fabricated claims, or misleading suggestions. We demonstrate the feasibility of such attacks across popular platforms, analyze their root causes including prompt concatenation and insufficient input isolation, and discuss mitigation strategies. Our findings reveal a subtle yet practical threat in real-world LLM workflows.


Key Contributions

  • Formalizes prompt-in-content injection as a new attack class where adversarial instructions embedded in uploaded documents hijack LLM behavior during normal user tasks
  • Designs and evaluates four attack variants (task suppression, output substitution, behavioral redirection, framing manipulation) across seven major LLM platforms including ChatGPT 4o, Claude Sonnet, Grok 3, and DeepSeek R1
  • Demonstrates a covert information exfiltration extension where embedded prompts encode chat history contents into attacker-controlled URLs, and analyzes root causes (prompt concatenation, insufficient input isolation) with mitigation strategies

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeteddigital
Datasets
ChatGPT 4oClaude SonnetGrok 3DeepSeek R1
Applications
document summarizationquestion answeringllm-based document processing