Chunhua Su

Papers in Database (1)

attack arXiv Aug 25, 2025 · Aug 2025

Prompt-in-Content Attacks: Exploiting Uploaded Inputs to Hijack LLM Behavior

Zhuotao Lian, Weiyu Wang, Qingkui Zeng et al. · Hiroshima University · Hosei University +2 more

Demonstrates indirect prompt injection by embedding adversarial instructions in uploaded documents, hijacking LLM outputs across 7 major platforms

Prompt Injection Sensitive Information Disclosure nlp
PDF