attack arXiv Aug 25, 2025 · Aug 2025
Kyohei Shiomi, Zhuotao Lian, Toru Nakanishi et al. · Hiroshima University
Attacks LLM-powered game NPCs via prompt injection to extract developer-embedded secrets from system prompts
Prompt Injection Sensitive Information Disclosure nlp
Large Language Models (LLMs) are increasingly used to generate dynamic dialogue for game NPCs. However, their integration raises new security concerns. In this study, we examine whether adversarial prompt injection can cause LLM-based NPCs to reveal hidden background secrets that are meant to remain undisclosed.
llm Hiroshima University
attack arXiv Aug 25, 2025 · Aug 2025
Zhuotao Lian, Weiyu Wang, Qingkui Zeng et al. · Hiroshima University · Hosei University +2 more
Demonstrates indirect prompt injection by embedding adversarial instructions in uploaded documents, hijacking LLM outputs across 7 major platforms
Prompt Injection Sensitive Information Disclosure nlp
Large Language Models (LLMs) are widely deployed in applications that accept user-submitted content, such as uploaded documents or pasted text, for tasks like summarization and question answering. In this paper, we identify a new class of attacks, prompt in content injection, where adversarial instructions are embedded in seemingly benign inputs. When processed by the LLM, these hidden prompts can manipulate outputs without user awareness or system compromise, leading to biased summaries, fabricated claims, or misleading suggestions. We demonstrate the feasibility of such attacks across popular platforms, analyze their root causes including prompt concatenation and insufficient input isolation, and discuss mitigation strategies. Our findings reveal a subtle yet practical threat in real-world LLM workflows.
llm Hiroshima University · Hosei University · Tongling University +1 more