Toru Nakanishi

Papers in Database (2)

attack arXiv Aug 25, 2025 · Aug 2025

Tricking LLM-Based NPCs into Spilling Secrets

Kyohei Shiomi, Zhuotao Lian, Toru Nakanishi et al. · Hiroshima University

Attacks LLM-powered game NPCs via prompt injection to extract developer-embedded secrets from system prompts

Prompt Injection Sensitive Information Disclosure nlp
PDF
attack arXiv Aug 25, 2025 · Aug 2025

Prompt-in-Content Attacks: Exploiting Uploaded Inputs to Hijack LLM Behavior

Zhuotao Lian, Weiyu Wang, Qingkui Zeng et al. · Hiroshima University · Hosei University +2 more

Demonstrates indirect prompt injection by embedding adversarial instructions in uploaded documents, hijacking LLM outputs across 7 major platforms

Prompt Injection Sensitive Information Disclosure nlp
PDF