Kyohei Shiomi

Papers in Database (1)

attack · arXiv · Aug 25, 2025

Tricking LLM-Based NPCs into Spilling Secrets

Kyohei Shiomi, Zhuotao Lian, Toru Nakanishi et al. · Hiroshima University

Attacks LLM-powered game NPCs via prompt injection to extract developer-embedded secrets from their system prompts.

Prompt Injection · Sensitive Information Disclosure · NLP