Qianli Wang

h-index: 0 0 citations 0 papers (total)

Papers in Database (1)

attack arXiv Feb 11, 2026 · 7w ago

When Skills Lie: Hidden-Comment Injection in LLM Agents

Qianli Wang, Boyang Ma, Minghui Xu et al. · Shandong University

Demonstrates hidden-comment prompt injection in LLM agent Skill documents, invisible to humans but followed by models, triggering malicious tool calls

Prompt Injection Insecure Plugin Design nlp
PDF