Ying Chen

h-index: 1 · 3 citations · 4 papers (total)

Papers in Database (2)

defense · arXiv · Nov 13, 2025

PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization

Runpeng Geng, Yanting Wang, Chenlong Yin et al. · The Pennsylvania State University

Defends long-context LLMs against prompt injection by sanitizing high-attention tokens that drive injected instruction-following behavior

Prompt Injection · NLP
3 citations · 1 influential · PDF · Code
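The entry above describes sanitizing high-attention tokens before they can drive injected instruction-following. A minimal sketch of that idea, not the paper's actual algorithm: flag tokens whose aggregate attention score is a statistical outlier (z-score above a threshold) and drop them. The `sanitize_prompt` function, the z-score heuristic, and the example scores are all hypothetical illustrations.

```python
import numpy as np

def sanitize_prompt(tokens, attention_scores, z_thresh=1.0):
    """Drop tokens whose aggregate attention is an outlier, on the
    assumption that injected instructions attract disproportionate
    attention. A toy simplification, not the paper's method."""
    scores = np.asarray(attention_scores, dtype=float)
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return [tok for tok, zi in zip(tokens, z) if zi < z_thresh]

# Hypothetical context where an injected instruction draws high attention.
tokens = ["Summarize", "the", "report", ".", "IGNORE", "PREVIOUS", "INSTRUCTIONS"]
scores = [0.10, 0.05, 0.12, 0.03, 0.90, 0.85, 0.95]
print(sanitize_prompt(tokens, scores))
# → ['Summarize', 'the', 'report', '.']
```

In a real system the per-token scores would come from the model's own attention maps over the long context, not from a hand-written list.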
defense · arXiv · Oct 15, 2025

PIShield: Detecting Prompt Injection Attacks via Intrinsic LLM Features

Wei Zou, Yupei Liu, Yanting Wang et al. · Pennsylvania State University · Duke University

Detects prompt injection in LLM applications using residual-stream representations and a lightweight linear classifier

Prompt Injection · NLP
PDF
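The entry above describes detection from residual-stream representations with a lightweight linear classifier. A toy sketch of that pipeline shape, under stated assumptions: the activations are simulated as two Gaussian clusters (real features would be extracted from an LLM's residual stream at some layer), and the classifier is plain logistic regression trained by gradient descent. None of this reproduces the paper's actual features or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for residual-stream activations: clean vs. injected
# prompts simulated as Gaussians with a mean shift.
d = 16
clean = rng.normal(0.0, 1.0, size=(200, d))
injected = rng.normal(1.5, 1.0, size=(200, d))
X = np.vstack([clean, injected])
y = np.array([0] * 200 + [1] * 200)

# Lightweight linear classifier: logistic regression via gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted injection probability
    g = p - y                                 # gradient of log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

acc = (((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The point of the sketch is the cheapness of the detector: once features are fixed, a single linear probe separates the two classes.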