Jinghui Chen

h-index: 10 · 488 citations · 27 papers (total)

Papers in Database (2)

defense · CCS · Sep 26, 2025

You Can't Steal Nothing: Mitigating Prompt Leakages in LLMs via System Vectors

Bochuan Cao, Changjiang Li, Yuanpu Cao et al. · The Pennsylvania State University · Palo Alto Networks +1 more

Demonstrates attacks that extract system prompts from GPT-4o and Claude, then defends with SysVec, which encodes system prompts as hidden internal vectors

Sensitive Information Disclosure · nlp
5 citations · 1 influential · PDF
attack · arXiv · Nov 23, 2025

TASO: Jailbreak LLMs via Alternative Template and Suffix Optimization

Yanting Wang, Runpeng Geng, Jinghui Chen et al. · Pennsylvania State University

Combines gradient-based suffix optimization with semantic template optimization to jailbreak LLMs more effectively than either technique alone

Input Manipulation Attack · Prompt Injection · nlp
PDF