Jingyu Zhang

h-index: 4 · 64 citations · 9 papers (total)

Papers in Database (2)

survey · arXiv · Oct 17, 2025

SoK: Taxonomy and Evaluation of Prompt Security in Large Language Models

Hanbin Hong, Shuya Feng, Nima Naderloui et al. · University of Connecticut · University of Alabama at Birmingham

SoK survey unifying LLM jailbreak taxonomy, threat models, and an evaluation toolkit, accompanied by the largest annotated jailbreak dataset

Input Manipulation Attack · Prompt Injection · nlp
2 citations · 1 influential · PDF · Code
benchmark · arXiv · Dec 30, 2025

Language Model Agents Under Attack: A Cross-Model Benchmark of Profit-Seeking Behaviors in Customer Service

Jingyu Zhang · University of Washington

Benchmarks profit-seeking prompt-injection attacks on customer-service LLM agents across 10 domains and 5 models, finding payload splitting to be the most effective attack

Prompt Injection · nlp
PDF