Language Model Agents Under Attack: A Cross-Model Benchmark of Profit-Seeking Behaviors in Customer Service
Published on arXiv
2512.24415
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Airline support is the most exploitable domain (~56% attack success rate) and payload splitting is the most consistently effective prompt injection technique across five widely-used LLMs.
Customer-service LLM agents increasingly make policy-bound decisions (refunds, rebooking, billing disputes), but the same "helpful" interaction style can be exploited: a small fraction of users can induce unauthorized concessions, shifting costs to others and eroding trust in agentic workflows. We present a cross-domain benchmark of profit-seeking direct prompt injection in customer-service interactions, spanning 10 service domains and 100 realistic attack scripts grouped into five technique families. Across five widely used models evaluated under a unified rubric with uncertainty reporting, attack success is highly domain-dependent (airline support is most exploitable) and technique-dependent (payload splitting is most consistently effective). We release data and evaluation code to support reproducible auditing and to inform the design of oversight and recovery workflows for trustworthy, human-centered agent interfaces.
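The benchmark structure described above (10 domains × 5 technique families, 100 scripts) suggests a simple record layout per attack script. The sketch below is purely illustrative; the field names and example values are assumptions, not the released dataset's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one benchmark attack script (field names assumed,
# not taken from the released data).
@dataclass
class AttackScript:
    script_id: int
    domain: str        # one of 10 customer-service domains, e.g. "airline_support"
    technique: str     # one of 5 technique families, e.g. "payload_splitting"
    turns: list = field(default_factory=list)  # attacker messages in order

# Illustrative entry; content is a placeholder, not a real script from the benchmark.
SCRIPTS = [
    AttackScript(
        script_id=1,
        domain="airline_support",
        technique="payload_splitting",
        turns=["<first fragment of the request>", "<second fragment>"],
    ),
]
```

Grouping scripts by `domain` and `technique` is what enables the paper's per-domain and per-technique success-rate comparisons.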
Key Contributions
- Cross-domain benchmark of 100 profit-seeking prompt injection attack scripts across 10 customer-service domains, organized into five technique families
- Systematic evaluation of first-turn susceptibility across GPT-5, DeepSeek v3.2, Claude Opus 4.1, Gemini 2.5 Pro, and GPT-4o under a unified rubric with uncertainty reporting and dual LLM judges
- Finding that airline support is the most exploitable domain (~56% attack success rate), that payload splitting is the most consistently effective technique (Spearman ρ = 0.90 across judges), and that DeepSeek v3.2 has the highest adjusted attack success probability (0.265 ± 0.056)
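Two statistics in the contributions invite a quick sketch: an "adjusted attack success probability with uncertainty" and a Spearman rank correlation between the two LLM judges. The code below is a minimal stdlib-only illustration, assuming a Beta-posterior smoothing for the success probability and a no-ties Spearman formula; the paper's actual adjustment and estimators may differ.

```python
import math

def beta_posterior(successes, trials, a=1.0, b=1.0):
    """Posterior mean and std of a success probability under a Beta(a, b) prior.

    With a uniform Beta(1, 1) prior this is Laplace-style smoothing of the
    raw success rate; an illustrative stand-in for the paper's adjustment.
    """
    a_post = a + successes
    b_post = b + trials - successes
    n = a_post + b_post
    mean = a_post / n
    var = (a_post * b_post) / (n * n * (n + 1))
    return mean, math.sqrt(var)

def spearman_rho(x, y):
    """Spearman rank correlation between two score lists (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical numbers: 56 successful attacks out of 100 scripts in one domain.
mean, std = beta_posterior(56, 100)

# Hypothetical per-technique scores from two judges; identical rankings give rho = 1.0.
rho = spearman_rho([0.9, 0.5, 0.3, 0.7, 0.1], [0.8, 0.4, 0.2, 0.6, 0.05])
```

A high ρ across the dual judges, as reported for payload splitting, indicates the two judges rank the techniques consistently, which supports the rubric's reliability.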