attack 2026

PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses

Chenlong Yin , Runpeng Geng , Yanting Wang , Jinyuan Jia

0 citations

α

Published on arXiv

2603.13026

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves highest attack success rates across 13 benchmarks against 8 state-of-the-art defenses, demonstrating that current prompt injection defenses cannot maintain utility while resisting adaptive attacks

PISmith

Novel technique introduced


Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an attack LLM to optimize injected prompts in a practical black-box setting, where the attacker can only query the defended LLM and observe its outputs. We find that directly applying standard GRPO to attack strong defenses leads to sub-optimal performance due to extreme reward sparsity -- most generated injected prompts are blocked by the defense, causing the policy's entropy to collapse before discovering effective attack strategies, while the rare successes cannot be learned effectively. In response, we introduce adaptive entropy regularization and dynamic advantage weighting to sustain exploration and amplify learning from scarce successes. Extensive evaluation on 13 benchmarks demonstrates that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks. We also compare PISmith with 7 baselines across static, search-based, and RL-based attack categories, showing that PISmith consistently achieves the highest attack success rates. Furthermore, PISmith achieves strong performance in agentic settings on InjecAgent and AgentDojo against both open-source and closed-source LLMs (e.g., GPT-4o-mini and GPT-5-nano). Our code is available at https://github.com/albert-y1n/PISmith.


Key Contributions

  • RL-based red-teaming framework (PISmith) that trains an attack LLM to optimize prompt injection attacks against defended LLMs in black-box settings
  • Adaptive entropy regularization and dynamic advantage weighting to overcome extreme reward sparsity when attacking strong defenses
  • Systematic evaluation across 13 benchmarks and 8 defenses, demonstrating that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
InjecAgentAgentDojo
Applications
autonomous agentsretrieval-augmented generationquestion answeringlong-context llm applications