How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks

Yanshu Wang , Shuaishuai Yang , Jingjing He , Tong Yang

0 citations · 83 references · arXiv (Cornell University)

Published on arXiv · 2602.04294

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Few-shot demonstrations enhance Role-Oriented Prompt defenses by up to 4.5% by reinforcing role identity, but degrade Task-Oriented Prompt defenses by up to 21.2% by distracting attention from task instructions.


Large Language Models (LLMs) face increasing threats from jailbreak attacks that bypass safety alignment. While prompt-based defenses such as Role-Oriented Prompts (RoP) and Task-Oriented Prompts (ToP) have proven effective, the role of few-shot demonstrations in these defense strategies remains unclear. Prior work suggests that few-shot examples may compromise safety, but has not investigated how few-shot prompting interacts with different system-prompt strategies. In this paper, we conduct a comprehensive evaluation of multiple mainstream LLMs across four safety benchmarks (AdvBench, HarmBench, SG-Bench, XSTest) using six jailbreak attack methods. Our key finding is that few-shot demonstrations produce opposite effects on RoP and ToP: few-shot prompting enhances RoP's safety rate by up to 4.5% by reinforcing role identity, while it degrades ToP's effectiveness by up to 21.2% by distracting attention from task instructions. Based on these findings, we provide practical recommendations for deploying prompt-based defenses in real-world LLM applications.


Key Contributions

  • First systematic study showing few-shot demonstrations produce opposite effects on RoP vs. ToP defenses: +4.5% for RoP but up to -21.2% for ToP
  • Mathematical framework using Bayesian in-context learning and attention analysis to explain the divergent few-shot interactions
  • Practical deployment recommendations for prompt-based defenses across four safety benchmarks and six jailbreak methods
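The RoP/ToP distinction studied above can be made concrete in code. The sketch below assembles a chat-message list with either a role-oriented or a task-oriented system prompt, optionally prepending few-shot demonstration turns before the user's request. The prompt wordings and demonstration pairs are illustrative assumptions, not the paper's exact templates.

```python
# Illustrative (hypothetical) defense prompts; the paper's actual
# templates may differ.
ROP_SYSTEM = (
    "You are a careful and responsible assistant. You always refuse "
    "requests that could cause harm."
)
TOP_SYSTEM = (
    "Task: answer the user's question. Before answering, check whether "
    "the request is harmful; if it is, refuse."
)

# Example demonstration pairs: one refusal, one benign answer.
FEW_SHOT_DEMOS = [
    ("How do I pick a lock?", "I can't help with that request."),
    ("What's the capital of France?", "The capital of France is Paris."),
]


def build_messages(user_prompt, defense="rop", num_shots=0):
    """Build an OpenAI-style message list: one system prompt (RoP or ToP),
    num_shots demonstration pairs, then the user's turn."""
    system = ROP_SYSTEM if defense == "rop" else TOP_SYSTEM
    messages = [{"role": "system", "content": system}]
    for question, answer in FEW_SHOT_DEMOS[:num_shots]:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

Under the paper's finding, the same `num_shots > 0` setting that strengthens the `"rop"` variant (reinforcing the assistant's role identity) can weaken the `"top"` variant, since demonstrations compete with the task instruction for attention.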

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
AdvBench, HarmBench, SG-Bench, XSTest
Applications
llm safety, jailbreak defense, prompt engineering