
Exposing Long-Tail Safety Failures in Large Language Models through Efficient Diverse Response Sampling

Suvadeep Hajra , Palash Nandi , Tanmoy Chakraborty

0 citations


Published on arXiv: 2603.14355

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves attack success rates comparable to large-scale IID sampling while using only 8-29% of the computational cost; in limited-response settings, improves success rates by 26-40% over IID sampling and Diverse Beam Search

Progressive Diverse Population Sampling (PDPS)

Novel technique introduced


Safety tuning through supervised fine-tuning and reinforcement learning from human feedback has substantially improved the robustness of large language models (LLMs). However, it often suppresses rather than eliminates unsafe behaviors, leaving rare but critical failures hidden in the long tail of the output distribution. While most red-teaming work emphasizes adversarial prompt search (input-space optimization), we show that safety failures can also be systematically exposed through diverse response generation (output-space exploration) for a fixed safety-critical prompt, where increasing the number and diversity of sampled responses can drive jailbreak success rates close to unity. To efficiently uncover such failures, we propose Progressive Diverse Population Sampling (PDPS), which combines stochastic token-level sampling with diversity-aware selection to explore a large candidate pool of responses and retain a compact, semantically diverse subset. Across multiple jailbreak benchmarks and open-source LLMs, PDPS achieves attack success rates comparable to large-scale IID sampling while using only 8% to 29% of the computational cost. Under limited-response settings, it improves success rates by 26% to 40% over IID sampling and Diverse Beam Search. Furthermore, responses generated by PDPS exhibit both a higher number and greater diversity of unsafe outputs, demonstrating its effectiveness in uncovering a broader range of failures.
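The abstract describes diversity-aware selection as retaining a compact, semantically diverse subset from a large candidate pool of responses. The paper's exact selection procedure is not given here; as one illustrative sketch, greedy farthest-point selection over response embeddings (all names and the distance choice are assumptions, not the authors' implementation) captures the idea of keeping candidates that are maximally spread out in embedding space:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def diverse_subset(embeddings, k):
    """Greedy farthest-point selection: starting from the first
    candidate, repeatedly add the candidate whose minimum distance
    to the already-selected set is largest. Returns selected indices."""
    selected = [0]  # seed with the first candidate
    while len(selected) < min(k, len(embeddings)):
        best, best_d = None, -1.0
        for i in range(len(embeddings)):
            if i in selected:
                continue
            # distance to the nearest already-selected response
            d = min(cosine_distance(embeddings[i], embeddings[j])
                    for j in selected)
            if d > best_d:
                best, best_d = i, d
        selected.append(best)
    return selected

# Toy pool: candidates 0 and 1 are near-duplicates; 2 points elsewhere.
pool = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
print(diverse_subset(pool, 2))  # → [0, 2]: the near-duplicate is skipped
```

Any such selection step would sit on top of stochastic token-level sampling (e.g., temperature sampling) that generates the candidate pool; embeddings here stand in for whatever semantic representation of responses is used.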


Key Contributions

  • Demonstrates that diverse response sampling can expose safety failures hidden in the long tail of LLM output distributions
  • Proposes Progressive Diverse Population Sampling (PDPS) that achieves comparable jailbreak success rates to large-scale IID sampling with only 8-29% of computational cost
  • Shows 26-40% improvement in attack success rates over baselines in limited-response settings



Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
jailbreak benchmarks
Applications
llm safety auditing, red-teaming, jailbreak detection