
Jailbreaking in the Haystack

Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena, Ziqian Zhong, Alexander Robey, Aditi Raghunathan

2 citations · 23 references · arXiv


Published on arXiv: 2511.04707

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

NINJA significantly increases attack success rates on HarmBench across LLaMA, Qwen, Mistral, and Gemini while being low-resource, transferable, and harder to detect than prior jailbreaks.

NINJA (Needle-in-haystack Jailbreak Attack)

Novel technique introduced


Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our method is the observation that the position of the harmful goal plays an important role in safety. Experiments on the standard safety benchmark HarmBench show that NINJA significantly increases attack success rates across state-of-the-art open and proprietary models, including LLaMA, Qwen, Mistral, and Gemini. Unlike prior jailbreaking methods, our approach is low-resource, transferable, and less detectable. Moreover, we show that NINJA is compute-optimal -- under a fixed compute budget, increasing context length can outperform increasing the number of trials in a best-of-N jailbreak. These findings reveal that even benign long contexts -- when crafted with careful goal positioning -- introduce fundamental vulnerabilities in modern LMs.


Key Contributions

  • NINJA attack: appends large volumes of benign model-generated content around harmful goals to exploit positional safety blindspots in long-context LLMs
  • Demonstrates that harmful goal position within long contexts is a critical, underexplored factor in LLM safety alignment
  • Shows NINJA is compute-optimal — under a fixed compute budget, increasing context length outperforms best-of-N sampling
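To make the core mechanism concrete, the sketch below illustrates the general shape of a haystack-style prompt: a goal string embedded at a controllable relative position inside a large block of benign filler text. This is an illustrative reconstruction, not the authors' code; the function name, signature, and filler text are assumptions.

```python
# Hypothetical sketch of NINJA-style prompt construction.
# Benign filler ("the haystack") surrounds the goal, and the goal's
# relative position in the context is the key controlled variable.

def build_haystack_prompt(goal: str, haystack: str, position: float = 0.0) -> str:
    """Place `goal` at relative `position` in the filler text
    (0.0 = start of context, 1.0 = end). Illustrative only."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("position must be in [0, 1]")
    split = int(len(haystack) * position)
    prefix, suffix = haystack[:split], haystack[split:]
    return f"{prefix}\n\n{goal}\n\n{suffix}"

# Benign, topically unrelated filler stands in for model-generated content.
filler = "Photosynthesis converts light energy into chemical energy. " * 200
prompt = build_haystack_prompt("<user goal>", filler, position=0.0)
```

The paper's finding is that the same goal, held fixed, can elicit very different safety behavior depending on where it sits in the context, which is why `position` is the interesting knob here rather than the filler content itself.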

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
HarmBench
Applications
conversational ai, long-context llm agents, computer-use agents