
Published on arXiv

2601.04093

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SearchAttack effectively elicits harmful outputs from search-augmented LLMs, and even LLMs without web search can be steered into producing harmful content via their stereotyped information-seeking behaviors.

SearchAttack

Novel technique introduced


LLM hallucination has made users increasingly aware of the reliability gap of LLMs in open-ended, knowledge-intensive tasks, and many have turned to search-augmented LLMs to mitigate it. However, LLM-driven search is itself an attractive target for misuse: once returned content directly contains targeted, ready-to-use harmful instructions or takeaways, the exposure is difficult to withdraw or undo. To investigate LLMs' unsafe search behaviors, we first propose ***SearchAttack*** for red-teaming, which (1) rephrases harmful semantics as dense, benign knowledge to evade direct in-context decoding and thereby elicit unsafe information retrieval, and (2) stress-tests LLMs' reward-chasing bias by steering them to synthesize unsafe retrieved content. We also curate an emergent, domain-specific illicit-activity benchmark for search-based threat assessment, and introduce a fact-checking framework to ground and quantify harm in both offline and online attack settings. Extensive experiments red-team search-augmented LLMs for responsible vulnerability assessment. Empirically, SearchAttack is highly effective against these systems. We also find that LLMs without web search can still be steered into producing harmful content due to their stereotyped information-seeking behaviors.


Key Contributions

  • SearchAttack, a red-teaming framework that rephrases harmful queries as dense, benign knowledge to evade safety filters and elicit unsafe information retrieval from search-augmented LLMs
  • A stress-testing attack that exploits LLMs' reward-chasing bias to steer them into synthesizing unsafe retrieved content
  • A domain-specific illicit-activity benchmark and a fact-checking framework for quantifying harm in offline and online search-augmented LLM attack settings

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
custom illicit activity benchmark (domain-specific)
Applications
search-augmented llms, retrieval-augmented generation (rag)