SearchAttack: Red-Teaming LLMs against Knowledge-to-Action Threats under Online Web Search
Yu Yan 1,2, Sheng Sun 1, Mingfeng Li 3, Zheming Yang 1, Chiwei Zhu 4, Fei Ma 5, Benfeng Xu 4, Min Liu 1,2, Qi Li 6
1 Institute of Computing Technology, Chinese Academy of Sciences
2 University of Chinese Academy of Sciences
3 People’s Public Security University of China
4 University of Science and Technology of China
5 Guangdong Laboratory of Artificial Intelligence and Digital Economy
Published on arXiv
2601.04093
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
SearchAttack effectively elicits harmful outputs from search-augmented LLMs, and even LLMs without web search can be steered into producing harmful content via their stereotypical information-seeking behaviors.
SearchAttack
Novel technique introduced
LLM hallucination has made users increasingly aware of the reliability gap of LLMs in open-ended, knowledge-intensive tasks, and they have increasingly turned to search-augmented LLMs to mitigate it. However, LLM-driven search also becomes an attractive target for misuse: once the returned content directly contains targeted, ready-to-use harmful instructions or takeaways, such exposure is difficult to withdraw or undo. To investigate LLMs' unsafe search behaviors, we propose SearchAttack, a red-teaming method that (1) rephrases harmful semantics as dense, benign knowledge to evade direct in-context decoding, thereby eliciting unsafe information retrieval, and (2) stress-tests LLMs' reward-chasing bias by steering them to synthesize unsafe retrieved content. We also curate an emergent, domain-specific illicit-activity benchmark for search-based threat assessment, and introduce a fact-checking framework to ground and quantify harm in both offline and online attack settings. Extensive experiments red-team search-augmented LLMs for responsible vulnerability assessment. Empirically, SearchAttack is highly effective against these systems, and we find that LLMs without web search can still be steered into producing harmful content due to their stereotypical information-seeking behaviors.
Key Contributions
- SearchAttack, a red-teaming framework that rephrases harmful queries as dense, benign knowledge to evade safety filters and elicit unsafe information retrieval from search-augmented LLMs
- A stress-testing attack that exploits LLMs' reward-chasing bias to steer them into synthesizing unsafe retrieved content
- A domain-specific illicit-activity benchmark and a fact-checking framework for quantifying harm in both offline and online search-augmented attack settings