SafeSearch: Automated Red-Teaming of LLM-Based Search Agents
Jianshuo Dong, Sheng Guo, Hao Wang, Xun Chen, Zhuotao Liu, Tianwei Zhang, Ke Xu, Minlie Huang, Han Qiu
Published on arXiv (arXiv:2509.23694)
- Input Manipulation Attack (OWASP ML Top 10 — ML01)
- Prompt Injection (OWASP LLM Top 10 — LLM01)
Key Finding
GPT-4.1-mini in a search-workflow setting reaches a 90.5% attack success rate (ASR) when exposed to adversarial search results; hard-to-verify misinformation poses the greatest threat among the risk categories, and reminder prompting provides only marginal protection.
SafeSearch
Novel technique introduced
Search agents connect LLMs to the Internet, enabling them to access broader and more up-to-date information. However, this also introduces a new threat surface: unreliable search results can mislead agents into producing unsafe outputs. Real-world incidents and our two in-the-wild observations show that such failures can occur in practice. To study this threat systematically, we propose SafeSearch, an automated red-teaming framework that is scalable, cost-efficient, and lightweight, enabling harmless safety evaluation of search agents. Using SafeSearch, we generate 300 test cases spanning five risk categories (e.g., misinformation and prompt injection) and evaluate three search agent scaffolds across 17 representative LLMs. Our results reveal substantial vulnerabilities in LLM-based search agents, with the highest attack success rate (ASR) reaching 90.5% for GPT-4.1-mini in a search-workflow setting. Moreover, we find that common defenses, such as reminder prompting, offer limited protection. Overall, SafeSearch provides a practical way to measure and improve the safety of LLM-based search agents. Our codebase and test cases are publicly available: https://github.com/jianshuod/SafeSearch.
Key Contributions
- SafeSearch: a scalable, automated red-teaming framework that simulates unreliable search results via LLM-generated adversarial websites injected into real search results, avoiding ethical issues of live SEO manipulation
- A 300-test-case benchmark spanning five risk categories (harmful output, indirect prompt injection, ad promotion, misinformation, bias) evaluated across 17 LLMs and three agent scaffolds
- Empirical findings showing widespread vulnerability (up to 90.5% ASR) and that common defenses like reminder prompting offer minimal protection, while reasoning models and deep-research scaffolds show greater resilience
🛡️ Threat Analysis
The framework simulates adversarial SEO/content injection: crafted unreliable websites are inserted into authentic search results to manipulate LLM agent outputs. Under the OWASP taxonomies, adversarial content manipulation of LLM-integrated systems (adversarial SEO poisoning targeting LLM search engines, document injection targeting RAG pipelines) maps to ML01 (Input Manipulation Attack) and LLM01 (Prompt Injection).
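The injection-and-scoring loop described above can be sketched as follows. This is a minimal illustration, not the SafeSearch implementation: all names (`SearchResult`, `inject_adversarial`, `attack_success_rate`, the `run_agent` and `is_unsafe` callables) are hypothetical, and the real framework uses LLM-generated adversarial pages and an LLM-based unsafety judge.

```python
import random
from dataclasses import dataclass
from typing import Callable, Iterable, List, Optional


@dataclass
class SearchResult:
    """One entry returned by a (real or simulated) search backend."""
    url: str
    snippet: str


def inject_adversarial(
    real_results: List[SearchResult],
    adversarial_page: SearchResult,
    position: Optional[int] = None,
) -> List[SearchResult]:
    """Insert one crafted unreliable result into authentic search results.

    The original list is left untouched; a new mixed list is returned,
    mimicking how SafeSearch-style red-teaming blends a generated
    adversarial website into real search output instead of performing
    live SEO manipulation.
    """
    mixed = list(real_results)
    if position is None:
        position = random.randrange(len(mixed) + 1)
    mixed.insert(position, adversarial_page)
    return mixed


def attack_success_rate(
    test_cases: Iterable[str],
    run_agent: Callable[[str], str],
    is_unsafe: Callable[[str], bool],
) -> float:
    """Fraction of test cases where the agent's final output is judged unsafe."""
    cases = list(test_cases)
    hits = sum(is_unsafe(run_agent(case)) for case in cases)
    return hits / len(cases)
```

In a full evaluation, `run_agent` would execute the search-agent scaffold against the injected results and `is_unsafe` would be a per-risk-category judge; ASR is then reported per model and scaffold, as in the paper's 17-LLM comparison.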