Deep Research with Open-Domain Evaluation and Multi-Stage Guardrails for Safety
Wei-Chieh Huang 1, Henry Peng Zou 1, Yaozu Wu 2, Dongyuan Li 2, Yankai Chen 1, Weizhi Zhang 1, Yangning Li 3, Angelo Zangari 1, Jizhou Guo 4, Chunyu Miao 1, Liancheng Fang 1, Langzhou He 1, Yinghui Li 3, Renhe Jiang 2, Philip S. Yu 1
Published on arXiv
arXiv:2510.10994
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
DeepResearchGuard improves defense success rates by 16.53% over baseline frameworks while keeping over-refusal rates at 6% across five frontier LLMs.
DeepResearchGuard
Novel technique introduced
Deep research frameworks have shown promising capabilities in synthesizing comprehensive reports from web sources. Although deep research has significant potential to address complex problems through planning and research cycles, existing frameworks lack adequate evaluation procedures and stage-specific protections: they typically treat evaluation as exact-match question-answering accuracy and overlook crucial aspects of report quality such as credibility, coherence, breadth, depth, and safety. As a result, hazardous or malicious sources can be integrated into the final report. To address this, we introduce DeepResearchGuard, a framework featuring four-stage safeguards with open-domain evaluation, and DRSafeBench, a novel stage-wise safety benchmark. Evaluated across GPT-4o, o4-mini, Gemini-2.5-flash, DeepSeek-v3, and GPT-5, DeepResearchGuard improves defense success rates by 16.53% while reducing over-refusal rates to 6%. Through extensive experiments on DRSafeBench, we show that DeepResearchGuard's open-domain evaluation and stage-aware defenses effectively block the propagation of harmful content while systematically improving report quality without excessive over-refusal.
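To make the open-domain evaluation concrete, the sketch below scores a report along the five dimensions named in the abstract using an LLM judge. It is a minimal illustration under assumed interfaces: the rubric prompt, the 0-10 scale, and the `llm_call` function are hypothetical, not the paper's actual implementation.

```python
import json

# The five report-quality dimensions named in the paper.
DIMENSIONS = ["credibility", "coherence", "breadth", "depth", "safety"]

# Hypothetical judge rubric; the paper's exact prompt and scale may differ.
JUDGE_PROMPT = """Grade the research report below.
Score each dimension from 0 (worst) to 10 (best) and reply with JSON only:
{{"credibility": ..., "coherence": ..., "breadth": ..., "depth": ..., "safety": ...}}

Report:
{report}
"""

def evaluate_report(report: str, llm_call) -> dict:
    """Score a report on all five dimensions with an LLM judge.
    `llm_call` is any text-in/text-out completion function (assumed)."""
    raw = llm_call(JUDGE_PROMPT.format(report=report))
    scores = json.loads(raw)
    # Keep only the expected dimensions, clamped to the 0-10 rubric range.
    return {d: max(0, min(10, int(scores[d]))) for d in DIMENSIONS}
```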
Key Contributions
- DeepResearchGuard: a four-stage safety guardrail framework for LLM deep-research agents covering query planning, source retrieval, content synthesis, and report generation (see the pipeline sketch after this list)
- DRSafeBench: a novel stage-wise safety benchmark with open-domain evaluation metrics (credibility, coherence, breadth, depth, safety) beyond QA accuracy
- Empirical demonstration of a 16.53% improvement in defense success rate with only a 6% over-refusal rate across GPT-4o, o4-mini, Gemini-2.5-flash, DeepSeek-v3, and GPT-5
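The four stages in the first contribution suggest a simple dispatch pattern: run each agent step, then pass its output through a stage-specific guard before it propagates downstream. The sketch below is an illustrative reconstruction under assumed interfaces (the `Verdict` type and the stage/guard callables are hypothetical), not DeepResearchGuard's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    payload: str          # possibly sanitized content
    reason: str = ""

GuardFn = Callable[[str], Verdict]

def run_guarded_pipeline(query: str,
                         stages: dict[str, Callable[[str], str]],
                         guards: dict[str, GuardFn]) -> str:
    """Run query planning -> source retrieval -> content synthesis ->
    report generation, applying the stage guard before any output
    propagates to the next stage."""
    payload = query
    for stage in ["query_planning", "source_retrieval",
                  "content_synthesis", "report_generation"]:
        payload = stages[stage](payload)   # the agent's own step
        verdict = guards[stage](payload)   # stage-specific safety check
        if not verdict.allowed:
            return f"[blocked at {stage}] {verdict.reason}"
        payload = verdict.payload          # pass sanitized output onward
    return payload
```

Blocking early, for example at source retrieval, is what keeps a poisoned page from ever reaching synthesis; that is the stage-aware property the abstract describes.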