Published on arXiv: 2510.11851

Prompt Injection (OWASP LLM Top 10 — LLM01)

Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding: Harmful queries rejected by standalone LLMs elicit detailed, professional-quality dangerous reports when submitted to DR agents; the proposed jailbreaks further amplify success rates across multiple LLMs and safety benchmarks.

Plan Injection / Intent Hijack: novel techniques introduced in this work


Deep Research (DR) agents built on Large Language Models (LLMs) can perform complex, multi-step research by decomposing tasks, retrieving online information, and synthesizing detailed reports. However, misuse of such powerful capabilities can lead to even greater risks. This is especially concerning in high-stakes, knowledge-intensive domains such as biosecurity, where a DR agent can generate a professional report containing detailed forbidden knowledge. Unfortunately, we have found such risks in practice: simply submitting a harmful query that a standalone LLM directly rejects can elicit a detailed and dangerous report from DR agents. This highlights the elevated risks and underscores the need for a deeper safety analysis. Yet jailbreak methods designed for LLMs fall short of exposing these unique risks, as they do not target the research capabilities of DR agents. To address this gap, we propose two novel jailbreak strategies: Plan Injection, which injects malicious sub-goals into the agent's plan, and Intent Hijack, which reframes harmful queries as academic research questions. We conducted extensive experiments across different LLMs and various safety benchmarks, including general and biosecurity forbidden prompts. These experiments reveal three key findings: (1) the alignment of LLMs often fails in DR agents, where harmful prompts framed in academic terms can hijack agent intent; (2) multi-step planning and execution weaken alignment, revealing systemic vulnerabilities that prompt-level safeguards cannot address; (3) DR agents not only bypass refusals but also produce more coherent, professional, and dangerous content than standalone LLMs. These results demonstrate a fundamental misalignment in DR agents and call for alignment techniques tailored to them. Code and datasets are available at https://chenxshuo.github.io/deeper-harm.
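To make the attack surface concrete, the following is a minimal, hypothetical sketch of a plan-and-execute DR pipeline of the kind the paper studies. All names (`llm_complete`, `web_search`, `plan_research`, `execute_subgoal`, `deep_research`) are illustrative placeholders rather than the paper's implementation; the sketch only shows where planning, retrieval, and synthesis sit relative to one another.

```python
# Minimal, hypothetical sketch of a plan-and-execute Deep Research loop.
# All functions here are illustrative stubs, not the paper's implementation.
from dataclasses import dataclass, field


def llm_complete(prompt: str) -> str:
    """Stub for an LLM call (e.g., a chat-completions request)."""
    raise NotImplementedError("wire this to an actual LLM backend")


def web_search(query: str) -> str:
    """Stub for an online retrieval tool returning concatenated source snippets."""
    raise NotImplementedError("wire this to an actual search tool")


@dataclass
class ResearchPlan:
    query: str                                          # the user's original question
    subgoals: list[str] = field(default_factory=list)   # decomposed research steps


def plan_research(query: str) -> ResearchPlan:
    """Planning stage: an LLM decomposes the query into sub-goals."""
    raw = llm_complete(f"Decompose this research question into numbered steps:\n{query}")
    return ResearchPlan(query=query, subgoals=[s.strip() for s in raw.splitlines() if s.strip()])


def execute_subgoal(subgoal: str) -> str:
    """Execution stage: retrieve sources and summarize evidence for one sub-goal.
    Note that the executor prompt contains only the sub-goal, not the original query."""
    sources = web_search(subgoal)
    return llm_complete(f"Summarize evidence for: {subgoal}\n\nSources:\n{sources}")


def deep_research(query: str) -> str:
    """Full pipeline: plan, execute each sub-goal, then synthesize a report.
    The intermediate ResearchPlan is the artifact Plan Injection tampers with;
    the wording of `query` itself is what Intent Hijack reframes."""
    plan = plan_research(query)
    notes = [execute_subgoal(g) for g in plan.subgoals]
    return llm_complete("Write a structured research report from these notes:\n\n" + "\n\n".join(notes))
```

Because each executor call typically sees only a sub-goal rather than the user's full intent, refusal behavior triggered by the top-level query does not automatically carry over to the downstream prompts, which is consistent with findings (1) and (2) above.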


Key Contributions

  • Plan Injection: a jailbreak attack that embeds malicious sub-goals into a DR agent's multi-step research plan, exploiting the planning-execution pipeline to bypass alignment (sketched after this list)
  • Intent Hijack: a jailbreak strategy that reframes harmful queries as academic research questions to subvert LLM safety guardrails within the agent's context
  • Empirical demonstration that DR agents produce more coherent and dangerous outputs than standalone LLMs, and that prompt-level safeguards are insufficient against agentic multi-step execution
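To make the planning-execution tamper point in the first contribution concrete, here is a minimal, hypothetical illustration of where Plan Injection operates; `inject_subgoal` and the placeholder payload string are assumptions for illustration only and deliberately carry no actual attack content.

```python
# Hypothetical illustration only: Plan Injection operates on the intermediate
# sub-goal list produced by the planner, i.e., after any prompt-level safety
# check on the user's top-level query has already run. The payload below is a
# neutral placeholder, not an actual attack string.
INJECTED_SUBGOAL = "<attacker-controlled sub-goal placeholder>"


def inject_subgoal(subgoals: list[str], position: int | None = None) -> list[str]:
    """Return a tampered copy of the planner's sub-goal list with one extra entry."""
    tampered = list(subgoals)
    tampered.insert(len(tampered) if position is None else position, INJECTED_SUBGOAL)
    return tampered
```

The point of the sketch is structural: because the downstream executor treats every sub-goal as a legitimate research step, a tampered plan is carried out with the same diligence as a benign one.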

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
SciSafeEval, general forbidden prompts benchmark, biosecurity forbidden prompts benchmark
Applications
llm research agents, biosecurity knowledge retrieval, multi-step ai agents