MalURLBench: A Benchmark Evaluating Agents' Vulnerabilities When Processing Web URLs
Dezhang Kong 1, Zhuxi Wu 2, Shiqi Liu 3, Zhicheng Tan 4, Kuichen Lu 4, Minghao Li 5, Qichen Liu 6, Shengyu Chu 4, Zhenhua Xu 1, Xuan Liu 4, Meng Han 1
Published on arXiv (2601.18113)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
All 12 evaluated LLMs struggle to detect elaborately disguised malicious URLs, demonstrating a critical and previously unbenchmarked vulnerability in LLM-based web agents.
Novel technique introduced: MalURLBench
LLM-based web agents have become increasingly popular for their utility in daily life and work. However, they exhibit a critical vulnerability when processing malicious URLs: once an agent accepts a disguised malicious URL, it can go on to access unsafe webpages, causing severe damage to service providers and users. Despite this risk, no existing benchmark targets this emerging threat. To address the gap, we propose MalURLBench, the first benchmark for evaluating LLMs' vulnerability to malicious URLs. MalURLBench contains 61,845 attack instances spanning 10 real-world scenarios and 7 categories of real malicious websites. Experiments with 12 popular LLMs reveal that existing models struggle to detect elaborately disguised malicious URLs. We further identify and analyze key factors that affect attack success rates, and we propose URLGuard, a lightweight defense module. We believe this work provides a foundational resource for advancing the security of web agents. Our code is available at https://github.com/JiangYingEr/MalURLBench.
Key Contributions
- MalURLBench: the first benchmark with 61,845 attack instances across 10 real-world scenarios and 7 malicious URL categories for evaluating LLM web agent vulnerabilities
- Empirical evaluation of 12 popular LLMs showing they consistently fail to detect elaborately disguised malicious URLs
- URLGuard: a lightweight defense module and analysis of key factors influencing attack success rates
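The paper does not detail URLGuard's internals here, but a lightweight pre-fetch URL check in its spirit can be sketched with standard heuristics for disguised URLs. Everything below is illustrative: the function name `is_suspicious`, the `SUSPICIOUS_TLDS` and `TRUSTED_BRANDS` sets, and the specific rules are assumptions, not the paper's actual method.

```python
from urllib.parse import urlparse

# Illustrative allow/deny data -- not from the paper.
SUSPICIOUS_TLDS = {"zip", "tk", "top", "xyz"}
TRUSTED_BRANDS = {"paypal", "google", "github"}


def is_suspicious(url: str) -> bool:
    """Flag URLs that use common disguise tricks before an agent fetches them."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    # Non-web schemes (javascript:, data:, file:) are rejected outright.
    if parsed.scheme not in ("http", "https"):
        return True

    # Userinfo trick: https://paypal.com@evil.com really points at evil.com.
    if "@" in parsed.netloc:
        return True

    # Raw IP-literal host instead of a domain name.
    if host and host.replace(".", "").isdigit():
        return True

    # Punycode labels can hide homoglyph lookalikes (e.g. xn--pypal-4ve.com).
    labels = host.split(".")
    if any(label.startswith("xn--") for label in labels):
        return True

    # Cheap, frequently abused TLDs.
    if len(labels) > 1 and labels[-1] in SUSPICIOUS_TLDS:
        return True

    # A trusted brand name embedded in a subdomain of an unrelated domain,
    # e.g. paypal.secure-login.evil.com.
    if any(brand in label for brand in TRUSTED_BRANDS for label in labels[:-2]):
        return True

    return False
```

Such static checks are cheap enough to run on every URL an agent is asked to visit, but they only catch the surface-level disguises; the benchmark's point is that more elaborate disguises defeat both these heuristics and the LLMs themselves.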