defense 2026

Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection

Takashi Koide , Hiroki Nakano , Daiki Chiba

0 citations · 54 references · arXiv (Cornell University)

α

Published on arXiv

2602.05484

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PI attacks exploiting perceptual asymmetry succeed against GPT-5-based phishing detectors; InjectDefuser significantly reduces attack success rates across diverse models and attack patterns.

InjectDefuser

Novel technique introduced


Phishing sites continue to grow in volume and sophistication. Recent work leverages large language models (LLMs) to analyze URLs, HTML, and rendered content to decide whether a website is a phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential for PI that exploits the perceptual asymmetry between LLMs and humans: instructions imperceptible to end users can still be parsed by the LLM and can stealthily manipulate its judgment. The specific risks of PI in phishing detection and effective mitigation strategies remain largely unexplored. This paper presents the first comprehensive evaluation of PI against multimodal LLM-based phishing detection. We introduce a two-dimensional taxonomy, defined by Attack Techniques and Attack Surfaces, that captures realistic PI strategies. Using this taxonomy, we implement diverse attacks and empirically study several representative LLM-based detection systems. The results show that phishing detection with state-of-the-art models such as GPT-5 remains vulnerable to PI. We then propose InjectDefuser, a defense framework that combines prompt hardening, allowlist-based retrieval augmentation, and output validation. Across multiple models, InjectDefuser significantly reduces attack success rates. Our findings clarify the PI risk landscape and offer practical defenses that improve the reliability of next-generation phishing countermeasures.


Key Contributions

  • First comprehensive two-dimensional taxonomy of prompt injection attacks (Attack Techniques × Attack Surfaces) in the LLM-based phishing detection context
  • Empirical evaluation showing that state-of-the-art LLMs including GPT-5 remain vulnerable to PI attacks embedded in phishing site content
  • InjectDefuser: a defense framework combining prompt hardening, allowlist-based retrieval augmentation, and output validation that significantly reduces PI attack success rates across multiple LLM vendors

🛡️ Threat Analysis


Details

Domains
nlpmultimodal
Model Types
llmvlm
Threat Tags
black_boxinference_time
Applications
phishing detectionllm-based web security systems