
Practical and Stealthy Touch-Guided Jailbreak Attacks on Deployed Mobile Vision-Language Agents

Renhua Ding 1,2, Xiao Yang 2, Zhengwei Fang 2, Jun Luo 2, Kun He 1, Jun Zhu 2

1 citation · 27 references · arXiv


Published on arXiv · 2510.07809

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Achieves 82.5% planning and 75.0% execution hijack rates against GPT-4o-backed mobile agents across three representative Android applications

HG-IDA*

Novel technique introduced


Large vision-language models (LVLMs) enable autonomous mobile agents to operate smartphone user interfaces, yet vulnerabilities in their perception and interaction remain critically understudied. Existing research often relies on conspicuous overlays, elevated permissions, or unrealistic threat assumptions, limiting stealth and real-world feasibility. In this paper, we introduce a practical and stealthy jailbreak attack framework, which comprises three key components: (i) non-privileged perception compromise, which injects visual payloads into the application interface without requiring elevated system permissions; (ii) agent-attributable activation, which leverages input attribution signals to distinguish agent from human interactions and limits prompt exposure to transient intervals to preserve stealth from end users; and (iii) efficient one-shot jailbreak, a heuristic iterative deepening search algorithm (HG-IDA*) that performs keyword-level detoxification to bypass built-in safety alignment of LVLMs. Moreover, we developed three representative Android applications and curated a prompt-injection dataset for mobile agents. We evaluated our attack across multiple LVLM backends, including closed-source services and representative open-source models, and observed high planning and execution hijack rates (e.g., GPT-4o: 82.5% planning / 75.0% execution), exposing a fundamental security vulnerability in current mobile agents and underscoring critical implications for autonomous smartphone operation.


Key Contributions

  • Non-privileged visual payload injection into Android app UIs without elevated permissions, enabling stealthy delivery of jailbreak content to LVLM-based mobile agents
  • Agent-attributable activation mechanism using input attribution signals to distinguish agent from human interactions, exposing injected prompts only during transient agent-perception windows to evade user detection
  • HG-IDA* heuristic iterative deepening search for one-shot keyword-level detoxification that bypasses LVLM safety alignment with high success rates across both open- and closed-source models
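The paper does not publish HG-IDA*'s internals here, but its core idea — an iterative deepening A* search that swaps flagged keywords for benign substitutes until the prompt no longer trips safety filters — can be sketched generically. Everything below is a hypothetical illustration: the `FLAGGED` keyword-to-substitute table, the admissible heuristic (one edit per remaining flagged word), and the goal test are placeholder assumptions, not the authors' actual heuristic or lexicon.

```python
# Hypothetical sketch: keyword-level detoxification via IDA* search.
# FLAGGED maps each "toxic" keyword to candidate benign substitutes;
# the table here is illustrative only.
FLAGGED = {
    "delete": ["remove", "clear"],
    "steal": ["collect", "gather"],
}

def heuristic(words):
    # Admissible estimate: each remaining flagged word needs >= 1 substitution.
    return sum(1 for w in words if w in FLAGGED)

def _search(words, g, bound):
    f = g + heuristic(words)
    if f > bound:
        return f  # cost exceeded this iteration's bound
    if heuristic(words) == 0:
        return words  # goal: no flagged keywords remain
    next_bound = float("inf")
    for i, w in enumerate(words):
        if w in FLAGGED:
            for sub in FLAGGED[w]:
                result = _search(words[:i] + [sub] + words[i + 1:], g + 1, bound)
                if isinstance(result, list):
                    return result
                next_bound = min(next_bound, result)
    return next_bound

def ida_star_detoxify(prompt):
    """Return a minimally edited prompt with no flagged keywords, or None."""
    words = prompt.split()
    bound = heuristic(words)
    while True:
        result = _search(words, 0, bound)
        if isinstance(result, list):
            return " ".join(result)
        if result == float("inf"):
            return None  # no substitution sequence reaches the goal
        bound = result  # deepen to the smallest f that exceeded the bound
```

Iterative deepening keeps memory linear in the edit depth while the heuristic prunes substitution orders that cannot beat the current bound — a plausible reason to prefer it for one-shot, latency-sensitive payload generation.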

🛡️ Threat Analysis


Details

Domains
multimodal · vision · nlp
Model Types
vlm · llm · multimodal
Threat Tags
black_box · inference_time · targeted · digital
Datasets
custom mobile agent prompt-injection dataset
Applications
mobile agents · autonomous smartphone operation · android applications