attack 2026

AgentRAE: Remote Action Execution through Notification-based Visual Backdoors against Screenshots-based Mobile GUI Agents

Yutao Luo 1, Haotian Zhu 1, Shuchao Pang 1,2, Zhigang Lu 3, Tian Dong 4, Yongbin Zhou 1, Minhui Xue 5

0 citations

α

Published on arXiv

2603.23007

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves over 90% attack success rate across ten mobile operations while maintaining clean performance and evading eight state-of-the-art backdoor defenses

AgentRAE

Novel technique introduced


The rapid adoption of mobile graphical user interface (GUI) agents, which autonomously control applications and operating systems (OS), exposes new system-level attack surfaces. Existing backdoors against web GUI agents and general GenAI models rely on environmental injection or deceptive pop-ups to mislead the agent operation. However, these techniques do not work on screenshots-based mobile GUI agents due to the challenges of restricted trigger design spaces, OS background interference, and conflicts in multiple trigger-action mappings. We propose AgentRAE, a novel backdoor attack capable of inducing Remote Action Execution in mobile GUI agents using visually natural triggers (e.g., benign app icons in notifications). To address the underfitting caused by natural triggers and achieve accurate multi-target action redirection, we design a novel two-stage pipeline that first enhances the agent's sensitivity to subtle iconographic differences via contrastive learning, and then associates each trigger with a specific mobile GUI agent action through a backdoor post-training. Our extensive evaluation reveals that the proposed backdoor preserves clean performance with an attack success rate of over 90% across ten mobile operations. Furthermore, it is hard to visibly detect the benign-looking triggers and circumvents eight representative state-of-the-art defenses. These results expose an overlooked backdoor vector in mobile GUI agents, underscoring the need for defenses that scrutinize notification-conditioned behaviors and internal agent representations.


Key Contributions

  • Novel two-stage backdoor pipeline using contrastive learning to enhance sensitivity to subtle iconographic differences in notification triggers
  • Achieves accurate multi-target action redirection mapping natural triggers (benign app icons) to specific mobile GUI agent actions
  • Demonstrates 90%+ attack success rate across ten mobile operations while preserving clean performance and evading eight SOTA defenses

🛡️ Threat Analysis

Model Poisoning

Core contribution is a backdoor attack that embeds hidden, targeted malicious behavior in mobile GUI agents (MLLMs) that activates with specific visual triggers (notification icons). Uses a two-stage training pipeline: contrastive learning to enhance trigger sensitivity, then backdoor post-training to associate triggers with specific adversary-intended actions. The agent behaves normally without triggers but executes malicious actions when triggered.


Details

Domains
visionmultimodal
Model Types
vlmmultimodaltransformer
Threat Tags
training_timetargeteddigital
Applications
mobile gui agentsautonomous mobile assistantsmllm-based os control