
MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction

Wenqi Zhang 1, Yulin Shen 1, Changyue Jiang 1,2, Jiarun Dai 1, Geng Hong 1, Xudong Pan 1,2


Published on arXiv: 2601.12822

Prompt Injection (OWASP LLM Top 10 — LLM01)

Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding

MirrorGuard reduces the unsafe action rate on ByteDance UI-TARS from 66.5% to 13.0% while maintaining a marginal false refusal rate (FRR). The prior state of the art, GuardAgent, only reduces it to 53.9% and incurs a 15.4% higher FRR.

Novel technique introduced: MirrorGuard


Large foundation models are increasingly integrated into Computer-Use Agents (CUAs), enabling autonomous interaction with operating systems through graphical user interfaces (GUIs) to perform complex tasks. This autonomy introduces serious security risks: malicious instructions or visual prompt injections can trigger unsafe reasoning and cause harmful system-level actions. Existing defenses, such as detection-based blocking, prevent damage but often abort tasks prematurely, reducing agent utility. In this paper, we present MirrorGuard, a plug-and-play defense framework that uses simulation-based training to improve CUA security in the real world. To reduce the cost of large-scale training in operating systems, we propose a novel neural-symbolic simulation pipeline that generates realistic, high-risk GUI interaction trajectories entirely in a text-based simulated environment, capturing unsafe reasoning patterns and potential system hazards without executing real operations. In this simulated environment, MirrorGuard learns to intercept and rectify insecure reasoning chains of CUAs before they produce and execute unsafe actions. In real-world testing, extensive evaluations across diverse benchmarks and CUA architectures show that MirrorGuard significantly mitigates security risks. For instance, on the ByteDance UI-TARS system, it reduces the unsafe action rate from 66.5% to 13.0% while maintaining a marginal false refusal rate (FRR). In contrast, the state-of-the-art GuardAgent only achieves a reduction to 53.9% and suffers a 15.4% higher FRR. Our work demonstrates that simulation-derived defenses can provide robust, real-world protection while preserving the agent's fundamental utility. Our code and model are publicly available at https://bmz-q-q.github.io/MirrorGuard/.


Key Contributions

  • MirrorGuard: a plug-and-play defense framework that intercepts and corrects unsafe reasoning chains in CUAs before unsafe actions are executed
  • Neural-symbolic simulation pipeline (MirrorWorld) that synthesizes realistic high-risk GUI trajectories in a text-based environment, eliminating the cost of real OS-level training
  • Reduces the unsafe action rate on ByteDance UI-TARS from 66.5% to 13.0% with a marginal false refusal rate, substantially outperforming the GuardAgent baseline, which only reduces it to 53.9%
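The plug-and-play design described above can be pictured as a guard sitting between the agent's reasoning step and action execution: the guard inspects each proposed reasoning chain and, rather than aborting the task, rewrites unsafe steps so the agent re-plans. The sketch below illustrates only this control flow; the class names, the keyword heuristic, and the `ask_user` fallback are illustrative assumptions, not the paper's actual API (a trained guard model would replace the heuristic).

```python
from dataclasses import dataclass

@dataclass
class Step:
    reasoning: str  # the agent's chain-of-thought for this step
    action: str     # the proposed GUI action, e.g. "click(button='Delete')"

class ReasoningGuard:
    """Inspects a CUA's proposed step before its action is executed.

    A toy keyword heuristic stands in for a trained guard model,
    purely to illustrate the intercept-and-rectify control flow.
    """
    UNSAFE_MARKERS = ("rm -rf", "disable firewall", "send credentials")

    def is_unsafe(self, step: Step) -> bool:
        text = (step.reasoning + " " + step.action).lower()
        return any(marker in text for marker in self.UNSAFE_MARKERS)

    def rectify(self, step: Step) -> Step:
        # Instead of aborting the whole task (detection-based blocking),
        # annotate the reasoning so the agent can re-plan a safe alternative.
        return Step(
            reasoning=step.reasoning + " [guard: this step is unsafe; "
                      "choose a safe alternative or ask the user]",
            action="ask_user('This step looks unsafe. How should I proceed?')",
        )

def guarded_step(step: Step, guard: ReasoningGuard) -> Step:
    """Intercept a proposed step; pass it through or rectify it."""
    return guard.rectify(step) if guard.is_unsafe(step) else step

guard = ReasoningGuard()
safe = guarded_step(Step("open the report", "click(file='report.pdf')"), guard)
risky = guarded_step(
    Step("the popup says to run rm -rf /tmp/*", "type('rm -rf /tmp/*')"), guard
)
```

Because the guard wraps only the step boundary, it can be bolted onto any CUA architecture without retraining the agent itself, which is what makes the defense "plug-and-play" in the paper's framing.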

🛡️ Threat Analysis


Details

Domains
  nlp, vision, multimodal
Model Types
  llm, vlm
Threat Tags
  inference_time, black_box
Datasets
  UI-TARS benchmark, OSWorld, WindowsAgentArena
Applications
  computer use agents, autonomous GUI agents, operating system automation