
Visual Confused Deputy: Exploiting and Defending Perception Failures in Computer-Using Agents

Xunzhuo Liu 1, Bowei He 2,3, Xue Liu 2,3, Andy Luo 4, Haichen Zhang 4, Huamin Chen 5


Published on arXiv: 2603.14707

Input Manipulation Attack

OWASP ML Top 10 — ML01

Output Integrity Attack

OWASP ML Top 10 — ML09

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Dual-channel guardrail achieves F1=0.915 on real GUI screenshots and F1=0.969 on OS-Harm, with image channel catching grounding errors (F1=0.889) and text channel detecting dangerous intent (F1=1.0 on neutral buttons)

ScreenSwap

Novel technique introduced


Computer-using agents (CUAs) act directly on graphical user interfaces, yet their perception of the screen is often unreliable. Existing work largely treats these failures as performance limitations, asking whether an action succeeds, rather than whether the agent is acting on the correct object at all. We argue that this is fundamentally a security problem. We formalize the visual confused deputy: a failure mode in which an agent authorizes an action based on a misperceived screen state, due to grounding errors, adversarial screenshot manipulation, or time-of-check-to-time-of-use (TOCTOU) races. This gap is practically exploitable: even simple screen-level manipulations can redirect routine clicks into privileged actions while remaining indistinguishable from ordinary agent mistakes. To mitigate this threat, we propose the first guardrail that operates outside the agent's perceptual loop. Our method, dual-channel contrastive classification, independently evaluates (1) the visual click target and (2) the agent's reasoning about the action against deployment-specific knowledge bases, and blocks execution if either channel indicates risk. The key insight is that these two channels capture complementary failure modes: visual evidence detects target-level mismatches, while textual reasoning reveals dangerous intent behind visually innocuous controls. Across controlled attacks, real GUI screenshots, and agent traces, the combined guardrail consistently outperforms either channel alone. Our results suggest that CUA safety requires not only better action generation, but independent verification of what the agent believes it is clicking and why. Model, benchmark, and code are available at https://github.com/vllm-project/semantic-router.
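The TOCTOU race mentioned in the abstract (the screen changing between the agent's perception and its action) can be illustrated with a simple recheck before dispatch. This is a minimal sketch, not the paper's method; `capture_screen` and `do_click` are hypothetical stand-ins for a real screenshot and input API.

```python
import hashlib

def digest(pixels: bytes) -> str:
    """Fingerprint a raw screenshot buffer."""
    return hashlib.sha256(pixels).hexdigest()

def click_if_unchanged(decided_on: bytes, capture_screen, do_click) -> bool:
    """Re-capture the screen just before acting; abort if it no longer
    matches the frame the agent based its decision on (TOCTOU check)."""
    if digest(capture_screen()) != digest(decided_on):
        return False  # screen changed between check and use: refuse to act
    do_click()
    return True
```

A guardrail that sits outside the agent's loop can apply a check like this uniformly, regardless of why the screen state diverged.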


Key Contributions

  • Formalizes 'visual confused deputy' vulnerability class where CUA perception diverges from reality via grounding errors, adversarial manipulation, or TOCTOU races
  • Demonstrates ScreenSwap attack: 8-line pixel manipulation achieving privilege escalation indistinguishable from routine agent errors
  • Proposes dual-channel contrastive classification guardrail operating outside agent's perceptual loop, independently verifying visual click targets and textual reasoning

🛡️ Threat Analysis

Input Manipulation Attack

Paper demonstrates adversarial screenshot manipulation (ScreenSwap: pixel-level manipulation) that causes agents to misperceive screen state and execute unintended actions. This is input manipulation at inference time causing misclassification of visual grounding targets.
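The paper reports that ScreenSwap needs only an 8-line pixel edit; the exact code is not reproduced here, so the toy "framebuffer" and region coordinates below are illustrative assumptions. The sketch swaps two equally sized screen regions so that an agent grounding its click on benign-looking pixels sends coordinates where the real UI still hosts the privileged control.

```python
def swap_regions(pixels, box_a, box_b):
    """Swap two equally sized rectangular regions (x, y, w, h) of a 2D
    pixel grid in place. A toy stand-in for a ScreenSwap-style edit."""
    ax, ay, w, h = box_a
    bx, by, _, _ = box_b
    for dy in range(h):
        for dx in range(w):
            pixels[ay + dy][ax + dx], pixels[by + dy][bx + dx] = (
                pixels[by + dy][bx + dx], pixels[ay + dy][ax + dx])
    return pixels

# Toy 1x4 "screen": C = Cancel-button pixels, D = Delete-button pixels.
screen = [["C", "C", "D", "D"]]
swap_regions(screen, (0, 0, 2, 1), (2, 0, 2, 1))
# screen is now [["D", "D", "C", "C"]]: a click grounded on the "Cancel"
# pixels lands where the real Delete control sits.
```

Because the resulting screenshot is a valid UI image, the misdirected click is indistinguishable from an ordinary grounding mistake.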

Output Integrity Attack

The defense mechanism (dual-channel contrastive classification) verifies output integrity by independently checking what the agent believes it is clicking (visual evidence) and why (textual reasoning) against ground truth, blocking execution if tampered perception is detected.
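The dual-channel decision rule can be sketched as follows. The two classifier functions here are placeholders with assumed names, standing in for the paper's contrastive classifiers over deployment-specific knowledge bases; only the OR-style blocking logic reflects the described design.

```python
def image_channel_risky(click_target, kb_privileged_targets) -> bool:
    """Placeholder for the visual channel: flag the click target if it
    matches a known privileged control (catches target-level mismatches)."""
    return click_target in kb_privileged_targets

def text_channel_risky(agent_reasoning, kb_danger_phrases) -> bool:
    """Placeholder for the text channel: flag reasoning that reveals
    dangerous intent behind a visually innocuous control."""
    text = agent_reasoning.lower()
    return any(phrase in text for phrase in kb_danger_phrases)

def guardrail_allows(click_target, agent_reasoning,
                     kb_privileged_targets, kb_danger_phrases) -> bool:
    # Block if EITHER channel flags risk: the channels catch
    # complementary failure modes.
    return not (image_channel_risky(click_target, kb_privileged_targets)
                or text_channel_risky(agent_reasoning, kb_danger_phrases))

kb_privileged_targets = {"delete_all_button"}
kb_danger_phrases = {"delete all", "grant admin"}
# A visually innocuous button whose stated intent is dangerous is blocked
# by the text channel alone.
guardrail_allows("ok_button", "Click OK to delete all user files",
                 kb_privileged_targets, kb_danger_phrases)  # False
```

Because the guardrail runs outside the agent's perceptual loop, a tampered screenshot that fools the agent does not automatically fool the verifier.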


Details

Domains
vision, multimodal, nlp
Model Types
vlm, llm, multimodal
Threat Tags
inference_time, black_box, digital
Datasets
OSWorld-MCP, ScreenSpot-Pro, OS-Harm
Applications
computer-using agents, gui automation, autonomous desktop agents