
Preference Redirection via Attention Concentration: An Attack on Computer Use Agents

Dominik Seip, Matthias Hein



Published on arXiv: 2604.08005

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

Successfully redirects a CUA's product selection toward attacker-chosen targets by concentrating the vision model's attention on an imperceptible adversarial patch (L-infinity norm < 8/255); the attack also transfers to fine-tuned variants of the model.

PRAC (novel technique introduced)


Advancements in multimodal foundation models have enabled the development of Computer Use Agents (CUAs) capable of autonomously interacting with GUI environments. Because CUAs are not restricted to specific tools, they can automate more complex agentic tasks, but at the same time they open up new security vulnerabilities. While prior work has concentrated on the language modality, the vulnerability of the vision modality has received less attention. In this paper, we introduce PRAC, a novel attack that, unlike prior work targeting the VLM output directly, manipulates the model's internal preferences by redirecting its attention toward a stealthy adversarial patch. We show that PRAC can manipulate the selection process of a CUA on an online shopping platform toward a chosen target product. While we require white-box access to the model to craft the attack, we show that it generalizes to fine-tuned versions of the same model, presenting a critical threat as multiple companies build specialized CUAs on top of open-weight models.


Key Contributions

  • Novel attention manipulation attack (PRAC) that concentrates VLM attention on adversarial patches to redirect agent preferences
  • Demonstrates successful manipulation of computer use agents in realistic web browsing scenarios with stealthy perturbations
  • Shows attack transferability to fine-tuned versions of the target model, threatening commercial CUA deployments

🛡️ Threat Analysis

Prompt Injection

The attack targets the safety and decision-making of LLM-based agents (computer use agents), manipulating their behavior to select attacker-chosen products. This is a goal-hijacking attack on agentic systems, which falls under LLM01's scope of prompt injection and agent manipulation.

Input Manipulation Attack

An adversarial perturbation attack on visual inputs to VLMs that induces incorrect behavior at inference time. The attack crafts stealthy perturbations (L-infinity norm < 8/255) that steer the model's attention and, through it, its decision-making.
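To make the mechanics concrete, the sketch below shows an L-infinity-bounded PGD loop that increases the attention mass falling on a patch region. The "attention map" here is a deliberately toy stand-in (a softmax over linear per-pixel scores with a hypothetical weight vector `W`); the real attack would backpropagate through the target VLM's own attention layers under white-box access, and the paper's exact loss may differ. Function names and the analytic gradient are ours, not the paper's.

```python
import numpy as np

def attention_mass_and_grad(image, patch_mask, W):
    """Toy stand-in for a VLM attention map: softmax over linear per-pixel
    scores image @ W. Returns the attention mass on the patch region and
    its gradient w.r.t. the image (closed form for this toy model)."""
    scores = image @ W                                 # (H, W) per-pixel score
    p = np.exp(scores - scores.max())
    p /= p.sum()                                       # softmax "attention"
    mass = p[patch_mask].sum()                         # mass on the patch
    # d(mass)/d(score_j) = p_j * (1[j in patch] - mass), then chain rule to pixels
    dscore = p * (patch_mask.astype(float) - mass)
    grad = dscore[..., None] * W                       # (H, W, C)
    return mass, grad

def pgd_concentrate(image, patch_mask, W, eps=8/255, alpha=1/255, steps=50):
    """Signed gradient *ascent* on attention mass, projected back onto the
    L-infinity ball of radius eps around the clean image -- a sketch of the
    attention-concentration objective, not the paper's implementation."""
    adv = image.copy()
    for _ in range(steps):
        _, grad = attention_mass_and_grad(adv, patch_mask, W)
        adv = adv + alpha * np.sign(grad)              # ascent step
        adv = image + np.clip(adv - image, -eps, eps)  # project to L-inf ball
        adv = np.clip(adv, 0.0, 1.0)                   # keep valid pixel range
    return adv
```

Usage on a random 8x8 "screenshot": after the loop, the attention mass on the patch strictly grows while every pixel stays within 8/255 of the original, mirroring the stealthiness constraint reported above.

```python
rng = np.random.default_rng(0)
img = rng.uniform(0.3, 0.7, size=(8, 8, 3))
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True                    # patch region (hypothetical placement)
W = rng.normal(size=3)
before, _ = attention_mass_and_grad(img, mask, W)
adv = pgd_concentrate(img, mask, W)
after, _ = attention_mass_and_grad(adv, mask, W)     # after > before
```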


Details

Domains
multimodal, vision
Model Types
vlm, multimodal, transformer
Threat Tags
white_box, inference_time, targeted, digital
Applications
computer use agents, autonomous GUI navigation, online shopping agents, web browsing agents