
Beyond the Patch: Exploring Vulnerabilities of Visuomotor Policies via Viewpoint-Consistent 3D Adversarial Object

Chanmi Lee, Minsung Yoon, Woojae Kim, Sebin Lee, Sung-eui Yoon


Published on arXiv: 2603.04913

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The proposed 3D adversarial texture method outperforms conventional 2D adversarial patches under dynamic viewpoints, demonstrates black-box transferability, and is validated in real-world robotic manipulation settings.

Viewpoint-Consistent 3D Adversarial Texture Optimization

Novel technique introduced


Neural network-based visuomotor policies enable robots to perform manipulation tasks but remain susceptible to perceptual attacks. For example, conventional 2D adversarial patches are effective under fixed-camera setups, where appearance is relatively consistent; however, their efficacy often diminishes under dynamic viewpoints from moving cameras, such as wrist-mounted setups, due to perspective distortions. To proactively investigate potential vulnerabilities beyond 2D patches, this work proposes a viewpoint-consistent adversarial texture optimization method for 3D objects through differentiable rendering. As optimization strategies, we employ Expectation over Transformation (EOT) with a Coarse-to-Fine (C2F) curriculum, exploiting distance-dependent frequency characteristics to induce textures effective across varying camera-object distances. We further integrate saliency-guided perturbations to redirect policy attention and design a targeted loss that persistently drives robots toward adversarial objects. Our comprehensive experiments show that the proposed method is effective under various environmental conditions, while confirming its black-box transferability and real-world applicability.
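The Expectation over Transformation (EOT) strategy mentioned above can be illustrated with a minimal sketch: instead of optimizing the texture for a single view, each gradient step averages over randomly sampled viewpoint transforms, so the result stays effective as the camera moves. Everything here is a stand-in, not the paper's implementation: `render` is a toy view-dependent transform (a distance-like scale), `policy_score` is a hypothetical proxy for the policy's attraction to the adversarial target, and the gradient is computed analytically for this toy objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(texture, scale):
    """Stand-in 'differentiable renderer': view-dependent scaling of the texture."""
    return scale * texture

def policy_score(image, target):
    """Hypothetical proxy for the policy's affinity to the adversarial target
    (higher is better for the attacker)."""
    return -np.sum((image - target) ** 2)

def eot_gradient(texture, target, n_views=8):
    """EOT: average the gradient of the target score over randomly sampled
    viewpoint transforms (here, a scale standing in for camera-object distance)."""
    grads = np.zeros_like(texture)
    for _ in range(n_views):
        scale = rng.uniform(0.5, 1.5)
        image = render(texture, scale)
        # d/d_texture of -(scale*t - target)^2 = -2*scale*(scale*t - target)
        grads += -2.0 * scale * (image - target)
    return grads / n_views

# Gradient ascent: push the texture toward scoring well across sampled views.
texture = rng.normal(size=(4, 4))
target = np.ones((4, 4))
for step in range(200):
    texture += 0.05 * eot_gradient(texture, target)
```

Because the expectation is taken over the transform distribution, the optimized texture trades peak effectiveness at any single viewpoint for robustness across the whole range, which is the property the paper needs for moving, wrist-mounted cameras.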


Key Contributions

  • Viewpoint-consistent 3D adversarial texture optimization via differentiable rendering that remains effective under dynamic camera viewpoints (e.g., wrist-mounted setups)
  • Coarse-to-Fine (C2F) curriculum combined with Expectation over Transformation (EOT) that exploits distance-dependent frequency characteristics for multi-distance robustness
  • Saliency-guided perturbations that redirect policy attention toward adversarial objects, with a targeted loss persistently driving robots to the adversarial target
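The Coarse-to-Fine curriculum from the second contribution can be sketched as follows: optimize a low-resolution texture first (capturing coarse, low-frequency structure that survives distant viewing), then upsample and refine at higher resolutions. The resolution schedule, block-averaged targets, and quadratic toy objective are all assumptions for illustration, not the paper's actual losses.

```python
import numpy as np

def upsample(tex, factor=2):
    """Nearest-neighbour upsampling: each coarse texel becomes a factor x factor block."""
    return np.kron(tex, np.ones((factor, factor)))

def refine(tex, target, steps=100, lr=0.1):
    """Gradient descent on a toy objective ||tex - target||^2 at the current resolution."""
    for _ in range(steps):
        tex = tex - lr * 2.0 * (tex - target)
    return tex

# C2F curriculum (assumed schedule): 2x2 -> 4x4 -> 8x8.
rng = np.random.default_rng(0)
target8 = rng.uniform(size=(8, 8))   # stand-in for the full-resolution adversarial objective
tex = np.zeros((2, 2))
for res in (2, 4, 8):
    # Match the objective to the current resolution by block-averaging,
    # so low-frequency content is fitted before high-frequency detail.
    k = 8 // res
    coarse_target = target8.reshape(res, k, res, k).mean(axis=(1, 3))
    tex = refine(tex, coarse_target)
    if res < 8:
        tex = upsample(tex)
```

The design rationale mirrors the paper's distance-dependent frequency argument: distant views only resolve low frequencies, so fitting those first yields a texture that degrades gracefully as camera-object distance grows.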

🛡️ Threat Analysis

Input Manipulation Attack

Crafts adversarial inputs (3D object textures) at inference time that cause the visuomotor policy to misinterpret its observations and execute incorrect robot behavior: a physical adversarial attack with a novel gradient-based optimization pipeline (EOT with a C2F curriculum over differentiable rendering) that maintains effectiveness across viewpoints and distances.
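The inference-time, targeted character of this attack class can be made concrete with a small sketch: perturb an observation within a fixed L-infinity budget so a policy's output moves toward an attacker-chosen action. The linear `policy`, the FGSM-style iterative optimizer, and the target action are all hypothetical stand-ins; the paper attacks the object's texture through a renderer rather than the image directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in linear 'policy': maps a flattened 16-pixel image to a 2-D action.
W = rng.normal(size=(2, 16))
def policy(x):
    return W @ x

def targeted_perturb(x, target_action, eps=0.1, steps=20):
    """Iterative targeted attack (FGSM-style sign steps, an assumption):
    nudge the input so the policy output approaches `target_action`,
    clipping the perturbation to an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        err = policy(x_adv) - target_action
        grad = W.T @ (2.0 * err)          # gradient of ||policy(x) - target||^2 wrt x
        x_adv -= (eps / steps) * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x = rng.normal(size=16)
target_action = np.array([1.0, -1.0])     # e.g. 'move toward the adversarial object'
x_adv = targeted_perturb(x, target_action)
```

Note the two defining properties of the threat class: the model is untouched (only the input changes), and the objective is targeted (a specific attacker-chosen action, matching the paper's loss that persistently drives the robot toward the adversarial object).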


Details

Domains
vision, reinforcement-learning
Model Types
cnn, transformer
Threat Tags
white_box, black_box, inference_time, targeted, physical
Applications
robotic manipulation, visuomotor policies