
UNDREAM: Bridging Differentiable Rendering and Photorealistic Simulation for End-to-end Adversarial Attacks

Mansi Phute , Matthew Hull , Haoran Wang , Alec Helbling , ShengYun Peng , Willian Lunardi , Martin Andreoni , Wenke Lee , Duen Horng Chau



Published on arXiv · 2510.16923

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

End-to-end optimization inside a photorealistic simulation produces significantly more effective physical adversarial attacks than the existing pipeline of optimizing outside the simulation and reinserting the result, which degrades sharply because lighting and material interactions go unmodeled.

UNDREAM

Novel technique introduced


Deep learning models deployed in safety-critical applications like autonomous driving use simulations to test their robustness against adversarial attacks in realistic conditions. However, these simulations are non-differentiable, forcing researchers to create attacks that do not integrate simulation environmental factors, reducing attack success. To address this limitation, we introduce UNDREAM, the first software framework that bridges the gap between photorealistic simulators and differentiable renderers to enable end-to-end optimization of adversarial perturbations on arbitrary 3D objects. UNDREAM enables manipulation of the environment by offering complete control over weather, lighting, backgrounds, camera angles, trajectories, and realistic human and object movements, thereby allowing the creation of diverse scenes. We showcase a wide array of distinct, physically plausible adversarial objects that UNDREAM enables researchers to swiftly explore in different configurable environments. This combination of photorealistic simulation and differentiable optimization opens new avenues for advancing research on physical adversarial attacks.


Key Contributions

  • UNDREAM: first software framework bridging photorealistic simulators (Unreal Engine) with differentiable renderers for end-to-end adversarial texture optimization on arbitrarily shaped 3D objects
  • Automatic 3D transformation pipeline that embeds adversarial textures natively in simulation, preserving lighting, perspective, and material interactions during gradient optimization
  • Open-source implementation enabling researchers to optimize physical adversarial attacks across configurable environments (weather, lighting, camera angle, trajectories) with minimal code changes
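The end-to-end idea above can be illustrated with a toy sketch. This is NOT UNDREAM's API: the "renderer" below is a hypothetical linear scene transform and the "detector" a logistic scorer, standing in for the differentiable renderer and the target model. Gradients flow from the detector's confidence back through the render step into the texture, so scene-dependent lighting/material interaction (here, `W_scene`) is accounted for during optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not UNDREAM's components): a "renderer" that
# linearly mixes the texture with a scene interaction matrix, and a logistic
# "detector" (white-box access to its weights).
W_scene = rng.normal(size=(8, 8))      # fixed lighting/material interaction
w_det = rng.normal(size=8)             # detector weights

def render(texture, scene):
    """Differentiable rendering sketch: scene transform applied to texture."""
    return scene @ texture

def detect(pixels):
    """Detector confidence that the target object is present."""
    return 1.0 / (1.0 + np.exp(-w_det @ pixels))

texture = np.zeros(8)                  # adversarial texture, optimized end-to-end
lr = 0.5
for _ in range(200):
    pixels = render(texture, W_scene)
    p = detect(pixels)
    # Chain rule through the renderer: dp/dtexture = p(1-p) * W_scene^T w_det
    grad = p * (1 - p) * (W_scene.T @ w_det)
    texture -= lr * grad               # descend: suppress detection

print(detect(render(texture, W_scene)))   # confidence driven toward 0
```

Optimizing outside the simulation would amount to dropping `W_scene` from the gradient, which is exactly the mismatch the paper identifies: the reinserted texture no longer matches the scene's lighting and material response.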

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes adversarial patch/texture optimization on 3D objects to cause object-detection misclassification at inference time. It uses gradient-based (differentiable) optimization and produces physically plausible adversarial objects in realistic simulation environments — a direct physical adversarial example attack framework.
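Because the framework exposes configurable weather, lighting, and camera settings, a natural optimization target is robustness of the texture across environment variations. A standard way to express this is expectation over transformations (EOT) — averaging gradients over sampled scene conditions. The sketch below uses the same hypothetical linear renderer/logistic detector stand-ins as above (not UNDREAM's actual procedure); `random_scene` is an assumed sampler that jitters a shared scene matrix to mimic lighting/weather variation.

```python
import numpy as np

rng = np.random.default_rng(1)
w_det = rng.normal(size=8)                 # white-box detector weights
W_base = rng.normal(size=(8, 8))           # shared geometry/material structure

def random_scene():
    """Hypothetical environment sampler: base scene + lighting/weather jitter."""
    return W_base + 0.3 * rng.normal(size=(8, 8))

def detect(pixels):
    return 1.0 / (1.0 + np.exp(-w_det @ pixels))

texture = np.zeros(8)
lr = 0.1
for _ in range(300):
    grad = np.zeros(8)
    for _ in range(4):                     # expectation over transformations
        W = random_scene()
        p = detect(W @ texture)
        grad += p * (1 - p) * (W.T @ w_det)
    texture -= lr * grad / 4               # average gradient over scenes

# Evaluate on freshly sampled scenes the optimizer never saw
confs = [detect(random_scene() @ texture) for _ in range(100)]
print(np.mean(confs))                      # low on average across environments
```

The design choice mirrors the paper's motivation: a texture tuned for one fixed scene overfits to that scene's lighting, while averaging over sampled environments yields a perturbation that survives the variation.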


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, physical, digital, inference_time
Applications
object detection, autonomous driving