
Guided Diffusion-based Generation of Adversarial Objects for Real-World Monocular Depth Estimation Attacks

Yongtao Chen, Yanbo Wang, Wentao Zhao, Guole Shen, Tianchen Deng, Jingchuan Wang

1 citation · 54 references · arXiv


Published on arXiv

arXiv:2512.24111

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Significantly outperforms existing patch-based physical attacks on MDE in effectiveness, stealthiness, and physical deployability across both digital and real-world experiments

JVP-guided Diffusion Adversarial Generation

Novel technique introduced


Monocular Depth Estimation (MDE) serves as a core perception module in autonomous driving systems, but it remains highly susceptible to adversarial attacks. Errors in depth estimation may propagate through downstream decision making and influence overall traffic safety. Existing physical attacks primarily rely on texture-based patches, which impose strict placement constraints and exhibit limited realism, thereby reducing their effectiveness in complex driving environments. To overcome these limitations, this work introduces a training-free generative adversarial attack framework that generates naturalistic, scene-consistent adversarial objects via a diffusion-based conditional generation process. The framework incorporates a Salient Region Selection module that identifies regions most influential to MDE and a Jacobian Vector Product Guidance mechanism that steers adversarial gradients toward update directions supported by the pre-trained diffusion model. This formulation enables the generation of physically plausible adversarial objects capable of inducing substantial adversarial depth shifts. Extensive digital and physical experiments demonstrate that our method significantly outperforms existing attacks in effectiveness, stealthiness, and physical deployability, underscoring its strong practical implications for autonomous driving safety assessment.


Key Contributions

  • Training-free generative adversarial attack framework using diffusion-based conditional generation to produce naturalistic, scene-consistent adversarial objects without patch constraints
  • Jacobian Vector Product Guidance mechanism that steers diffusion model sampling toward adversarial gradient directions supported by the pre-trained model
  • Salient Region Selection module that identifies regions most influential to MDE depth predictions, focusing adversarial energy where it matters most
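
The JVP guidance idea can be illustrated in a few lines: instead of applying the raw adversarial gradient to the latent, the update is passed through a Jacobian-vector product of the denoiser, so that the steering direction is one the pre-trained diffusion model can actually express. The sketch below is illustrative only, with a toy denoiser and a toy adversarial objective standing in for the paper's MDE depth-shift loss; the actual formulation may differ.

```python
# Minimal sketch of JVP-guided adversarial steering during diffusion
# sampling. `denoiser` and `adv_loss` are toy stand-ins (assumptions),
# not the paper's models.
import jax
import jax.numpy as jnp

def denoiser(x, t):
    # Toy stand-in for a pre-trained diffusion denoiser eps_theta(x, t).
    return jnp.tanh(x) * (1.0 - t)

def adv_loss(x):
    # Toy stand-in for an adversarial depth-shift objective on the MDE model.
    return jnp.sum(x ** 2)

def jvp_guided_step(x, t, step_size=0.1):
    # Raw adversarial gradient direction on the current latent.
    g = jax.grad(adv_loss)(x)
    # Jacobian-vector product of the denoiser along g: the adversarial
    # update as "seen" through the pre-trained model's local geometry.
    _, jg = jax.jvp(lambda z: denoiser(z, t), (x,), (g,))
    # Steer the latent along the model-supported direction.
    return x - step_size * jg

x = jnp.ones(4)
x_next = jvp_guided_step(x, 0.5)
print(x_next.shape)  # (4,)
```

The key design point this sketch captures is that the adversarial gradient never updates the latent directly: it is first mapped through the denoiser's Jacobian, which filters out directions the generative model cannot realize as naturalistic image content.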

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a novel evasion/adversarial attack framework — using diffusion-guided generation steered by JVP gradients — that produces naturalistic physical adversarial objects causing incorrect predictions from MDE models at inference time; both digital and physical adversarial examples are demonstrated.


Details

Domains
vision
Model Types
diffusion · cnn · transformer
Threat Tags
white_box · inference_time · targeted · digital · physical
Datasets
KITTI
Applications
monocular depth estimation · autonomous driving perception