Published on arXiv

2508.05414

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Proposed framework achieves an average increase in attack success rate of 13.46% across distances and 11.03% across angles over state-of-the-art physical adversarial camouflage methods.


The advancement of deep object detectors has greatly affected safety-critical fields like autonomous driving. However, physical adversarial camouflage poses a significant security risk by altering object textures to deceive detectors. Existing techniques struggle with variable physical environments, facing two main challenges: 1) inconsistent sampling point densities across distances prevent gradient optimization from ensuring local continuity, and 2) updating texture gradients from multiple angles causes conflicts, reducing optimization stability and attack effectiveness. To address these issues, we propose a novel adversarial camouflage framework based on gradient optimization. First, we introduce a gradient calibration strategy, which ensures consistent gradient updates across distances by propagating gradients from sparsely sampled points to unsampled texture points. Additionally, we develop a gradient decorrelation method, which prioritizes and orthogonalizes gradients based on loss values, enhancing stability and effectiveness in multi-angle optimization by eliminating redundant or conflicting updates. Extensive experimental results on various detection models, angles and distances show that our method significantly exceeds the state of the art, with an average increase in attack success rate (ASR) of 13.46% across distances and 11.03% across angles. Furthermore, empirical evaluation in real-world scenarios highlights the need for more robust system design.
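The gradient calibration idea can be illustrated with a small sketch. The paper only describes the strategy at a high level, so the function below is a hypothetical implementation: gradients that the renderer produced for sampled texture points are propagated to unsampled points via inverse-distance weighting over their nearest sampled neighbors, so that texture regions that received no samples at a given distance still get a locally continuous update. The function name, signature, and the choice of inverse-distance weighting are assumptions, not the authors' exact method.

```python
import numpy as np

def calibrate_gradients(grad, sampled_mask, coords, k=4):
    """Propagate gradients from sampled to unsampled texture points.

    grad         : (N, C) per-point texture gradient (zeros where unsampled)
    sampled_mask : (N,) bool, True where rendering produced a gradient
    coords       : (N, 2) texture-space coordinates of each point
    k            : number of nearest sampled neighbors to interpolate from

    Unsampled points receive an inverse-distance-weighted average of the
    gradients at their k nearest sampled neighbors (an assumed scheme).
    """
    sampled_idx = np.where(sampled_mask)[0]
    out = grad.astype(float).copy()
    for i in np.where(~sampled_mask)[0]:
        # distances from the unsampled point to every sampled point
        d = np.linalg.norm(coords[sampled_idx] - coords[i], axis=1)
        nn = sampled_idx[np.argsort(d)[:k]]
        w = 1.0 / (np.linalg.norm(coords[nn] - coords[i], axis=1) + 1e-8)
        out[i] = (w[:, None] * grad[nn]).sum(axis=0) / w.sum()
    return out
```

For example, an unsampled point midway between two sampled points with gradients 1 and 3 would receive the interpolated gradient 2, keeping the texture update locally continuous.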


Key Contributions

  • Gradient calibration strategy that propagates gradients from sparsely sampled to unsampled texture points, ensuring consistent updates across varying distances in physical environments.
  • Gradient decorrelation method that prioritizes and orthogonalizes gradients based on loss values to eliminate redundant or conflicting updates during multi-angle optimization.
  • Empirical demonstration of 13.46% and 11.03% average improvement in attack success rate across distances and angles respectively over state-of-the-art physical adversarial camouflage methods.
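The second contribution, gradient decorrelation, can be sketched as follows. This is a minimal illustration under assumptions, since the paper's exact formulation is not given here: per-angle gradients are ranked by their loss values, and each lower-priority gradient has its component along every higher-priority direction removed (a Gram-Schmidt-style projection) before the updates are summed, so conflicting directions cannot cancel the dominant update. All names and the specific projection scheme are hypothetical.

```python
import numpy as np

def decorrelate_gradients(grads, losses):
    """Combine per-angle gradients, removing redundant/conflicting components.

    grads  : (V, D) one flattened texture gradient per viewing angle
    losses : (V,) loss per angle; higher loss = higher priority (assumed)

    Angles are processed in descending loss order. Each gradient is
    orthogonalized against the directions already kept, so only its novel
    component contributes to the combined update.
    """
    order = np.argsort(-np.asarray(losses, dtype=float))
    basis = []                              # unit vectors of kept directions
    total = np.zeros(grads.shape[1])
    for v in order:
        g = grads[v].astype(float).copy()
        for b in basis:
            g -= (g @ b) * b                # drop already-covered component
        total += g
        n = np.linalg.norm(g)
        if n > 1e-12:
            basis.append(g / n)
    return total
```

With two angles whose gradients point in partially opposing directions, the lower-loss gradient's conflicting component is projected out, leaving the high-loss direction intact in the combined update.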

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a gradient-based physical adversarial attack that crafts object texture camouflage to cause misclassification/evasion in object detectors at inference time — a canonical adversarial patch/camouflage attack with physical deployment.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, physical, inference_time, targeted
Applications
object detection, autonomous driving