
Universal Camouflage Attack on Vision-Language Models for Autonomous Driving

Dehong Kong 1, Sifan Yu 1, Siyuan Liang 2, Jiawei Liang 1, Jianhou Gan 3, Aishan Liu 4, Wenqi Ren 1



Published on arXiv · arXiv:2509.20196

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

UCA improves attack success by 30% in 3-P metrics over state-of-the-art attacks and demonstrates strong physical robustness under diverse viewpoints and dynamic driving conditions across multiple VLM-AD architectures.

Universal Camouflage Attack (UCA) with Feature Divergence Loss (FDL)

Novel technique introduced


Vision-language models for autonomous driving (VLM-AD) are emerging as a promising research direction, with substantial improvements in multimodal reasoning capabilities. Despite these advanced reasoning abilities, VLM-AD remains vulnerable to serious security threats from adversarial attacks, which mislead model decisions through carefully crafted perturbations. Existing attacks face two obvious challenges: 1) physical adversarial attacks primarily target vision modules and are difficult to transfer directly to VLM-AD systems because they attack low-level perceptual components; 2) adversarial attacks against VLM-AD have largely remained at the digital level and lack physical realizability. To address these challenges, we propose the first Universal Camouflage Attack (UCA) framework for VLM-AD. Unlike previous methods that optimize at the logit layer, UCA operates in the feature space to generate physically realizable camouflage textures that generalize strongly across different user commands and model architectures. Motivated by the observed vulnerability of the encoder and projection layers in VLM-AD, UCA introduces a feature divergence loss (FDL) that maximizes the representational discrepancy between clean and adversarial images. In addition, UCA incorporates a multi-scale learning strategy and adjusts the sampling ratio to enhance its adaptability to changes in scale and viewpoint in real-world scenarios, thereby improving training stability. Extensive experiments demonstrate that UCA induces incorrect driving commands across various VLM-AD models and driving scenarios, significantly surpassing existing state-of-the-art attack methods (a 30% improvement in 3-P metrics). Furthermore, UCA remains robust under diverse viewpoints and dynamic conditions, indicating high potential for practical deployment.


Key Contributions

  • Universal Camouflage Attack (UCA) framework targeting VLM-AD systems via feature-space optimization rather than logit-layer optimization, enabling stronger physical transferability.
  • Feature Divergence Loss (FDL) that maximizes the representational discrepancy between clean and adversarial images in encoder and projection layers of VLMs.
  • Multi-scale learning strategy with adaptive sampling ratio to improve robustness to viewpoint and scale changes in real-world physical deployment.
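The multi-scale contribution can be pictured as an expectation-over-transformations-style sampler: each optimization step renders the camouflage at a randomly drawn scale and viewpoint so the texture stays adversarial when seen from different distances and angles. The paper only states that the sampling ratio is adjusted, so the scales, weights, and angle range below are illustrative assumptions, not the authors' actual schedule:

```python
import random

def sample_view(scales=(0.25, 0.5, 1.0), scale_weights=(0.2, 0.3, 0.5)):
    """Hypothetical EOT-style view sampler for multi-scale training.

    Draws a rendering scale (weighted toward full resolution here,
    an assumed ratio) and a yaw angle, so each optimization step
    sees the camouflage under a different simulated viewpoint.
    """
    scale = random.choices(scales, weights=scale_weights, k=1)[0]
    yaw = random.uniform(-45.0, 45.0)  # assumed viewpoint range in degrees
    return scale, yaw
```

In a training loop, the sampled `(scale, yaw)` pair would parameterize how the texture is warped onto the vehicle before being fed to the victim model, averaging the attack gradient over viewpoints.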

🛡️ Threat Analysis

Input Manipulation Attack

The paper crafts physically realizable adversarial camouflage textures (perturbations on visual inputs) that cause VLM-AD models to produce incorrect driving commands at inference time. The Feature Divergence Loss (FDL) maximizes representational discrepancy between clean and adversarial images in the model's feature space — a novel gradient-based adversarial attack technique operating on visual inputs.
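A minimal sketch of what such a feature-divergence objective could look like, assuming the attacker has white-box access to intermediate (encoder/projection) features and measures divergence as negative cosine similarity; the exact distance used by the paper is not specified here, so this choice is an assumption:

```python
import numpy as np

def feature_divergence_loss(clean_feats, adv_feats):
    """Sketch of an FDL-style objective (cosine form is an assumption).

    Sums per-layer cosine similarities between clean and adversarial
    feature maps; *minimizing* this pushes adversarial features away
    from the clean representations, i.e. maximizes divergence.
    """
    total = 0.0
    for fc, fa in zip(clean_feats, adv_feats):
        fc = fc.reshape(fc.shape[0], -1)  # flatten per-sample features
        fa = fa.reshape(fa.shape[0], -1)
        num = np.sum(fc * fa, axis=1)
        den = np.linalg.norm(fc, axis=1) * np.linalg.norm(fa, axis=1) + 1e-8
        total += float(np.mean(num / den))
    return total
```

Gradient descent on this loss with respect to the camouflage texture (while keeping it within a printable, physically realizable range) is the kind of update loop the attack implies.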


Details

Domains
vision, multimodal
Model Types
vlm, transformer, multimodal
Threat Tags
white_box, physical, inference_time, untargeted
Applications
autonomous driving, vision-language models, multimodal driving command generation