
NutVLM: A Self-Adaptive Defense Framework against Full-Dimension Attacks for Vision Language Models in Autonomous Driving

Xiaoxu Peng 1, Dong Zhou 1, Jianwen Zhang 1, Guanghui Sun 1, Anh Tu Ngo 2, Anupam Chattopadhyay 2

0 citations · 62 references · arXiv (Cornell University)

Published on arXiv · 2602.13293

Input Manipulation Attack (OWASP ML Top 10, ML01)

Prompt Injection (OWASP LLM Top 10, LLM01)

Key Finding

NutVLM achieves a 4.89% improvement in Accuracy, Language Score, and GPT Score on the Dolphins benchmark under combined local-patch and global-perturbation adversarial attacks

NutVLM

Novel technique introduced


Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats ranging from localized physical patches to imperceptible global perturbations. Existing defense methods for VLMs remain limited and often fail to reconcile robustness with clean-sample performance. To bridge these gaps, we propose NutVLM, a comprehensive self-adaptive defense framework designed to secure the entire perception-decision lifecycle. Specifically, we first employ NutNet++ as a sentinel, a unified detection-purification mechanism that identifies benign samples, local patches, and global perturbations through three-way classification. Localized threats are then purified via efficient grayscale masking, while global perturbations trigger Expert-guided Adversarial Prompt Tuning (EAPT). Instead of the costly parameter updates of full-model fine-tuning, EAPT generates "corrective driving prompts" via gradient-based latent optimization and discrete projection, refocusing the VLM's attention without exhaustive retraining. Evaluated on the Dolphins benchmark, NutVLM yields a 4.89% improvement in overall metrics (e.g., Accuracy, Language Score, and GPT Score). These results validate NutVLM as a scalable security solution for intelligent transportation. Our code is available at https://github.com/PXX/NutVLM.


Key Contributions

  • NutNet++: a three-way sentinel classifying inputs as benign, local patch attack, or global perturbation, enabling targeted defenses per attack type
  • EAPT (Expert-guided Adversarial Prompt Tuning): gradient-based latent optimization that generates corrective driving prompts to refocus VLM attention without full model retraining
  • End-to-end NutVLM framework achieving 4.89% overall improvement on the Dolphins autonomous driving benchmark against full-dimension adversarial threats
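The adaptive routing these contributions describe, a three-way verdict dispatching to grayscale masking, prompt tuning, or pass-through, can be sketched as a simple dispatcher. All names, scores, and thresholds below are hypothetical stand-ins; the paper's actual NutNet++ classifier and purifiers are not reproduced here:

```python
from enum import Enum

class Verdict(Enum):
    BENIGN = 0
    LOCAL_PATCH = 1
    GLOBAL_PERTURBATION = 2

def classify(image):
    # Stand-in for NutNet++'s three-way sentinel; a real detector
    # would score patch-likeness and perturbation statistics from pixels.
    if image.get("patch_score", 0.0) > 0.5:
        return Verdict.LOCAL_PATCH
    if image.get("noise_score", 0.0) > 0.5:
        return Verdict.GLOBAL_PERTURBATION
    return Verdict.BENIGN

def mask_patch(image):
    # Placeholder for grayscale-masking the detected patch region.
    purified = dict(image)
    purified["masked"] = True
    return purified

def defend(image):
    """Route an input through a NutVLM-style defense pipeline:
    purify local patches, attach a corrective prompt for global
    perturbations, and pass benign samples through untouched."""
    verdict = classify(image)
    if verdict is Verdict.LOCAL_PATCH:
        return mask_patch(image), None
    if verdict is Verdict.GLOBAL_PERTURBATION:
        return image, "corrective_driving_prompt"  # supplied by EAPT
    return image, None
```

The key design point is that only the branch matching the detected attack type pays any defense cost, which is how the framework preserves clean-sample performance.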

🛡️ Threat Analysis

Input Manipulation Attack

Defense against adversarial visual inputs to VLMs at inference time — both local adversarial patches and global imperceptible perturbations — via detection (NutNet++) and purification (grayscale masking, EAPT).
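For the global-perturbation branch, the abstract describes EAPT as gradient-based latent optimization followed by discrete projection onto tokens. A minimal sketch of that two-stage idea, using a toy quadratic loss in place of the paper's expert-guided objective (all function names and numerics here are illustrative assumptions, not the published method):

```python
def nearest_token(vec, vocab_emb):
    """Discrete projection: index of the vocab embedding closest to vec."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocab_emb)), key=lambda i: sqdist(vec, vocab_emb[i]))

def eapt_sketch(vocab_emb, targets, steps=200, lr=0.1):
    """Stage 1: optimize continuous prompt vectors by gradient descent
    on a surrogate loss ||p - t||^2 (a toy stand-in for the expert-guided
    objective). Stage 2: project each optimized vector to its nearest
    discrete token embedding so the result is a usable text prompt."""
    dim = len(vocab_emb[0])
    latent = [[0.0] * dim for _ in targets]  # continuous prompt init
    for _ in range(steps):
        for p, tgt in zip(latent, targets):
            for j in range(dim):
                grad = 2.0 * (p[j] - tgt[j])  # d/dp_j of ||p - t||^2
                p[j] -= lr * grad
    return [nearest_token(p, vocab_emb) for p in latent]
```

Because only the small latent prompt is optimized and then snapped back to the token vocabulary, no VLM weights are updated, which matches the abstract's claim of avoiding full-model retraining.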


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
white_box, inference_time, digital, physical
Datasets
Dolphins
Applications
autonomous driving, vision-language models