
On the Adversarial Robustness of 3D Large Vision-Language Models

Chao Liu , Ngai-Man Cheung

0 citations · 44 references · arXiv


Published on arXiv

2601.06464

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

3D VLMs are highly vulnerable to untargeted adversarial attacks, but show greater resilience than 2D VLMs against targeted attacks that aim to force specific harmful outputs.

Vision Attack / Caption Attack

Novel technique introduced


3D Vision-Language Models (VLMs), such as PointLLM and GPT4Point, have shown strong reasoning and generalization abilities in 3D understanding tasks. However, their adversarial robustness remains largely unexplored. Prior work in 2D VLMs has shown that the integration of visual inputs significantly increases vulnerability to adversarial attacks, making these models easier to manipulate into generating toxic or misleading outputs. In this paper, we investigate whether incorporating 3D vision similarly compromises the robustness of 3D VLMs. To this end, we present the first systematic study of adversarial robustness in point-based 3D VLMs. We propose two complementary attack strategies: Vision Attack, which perturbs the visual token features produced by the 3D encoder and projector to assess the robustness of vision-language alignment; and Caption Attack, which directly manipulates output token sequences to evaluate end-to-end system robustness. Each attack includes both untargeted and targeted variants to measure general vulnerability and susceptibility to controlled manipulation. Our experiments reveal that 3D VLMs exhibit significant adversarial vulnerabilities under untargeted attacks, while demonstrating greater resilience against targeted attacks aimed at forcing specific harmful outputs, compared to their 2D counterparts. These findings highlight the importance of improving the adversarial robustness of 3D VLMs, especially as they are deployed in safety-critical applications.
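The untargeted variants described in the abstract follow the standard white-box recipe: ascend the model's loss with respect to the continuous input (or visual token) features. A minimal FGSM-style sketch, with the assumption that a toy linear softmax classifier stands in for the 3D encoder and LLM (the paper's actual attacks backpropagate through the full VLM):

```python
import numpy as np

# Illustrative sketch only: a linear softmax classifier is a stand-in for
# the 3D encoder + LLM (an assumption -- the paper's Vision/Caption Attacks
# differentiate through the actual model). Untargeted FGSM takes one signed
# gradient-ascent step on the loss.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_untargeted(W, x, y, eps):
    """One FGSM step: perturb x to INCREASE the cross-entropy on label y."""
    p = softmax(W @ x)
    onehot = np.zeros_like(p); onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)        # d(CE)/dx for a linear softmax model
    return x + eps * np.sign(grad_x)   # ascend the loss (untargeted)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))            # toy "encoder" weights (hypothetical)
x = rng.normal(size=8)                 # toy input features
y = int(np.argmax(W @ x))              # treat the clean prediction as "true"
loss = lambda x_: -np.log(softmax(W @ x_)[y])
x_adv = fgsm_untargeted(W, x, y, eps=0.5)
```

Because the toy loss is convex in x, a single signed ascent step is guaranteed to raise it; for a real 3D VLM the same step is iterated (PGD-style) through the nonconvex network.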


Key Contributions

  • First systematic adversarial robustness study of point-based 3D VLMs (PointLLM, GPT4Point)
  • Vision Attack: perturbs visual token features from the 3D encoder/projector to probe vision-language alignment robustness
  • Caption Attack: directly manipulates output token sequences to evaluate end-to-end system robustness
  • Both attacks include untargeted and targeted variants, measuring general vulnerability and susceptibility to controlled (e.g., forced harmful-output) manipulation

🛡️ Threat Analysis

Input Manipulation Attack

Both proposed attacks are gradient-based adversarial perturbations applied at inference time: the Vision Attack perturbs the visual token features produced by the 3D encoder and projector, while the Caption Attack manipulates output token sequences. Both fit squarely within the adversarial evasion (input manipulation) category.
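The targeted variants instead descend the loss toward an attacker-chosen output while keeping the perturbation inside an L-infinity ball. A PGD-style sketch under the same toy-linear-model assumption (all names hypothetical, not the paper's code):

```python
import numpy as np

# Toy stand-in for a white-box *targeted* attack (assumption: a linear
# softmax classifier replaces the full 3D VLM pipeline).

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def target_loss(W, x, target):
    """Cross-entropy of the model's prediction against the attacker's target."""
    return -np.log(softmax(W @ x)[target])

def pgd_targeted(W, x, target, eps=1.0, alpha=0.1, steps=50):
    """Projected gradient descent toward `target`, constrained to an
    L-infinity ball of radius eps around the clean input x."""
    onehot = np.zeros(W.shape[0]); onehot[target] = 1.0
    x_adv = x.copy()
    best = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        grad = W.T @ (p - onehot)                       # d(CE)/dx, linear model
        x_adv = np.clip(x_adv - alpha * np.sign(grad),  # descend toward target
                        x - eps, x + eps)               # project into eps-ball
        if target_loss(W, x_adv, target) < target_loss(W, best, target):
            best = x_adv                                # keep the best iterate
    return best

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 16))
x = rng.normal(size=16)
target = (int(np.argmax(W @ x)) + 1) % 4   # any label other than the clean one
x_adv = pgd_targeted(W, x, target)
```

The paper's finding that targeted attacks are harder against 3D VLMs corresponds, in this picture, to the descent step making little headway toward the attacker's chosen output within the perturbation budget.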


Details

Domains
vision · nlp · multimodal
Model Types
vlm · llm · transformer
Threat Tags
white_box · inference_time · targeted · untargeted · digital
Applications
3d scene understanding · point cloud question answering · 3d visual grounding