ComplicitSplat: Downstream Models are Vulnerable to Blackbox Attacks by 3D Gaussian Splat Camouflages

Matthew Hull 1, Haoyang Yang 1, Pratham Mehta 1, Mansi Phute 1, Aeree Cho 1, Haorang Wang 1, Matthew Lau 1, Wenke Lee 1, Wilian Lunardi 2, Martin Andreoni 2, Duen Horng Chau 1

Published on arXiv: 2508.11854

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Successfully evades YOLO v3/v5/v8/v11, Faster R-CNN, and DETR detectors using viewpoint-specific adversarial camouflages embedded in 3DGS via spherical harmonics manipulation, without any model access.

ComplicitSplat

Novel technique introduced


As 3D Gaussian Splatting (3DGS) gains rapid adoption in safety-critical tasks for efficient novel-view synthesis from static images, how might an adversary tamper with images to cause harm? We introduce ComplicitSplat, the first attack that exploits standard 3DGS shading methods to create viewpoint-specific camouflage (colors and textures that change with viewing angle) to embed adversarial content in scene objects that is visible only from specific viewpoints, without requiring access to model architecture or weights. Our extensive experiments show that ComplicitSplat generalizes to successfully attack a variety of popular detectors, spanning single-stage, multi-stage, and transformer-based models, on both real-world captures of physical objects and synthetic scenes. To our knowledge, this is the first black-box attack on downstream object detectors using 3DGS, exposing a novel safety risk for applications such as autonomous navigation and other mission-critical robotic systems.
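The viewpoint-specific camouflage rests on how 3DGS shades each Gaussian with spherical harmonics (SH): the rendered color is a view-direction-weighted sum of SH coefficients, so coefficients can be chosen that yield one color from most angles and a different, adversarial color from a target angle. A minimal sketch of degree-1 SH evaluation (constants and sign convention follow the reference 3DGS rasterizer; the coefficient values are purely hypothetical):

```python
import numpy as np

SH_C0 = 0.28209479177387814   # degree-0 SH basis constant
SH_C1 = 0.4886025119029199    # degree-1 SH basis constant

def sh_to_rgb(coeffs, view_dir):
    """Evaluate degree-1 SH color for one Gaussian.

    coeffs:   (4, 3) array - one RGB coefficient triple per SH basis function.
    view_dir: unit vector from the camera toward the Gaussian.
    """
    x, y, z = view_dir
    rgb = (SH_C0 * coeffs[0]
           - SH_C1 * y * coeffs[1]
           + SH_C1 * z * coeffs[2]
           - SH_C1 * x * coeffs[3])
    # The reference rasterizer adds a 0.5 offset before clamping.
    return np.clip(rgb + 0.5, 0.0, 1.0)

# Hypothetical coefficients: neutral DC term, strong x-axis term, so the
# same splat renders differently depending on the viewing direction.
coeffs = np.zeros((4, 3))
coeffs[3] = np.array([-1.0, 1.0, 1.0])

front = sh_to_rgb(coeffs, np.array([1.0, 0.0, 0.0]))   # seen from +x
back  = sh_to_rgb(coeffs, np.array([-1.0, 0.0, 0.0]))  # seen from -x
print(front, back)  # same splat, different colors per viewpoint
```

Because the view-dependent terms cancel to near-neutral colors away from the target direction, the adversarial appearance stays hidden from other viewpoints.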


Key Contributions

  • First attack exploiting 3DGS spherical harmonics shading to embed multiple viewpoint-specific adversarial camouflages in 3D scenes
  • Black-box attack requiring no access to downstream model architecture or weights, generalizing across YOLO (v3/v5/v8/v11), Faster R-CNN, and DETR detectors
  • Demonstrated on both real-world physically-captured 3DGS scenes and synthetic 3DGS scenes, with first public code and data release for camouflaged 3DGS attacks

🛡️ Threat Analysis

Input Manipulation Attack

Creates adversarial inputs by manipulating 3DGS source images to produce rendered views that cause misclassification in downstream object detectors at inference time — a black-box evasion attack requiring no access to model weights or architecture.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
black_box, inference_time, targeted, physical, digital
Datasets
Custom real-world captured 3DGS scenes; Synthetic 3DGS scenes
Applications
object detection, autonomous driving, robotic navigation, aerial surveillance