
AdvSplat: Adversarial Attacks on Feed-Forward Gaussian Splatting Models

Yiran Qiao, Yiren Lu, Yunlai Zhou, Rui Yang, Linlin Hou, Yu Yin, Jing Ma


Published on arXiv: 2603.23686

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Imperceptible perturbations on input images significantly degrade reconstruction quality across multiple datasets in both white-box and black-box settings

AdvSplat

Novel technique introduced


3D Gaussian Splatting (3DGS) is increasingly recognized as a powerful paradigm for real-time, high-fidelity 3D reconstruction. However, its per-scene optimization pipeline limits scalability and generalization, and prevents efficient inference. Recently, feed-forward 3DGS models have emerged to address these limitations, enabling fast reconstruction from a few input views after large-scale pretraining, without scene-specific optimization. Despite their advantages and strong potential for commercial deployment, their reliance on a neural-network backbone amplifies the risk of adversarial manipulation. In this paper, we introduce AdvSplat, the first systematic study of adversarial attacks on feed-forward 3DGS. We first employ white-box attacks to reveal fundamental vulnerabilities of this model family. We then develop two practical, query-efficient black-box algorithms that optimize pixel-space perturbations via a frequency-domain parameterization: one based on gradient estimation and the other gradient-free, neither requiring any access to model internals. Extensive experiments across multiple datasets demonstrate that AdvSplat can significantly disrupt reconstruction results by injecting imperceptible perturbations into the input images. Our findings surface an overlooked yet urgent problem in this domain, and we hope to draw the community's attention to this emerging security and robustness challenge.


Key Contributions

  • First systematic adversarial attack study on feed-forward 3D Gaussian Splatting models
  • White-box attacks revealing fundamental vulnerabilities of feed-forward 3DGS
  • Two query-efficient black-box attack algorithms using frequency-domain parameterization: gradient estimation-based and gradient-free
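The paper does not include code here, but the gradient-estimation variant of the black-box attack can be illustrated with a minimal NumPy sketch: the perturbation lives in a small grid of low-frequency coefficients (a frequency-domain parameterization that keeps the query budget small), and the gradient of the victim's loss is estimated from queries alone via NES-style finite differences. All function names, hyperparameters, and the inverse-FFT mapping below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lowfreq_to_pixels(coeffs, shape, k):
    """Map a small k x k grid of coefficients to a full-resolution pixel
    perturbation via an inverse FFT (low-frequency parameterization).
    Illustrative choice; the paper's exact parameterization may differ."""
    spec = np.zeros(shape, dtype=complex)
    spec[:k, :k] = coeffs                  # keep only low frequencies
    return np.real(np.fft.ifft2(spec))

def nes_black_box_attack(loss_fn, image, k=4, eps=0.03, steps=50,
                         sigma=0.01, lr=0.5, n_samples=20, seed=0):
    """Query-only attack sketch: estimate the gradient of loss_fn w.r.t.
    the low-frequency coefficients with antithetic finite differences,
    then ascend to maximize the victim's loss. loss_fn only sees images,
    never model internals."""
    rng = np.random.default_rng(seed)
    z = np.zeros((k, k))                   # frequency-domain parameters

    def render(zz):
        delta = np.clip(lowfreq_to_pixels(zz, image.shape, k), -eps, eps)
        return image + delta               # L_inf-bounded perturbed input

    for _ in range(steps):
        grad = np.zeros_like(z)
        for _ in range(n_samples):
            u = rng.standard_normal(z.shape)
            lp = loss_fn(render(z + sigma * u))
            lm = loss_fn(render(z - sigma * u))
            grad += (lp - lm) / (2 * sigma) * u   # NES gradient estimate
        z += lr * grad / n_samples         # gradient ascent on the loss
    return np.clip(render(z), 0.0, 1.0)    # final adversarial image
```

With only k*k parameters instead of one per pixel, each gradient estimate needs far fewer queries, which is the practical point of the frequency-domain parameterization.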

🛡️ Threat Analysis

Input Manipulation Attack

Develops gradient-based white-box attacks (PGD) and query-efficient black-box attacks using frequency-domain perturbations to cause misreconstruction at inference time: classic adversarial-example attacks applied to the neural-network backbone of feed-forward 3DGS models.
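The white-box PGD attack named above follows the standard recipe: repeatedly step in the sign of the loss gradient, then project back into an L-infinity ball around the clean input. A minimal NumPy sketch, assuming white-box access via a `grad_fn` that returns the loss gradient with respect to the input (the toy hyperparameters are illustrative, not the paper's settings):

```python
import numpy as np

def pgd_attack(grad_fn, image, eps=8 / 255, alpha=2 / 255, steps=10):
    """L_inf PGD sketch: ascend the victim's loss via gradient signs,
    projecting after each step so ||adv - image||_inf <= eps and the
    result stays a valid image in [0, 1]."""
    adv = image.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))     # ascend the loss
        adv = np.clip(adv, image - eps, image + eps)  # project to eps-ball
        adv = np.clip(adv, 0.0, 1.0)                  # valid pixel range
    return adv
```

For feed-forward 3DGS the loss would compare rendered novel views against the clean reconstruction; here any differentiable surrogate loss (e.g., squared error to a target) demonstrates the loop.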


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, black_box, inference_time, digital
Datasets
RE10K
Applications
3d reconstruction, novel view synthesis