
Spectral Defense Against Resource-Targeting Attack in 3D Gaussian Splatting

Yang Chen 1, Yi Yu 1, Jiaming He 2, Yueqi Duan 3, Zheng Zhu 4, Yap-Peng Tan 5,1



Published on arXiv

2603.12796

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Suppresses Gaussian overgrowth by up to 5.92×, reduces memory by up to 3.66×, and improves rendering speed by up to 4.34× under poisoning attacks

Spectral Defense

Novel technique introduced


Recent advances in 3D Gaussian Splatting (3DGS) deliver high-quality rendering, yet the Gaussian representation exposes a new attack surface: the resource-targeting attack. This attack poisons training images to induce excessive Gaussian growth and cause resource exhaustion. Although efficiency-oriented methods such as smoothing, thresholding, and pruning have been explored, these spatial-domain strategies operate on visible structures and overlook how stealthy perturbations distort the underlying spectral behavior of the training data. As a result, poisoned inputs introduce abnormal high-frequency amplifications that mislead 3DGS into interpreting noisy patterns as detailed structures, ultimately causing unstable Gaussian overgrowth and degraded scene fidelity. To address this, we propose Spectral Defense, operating in both the Gaussian and image fields. We first design a 3D frequency filter that selectively prunes Gaussians exhibiting abnormally high frequencies. Since natural scenes also contain legitimate high-frequency structures, directly suppressing high frequencies is insufficient; we therefore further develop a 2D spectral regularization on renderings that distinguishes naturally isotropic frequencies while penalizing anisotropic angular energy to constrain noisy patterns. Experiments show that our defense builds robust, accurate, and secure 3DGS, suppressing overgrowth by up to 5.92×, reducing memory by up to 3.66×, and improving rendering speed by up to 4.34× under attacks.
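The 2D spectral regularization described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact loss: the FFT-based angular-energy binning, the high-frequency cutoff fraction `hf_cutoff`, and the 36-bin angular histogram are all choices made here for illustration. The core idea it demonstrates is the one in the abstract: isotropic high-frequency content spreads energy evenly over angles, while attack-induced noisy patterns concentrate energy at particular angles and are penalized.

```python
import numpy as np

def angular_anisotropy_penalty(render, hf_cutoff=0.25, nbins=36):
    """Hedged sketch of a 2D spectral regularizer: penalize anisotropic
    angular energy in the high-frequency band of a rendering's spectrum.
    The paper's actual formulation may differ."""
    # Magnitude spectrum with the zero frequency centered.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(render)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    r = np.hypot(yy - cy, xx - cx)
    theta = np.arctan2(yy - cy, xx - cx)
    # Keep only the high-frequency band (radius above a cutoff fraction).
    hf = r > hf_cutoff * min(h, w)
    # Bin the band's energy by angle; isotropic content fills bins evenly.
    bins = np.linspace(-np.pi, np.pi, nbins + 1)
    idx = np.clip(np.digitize(theta[hf], bins) - 1, 0, nbins - 1)
    energy = np.bincount(idx, weights=spec[hf], minlength=nbins)
    energy = energy / (energy.sum() + 1e-12)
    # Anisotropy = squared deviation of the angular distribution from uniform.
    return float(np.square(energy - 1.0 / nbins).sum())
```

As a sanity check, isotropic noise yields a near-uniform angular distribution (small penalty), while an oriented pattern such as vertical stripes concentrates spectral energy along one axis (large penalty).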


Key Contributions

  • 3D frequency filter that selectively prunes Gaussians with abnormally high-frequency responses to suppress attack-induced overgrowth
  • 2D spectral regularization that distinguishes natural isotropic frequencies from attack-induced anisotropic noise patterns
  • Joint spectral defense operating in both 3D Gaussian space and 2D rendering space to defend against resource-targeting poisoning attacks
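The 3D frequency filter in the first contribution can be sketched as below, under clearly labeled assumptions: a Gaussian's spatial frequency is taken as inversely proportional to its largest scale, "abnormally high" is defined here as a z-score outlier in log-frequency, and the opacity guard protecting likely-legitimate fine detail is a choice made for this sketch. The paper's actual pruning criterion may differ.

```python
import numpy as np

def prune_high_frequency_gaussians(scales, opacities, z_thresh=3.0):
    """Hedged sketch of a 3D frequency filter: flag Gaussians whose frequency
    proxy (inverse scale) is a statistical outlier, i.e. abnormally small
    Gaussians that likely encode attack-induced noise rather than geometry.

    scales:    (N, 3) per-axis Gaussian scales
    opacities: (N,) values in [0, 1]
    Returns a boolean keep-mask of shape (N,)."""
    # Frequency proxy: smaller Gaussians represent higher spatial frequencies.
    freq = 1.0 / (scales.max(axis=1) + 1e-8)
    logf = np.log(freq)
    # Standardize log-frequency across the scene.
    z = (logf - logf.mean()) / (logf.std() + 1e-8)
    # Prune outliers unless they are highly opaque (assumed legitimate detail).
    keep = ~((z > z_thresh) & (opacities < 0.9))
    return keep
```

For example, a scene of ordinary Gaussians (scale ≈ 0.1) with a few tiny, low-opacity outliers (scale ≈ 1e-4) keeps the former and prunes the latter.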

🛡️ Threat Analysis

Data Poisoning Attack

The paper addresses a data poisoning attack where adversaries manipulate training images to cause resource exhaustion by triggering excessive Gaussian growth during 3DGS training. The attack vector is poisoned training data, and the defense operates on both the training data (2D spectral regularization on input images) and the model behavior (3D frequency filtering of Gaussians). This is a clear training-time data poisoning scenario.


Details

Domains
vision
Model Types
traditional_ml
Threat Tags
training_time
Applications
3d scene reconstruction, novel view synthesis