
StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions

Bo-Hsu Ke , You-Zhe Xie , Yu-Lun Liu , Wei-Chen Chiu

2 citations · 110 references · arXiv


Published on arXiv

2510.02314

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

The proposed density-guided poisoning method achieves superior attack performance compared to state-of-the-art techniques while minimally affecting innocent viewpoints.

StealthAttack

Novel technique introduced


3D scene representation methods like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have significantly advanced novel view synthesis. As these methods become prevalent, addressing their vulnerabilities becomes critical. We analyze 3DGS robustness against image-level poisoning attacks and propose a novel density-guided poisoning method. Our method strategically injects Gaussian points into low-density regions identified via Kernel Density Estimation (KDE), embedding viewpoint-dependent illusory objects clearly visible from poisoned views while minimally affecting innocent views. Additionally, we introduce an adaptive noise strategy to disrupt multi-view consistency, further enhancing attack effectiveness. We propose a KDE-based evaluation protocol to assess attack difficulty systematically, enabling objective benchmarking for future research. Extensive experiments demonstrate our method's superior performance compared to state-of-the-art techniques. Project page: https://hentci.github.io/stealthattack/
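The core step described above, using Kernel Density Estimation to find low-density regions of the scene where injected Gaussians are hard to notice, can be sketched as follows. This is a minimal illustration with SciPy's `gaussian_kde`, not the authors' implementation; the function name, quantile threshold, and toy data are all assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def find_low_density_regions(points, candidates, quantile=0.1):
    """Score candidate 3D locations by scene point density via KDE.

    points:     (N, 3) existing scene points (e.g. Gaussian centers).
    candidates: (M, 3) candidate injection locations.
    Returns the candidates whose estimated density falls at or below
    the given quantile, i.e. sparsely covered regions of the scene.
    """
    kde = gaussian_kde(points.T)          # fit KDE on existing geometry
    density = kde(candidates.T)           # evaluate density at candidates
    threshold = np.quantile(density, quantile)
    return candidates[density <= threshold]

# Toy scene: a dense cluster at the origin, candidates on a wide grid.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.2, size=(500, 3))
grid = rng.uniform(-2.0, 2.0, size=(1000, 3))
low = find_low_density_regions(scene, grid, quantile=0.1)
```

On this toy data the selected candidates lie, on average, farther from the dense cluster than the candidate pool as a whole, which is exactly the property the attack exploits: injected points in such regions conflict with little existing geometry.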


Key Contributions

  • Density-guided poisoning method using KDE to identify low-density regions in 3DGS for strategic Gaussian point injection that embeds viewpoint-dependent illusions
  • Adaptive noise strategy that disrupts multi-view consistency to enhance attack stealth and effectiveness
  • KDE-based evaluation protocol that systematically quantifies attack difficulty, enabling objective benchmarking for 3DGS poisoning research
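The adaptive noise strategy in the second contribution can be illustrated with a simple per-view scheme: views that observe the scene from directions similar to the poisoned viewpoint receive stronger image noise, weakening the multi-view consistency cues 3DGS would otherwise use to suppress the illusory object. This is a hedged sketch of the idea only, not the paper's exact strategy; the function, scaling rule, and all names are assumptions.

```python
import numpy as np

def adaptive_noise(images, view_dirs, target_dir, max_sigma=0.1):
    """Add per-view Gaussian pixel noise scaled by angular proximity
    to the poisoned (target) viewpoint.

    images:     list of (H, W, 3) float images in [0, 1].
    view_dirs:  list of 3-vectors, one viewing direction per image.
    target_dir: 3-vector, viewing direction of the poisoned view.
    """
    target = target_dir / np.linalg.norm(target_dir)
    rng = np.random.default_rng(0)
    noisy = []
    for img, d in zip(images, view_dirs):
        cos = float(np.dot(d / np.linalg.norm(d), target))
        sigma = max_sigma * max(cos, 0.0)   # stronger near the target view
        noise = rng.normal(0.0, sigma, img.shape)
        noisy.append(np.clip(img + noise, 0.0, 1.0))
    return noisy

# A view aligned with the target gets noise; an opposite view is untouched.
imgs = [np.full((4, 4, 3), 0.5), np.full((4, 4, 3), 0.5)]
dirs = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
out = adaptive_noise(imgs, dirs, np.array([0.0, 0.0, 1.0]))
```

The cosine-based scaling is one plausible choice; any monotone function of view similarity would serve the same purpose of concentrating inconsistency around the poisoned viewpoint.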

🛡️ Threat Analysis

Data Poisoning Attack

The attack corrupts the multi-view training images fed to the 3DGS model, injecting adversarial Gaussian points that degrade scene integrity from targeted viewpoints while preserving appearance from innocent views. The attack vector is image-level training-data poisoning.


Details

Domains
vision
Model Types
generative
Threat Tags
training_time · targeted · digital · white_box
Applications
novel view synthesis · 3d scene reconstruction