
RemedyGS: Defend 3D Gaussian Splatting against Computation Cost Attacks

Yanping Li , Zhening Liu , Zijian Li , Zehong Lin , Jun Zhang

1 citation · 49 references · arXiv


Published on arXiv: 2511.22147

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

RemedyGS achieves state-of-the-art defense performance against white-box, black-box, and adaptive computation cost attacks on 3DGS while maintaining reconstruction utility

RemedyGS

Novel technique introduced


As a mainstream technique for 3D reconstruction, 3D Gaussian splatting (3DGS) has been applied in a wide range of applications and services. Recent studies have revealed critical vulnerabilities in this pipeline and introduced computation cost attacks that cause malicious resource occupation and even denial-of-service (DoS) conditions, thereby hindering the reliable deployment of 3DGS. In this paper, we propose the first effective and comprehensive black-box defense framework, named RemedyGS, against such computation cost attacks, safeguarding 3DGS reconstruction systems and services. Our pipeline comprises two key components: a detector that identifies attacked input images carrying poisoned textures, and a purifier that recovers benign images from their attacked counterparts, mitigating the adverse effects of these attacks. Moreover, we incorporate adversarial training into the purifier to enforce distributional alignment between the recovered and original natural images, thereby enhancing defense efficacy. Experimental results demonstrate that our framework effectively defends against white-box, black-box, and adaptive attacks on 3DGS systems, achieving state-of-the-art performance in both safety and utility.


Key Contributions

  • First black-box defense framework (RemedyGS) against computation cost attacks on 3D Gaussian Splatting systems
  • Two-component pipeline combining an adversarial input detector (identifies poisoned-texture images) and an image purifier that recovers benign inputs
  • Adversarial training integrated into the purifier to enforce distributional alignment between recovered and natural images, improving defense efficacy against adaptive attacks
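The detect-then-purify flow can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real detector and purifier are learned models, whereas here a hypothetical high-frequency-energy score stands in for the detector (computation cost attacks poison images with dense texture detail) and a simple box blur stands in for the purifier.

```python
import numpy as np

def high_freq_energy(image):
    # Hypothetical stand-in detector score: poisoned textures add dense
    # high-frequency detail, so we measure the mean absolute difference
    # between neighboring pixels along both axes.
    return float(np.mean(np.abs(np.diff(image, axis=0))) +
                 np.mean(np.abs(np.diff(image, axis=1))))

def purify(image):
    # Hypothetical stand-in purifier: a 3x3 box blur. The paper's purifier
    # is a trained network with an adversarial alignment objective.
    p = np.pad(image, 1, mode="edge")
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]) / 9.0

def remedy_pipeline(images, threshold=0.5):
    """Detect-then-purify: only flagged images are purified,
    benign inputs pass through to 3DGS training unchanged."""
    out = []
    for img in images:
        if high_freq_energy(img) > threshold:
            out.append(purify(img))  # recover a benign image
        else:
            out.append(img)          # leave benign inputs untouched
    return out
```

The pass-through branch matters for utility: purifying every input would degrade benign reconstructions, so only images the detector flags are modified.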

🛡️ Threat Analysis

Input Manipulation Attack

These attacks craft adversarial input images with poisoned textures that force the 3DGS pipeline into excessive computation, up to denial of service (DoS). The defense addresses this directly through adversarial input detection and input purification with adversarial training, both canonical ML01 defense mechanisms explicitly listed under this category.
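The distributional-alignment idea behind the purifier's adversarial training can be illustrated with a GAN-style objective. This is a hedged sketch, not the paper's loss: a discriminator learns to separate natural images from purifier outputs, while the purifier is trained to fool it alongside a reconstruction term; the weighting `lam` and both loss functions are assumptions for illustration.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy on discriminator probabilities in (0, 1).
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) +
                          (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_on_natural, d_on_recovered):
    # Discriminator: output 1 on natural images, 0 on purifier outputs.
    return bce(d_on_natural, 1.0) + bce(d_on_recovered, 0.0)

def purifier_adv_loss(d_on_recovered, recon_error, lam=0.1):
    # Purifier: fool the discriminator (push its output toward 1) so
    # recovered images match the natural-image distribution, while a
    # reconstruction term keeps them close to the clean target.
    # `lam` is a hypothetical trade-off weight.
    return bce(d_on_recovered, 1.0) + lam * float(recon_error)
```

Alternating these two objectives pushes purified images toward the natural-image distribution, which is what makes the defense robust to adaptive attackers who tune perturbations against a fixed purifier.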


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · black_box · inference_time · digital
Applications
3d reconstruction · neural rendering · 3d gaussian splatting