Removal Attack and Defense on AI-generated Content Latent-based Watermarking
De Zhang Lee 1, Han Fang 1, Hanyi Wang 2,1, Ee-Chien Chang 1
Published on arXiv (arXiv:2509.11745)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
The proposed boundary-exploitation attack removes latent watermarks with up to 15× less distortion than a white-noise baseline; the secret-transformation defense provably reduces any attacker to the equivalent of a white-noise adversary.
Boundary-exploitation watermark removal attack with secret-transformation defense
Novel technique introduced
Digital watermarks can be embedded into AI-generated content (AIGC) by initializing the generation process with starting points sampled from a secret distribution. When combined with pseudorandom error-correcting codes, such watermarked outputs can remain indistinguishable from unwatermarked objects while maintaining robustness under white noise. In this paper, we go beyond indistinguishability and investigate security under removal attacks. We demonstrate that indistinguishability alone does not necessarily guarantee resistance to adversarial removal. Specifically, we propose a novel attack that exploits boundary information leaked by the locations of watermarked objects. This attack significantly reduces the distortion required to remove watermarks, by up to a factor of 15× compared to a baseline white-noise attack under certain settings. To mitigate such attacks, we introduce a defense mechanism that applies a secret transformation to hide the boundary, and we prove that this transformation effectively renders any attacker's perturbations equivalent to those of a naive white-noise adversary. Our empirical evaluations, conducted on multiple versions of Stable Diffusion, validate the effectiveness of both the attack and the proposed defense, highlighting the importance of addressing boundary leakage in latent-based watermarking schemes.
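The boundary-leakage intuition can be illustrated with a toy one-hyperplane detector. This is purely illustrative: the detector, the dimensionality, and the distance scaling below are assumptions for the sketch, not the paper's actual watermarking scheme. An attacker who knows the decision boundary only needs to step across it, while a white-noise attacker must inject isotropic noise whose total norm grows roughly like the square root of the latent dimension before its projection onto the boundary normal is comparable:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256                               # illustrative latent dimensionality
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                # unit normal of the detection hyperplane

def detect(x):
    """Toy detector: watermark present iff the projection onto w is positive."""
    return float(x @ w) > 0.0

t = 1.0                               # margin of the watermarked point
x = t * w                             # watermarked latent at distance t from boundary

# Boundary-aware attack: step just across the hyperplane along w.
eps = 1e-3
x_attacked = x - (t + eps) * w
boundary_distortion = np.linalg.norm(x_attacked - x)   # = t + eps

# White-noise attack: isotropic noise needs a per-coordinate scale on the
# order of t before its projection onto w crosses the boundary, so its
# total norm is on the order of t * sqrt(d).
sigma = t
noise = sigma * rng.standard_normal(d)
whitenoise_distortion = np.linalg.norm(noise)          # roughly t * sqrt(d)

print(detect(x), detect(x_attacked))                   # watermark removed by tiny step
print(whitenoise_distortion / boundary_distortion)     # roughly sqrt(d)
```

In this toy setting the distortion gap scales as sqrt(d); the paper's reported factor of up to 15× is for its own schemes and settings, not this sketch.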
Key Contributions
- Novel boundary-exploitation removal attack on latent-based AIGC watermarks (PRC and Gaussian Shading) that reduces required distortion by up to 15× compared to a white-noise baseline while keeping attacked starting points indistinguishable from originals.
- Secret transformation defense that hides boundary information, provably reducing any adversary's advantage to that of a naive white-noise attacker.
- Empirical validation across multiple Stable Diffusion versions with both exact and approximate inversion settings.
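The defense idea can be sketched in the same toy setting, with a random orthogonal matrix standing in for the secret transformation (an assumption for illustration; the paper's construction may differ). Since the attacker only ever observes transformed latents, even a perturbation aimed exactly at the boundary normal, mapped back into the hidden frame, points in a uniformly random direction, no better aligned with the true normal than white noise of the same norm:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256
w = np.zeros(d); w[0] = 1.0           # boundary normal in the hidden frame

# Secret transformation: a random orthogonal matrix Q (via QR decomposition).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# The attacker sees only Q @ x, so a perturbation delta crafted in the
# observed frame becomes Q.T @ delta in the hidden frame: a random direction.
delta = np.zeros(d); delta[0] = 1.0   # attacker's best guess at the normal
hidden_delta = Q.T @ delta
aligned = abs(hidden_delta @ w)       # component along the true normal

# White-noise perturbation of the same norm, for comparison.
noise = rng.standard_normal(d); noise /= np.linalg.norm(noise)
noise_aligned = abs(noise @ w)

# Both alignments concentrate near 1/sqrt(d); neither is close to 1.
print(aligned, noise_aligned)
```

This mirrors the paper's guarantee in spirit: hiding the boundary behind a secret transformation collapses every attacker's advantage to that of a white-noise adversary.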
🛡️ Threat Analysis
The paper targets content watermarks embedded in AI-generated image outputs (Stable Diffusion). The attack removes these output-integrity watermarks using boundary information leakage, and the defense restores watermark security — both contributions squarely address output integrity and content provenance protection.