MarkSweep: A No-box Removal Attack on AI-Generated Image Watermarking via Noise Intensification and Frequency-aware Denoising
Jie Cao, Zelin Zhang, Qi Li, Jianbing Ni
Published on arXiv
2602.15364
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Reduces the bit accuracy of the HiDDeN and Stable Signature watermarking schemes to below 67% (under the detection threshold) while preserving perceptual image quality, running in under 1 second per image
MarkSweep
Novel technique introduced
AI watermarking embeds invisible signals within images to provide provenance information and identify content as AI-generated. In this paper, we introduce MarkSweep, a novel watermark removal attack that effectively erases embedded watermarks from AI-generated images without degrading visual quality. MarkSweep first amplifies watermark noise in high-frequency regions via edge-aware Gaussian perturbations and injects the result into clean images to train a denoising network. This network integrates two modules, a learnable frequency decomposition module and a frequency-aware fusion module, to suppress the amplified noise and eliminate watermark traces. Theoretical analysis and extensive experiments demonstrate that invisible watermarks are highly vulnerable to MarkSweep, which reduces the bit accuracy of the HiDDeN and Stable Signature watermarking schemes to below 67% while preserving the perceptual quality of AI-generated images.
Key Contributions
- No-box watermark removal requiring only target watermarked images — no access to watermark extractor, model parameters, or paired clean images
- Edge-aware Gaussian perturbation strategy that amplifies watermark noise in high-frequency regions to enable effective denoising network training
- End-to-end denoising network with Learnable Frequency Decomposition Module (LFDM) and Frequency-aware Fusion Module (FaFM) that reconstructs watermark-free images in under 1 second
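The first stage described above — edge-aware Gaussian perturbation that concentrates noise in high-frequency regions to synthesize training data for the denoising network — can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the gradient-based edge map, the noise scaling, and the function name `edge_aware_perturbation` are all assumptions for exposition.

```python
import numpy as np

def edge_aware_perturbation(image, sigma=0.05, seed=0):
    """Hypothetical sketch of MarkSweep's data-synthesis stage:
    add Gaussian noise whose amplitude scales with local edge strength,
    so perturbations concentrate in high-frequency regions where
    invisible watermarks typically reside.
    image: float array in [0, 1], shape (H, W).
    """
    rng = np.random.default_rng(seed)
    # Central-difference gradients approximate a high-frequency edge map.
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]
    gy[1:-1, :] = image[2:, :] - image[:-2, :]
    edge = np.sqrt(gx**2 + gy**2)
    edge = edge / (edge.max() + 1e-8)  # normalize weights to [0, 1]
    # Edge-weighted Gaussian noise: flat regions stay untouched,
    # edge regions receive amplified perturbations.
    noise = rng.normal(0.0, sigma, size=image.shape) * edge
    return np.clip(image + noise, 0.0, 1.0)
```

Pairs of (perturbed, clean) images produced this way could then supervise a denoising network without any access to the watermark extractor, matching the no-box setting claimed above.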
🛡️ Threat Analysis
MarkSweep attacks content watermarks embedded in AI-generated image outputs to defeat provenance attribution — a classic watermark removal attack on output integrity. The watermarks reside in the content (images), not in model weights, so this maps to ML09. Per taxonomy: removing or defeating content watermarks via denoising is ML09 regardless of the underlying perturbation-based protection mechanism.