defense · arXiv · Sep 30, 2025
Alexander Branch, Omead Pooladzandi, Radin Khosraviani et al. · University of California · California Institute of Technology
Defends image classifiers against data poisoning and backdoor attacks via a VQ-VAE bottleneck that destroys fine-grained trigger patterns before training
data poisoning attack · model poisoning · vision
We introduce PureVQ-GAN, a defense against data poisoning that forces backdoor triggers through a discrete bottleneck using a Vector-Quantized VAE (VQ-VAE) with a GAN discriminator. By quantizing poisoned images through a learned codebook, PureVQ-GAN destroys fine-grained trigger patterns while preserving semantic content. The GAN discriminator ensures outputs match the natural image distribution, preventing reconstruction of out-of-distribution perturbations. On CIFAR-10, PureVQ-GAN achieves a 0% poison success rate (PSR) against Gradient Matching and Bullseye Polytope attacks, and 1.64% against Narcissus, while maintaining 91-95% clean accuracy. Unlike diffusion-based defenses that require hundreds of iterative refinement steps, PureVQ-GAN is over 50x faster, making it practical for real training pipelines.
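The core purification mechanism described above is nearest-neighbor codebook quantization: each latent vector is snapped to its closest codebook entry, so a small trigger perturbation that does not cross a code boundary is simply erased. A minimal NumPy sketch of this idea, using a random stand-in codebook and latents (in PureVQ-GAN both would come from the trained VQ-VAE encoder; the sizes and perturbation scale here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned codebook: K entries of dimension D.
# In PureVQ-GAN this is learned during VQ-VAE training; random stand-in here.
K, D = 16, 8
codebook = rng.normal(size=(K, D))

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance)."""
    # Pairwise squared distances, shape (N, K), via broadcasting.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = d.argmin(axis=1)
    return codebook[codes], codes

# Stand-in clean latents (e.g., encoder outputs for image patches).
clean = rng.normal(size=(32, D))
# Poisoned version: clean latents plus a small fine-grained trigger perturbation.
poisoned = clean + 0.05 * rng.normal(size=(32, D))

_, clean_codes = quantize(clean, codebook)
_, poison_codes = quantize(poisoned, codebook)

# Fraction of perturbed latents that snap back to the same discrete code,
# i.e., how much of the sub-threshold trigger signal the bottleneck erases.
agreement = (clean_codes == poison_codes).mean()
print(f"code agreement under perturbation: {agreement:.2f}")
```

Because the decoder only ever sees the discrete codes, any perturbation too fine to flip a code assignment cannot survive into the reconstructed training image; the GAN discriminator then discourages reconstructions that drift off the natural image manifold.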
gan · cnn