attack · arXiv · Nov 10, 2025
Binyan Xu, Fan Yang, Di Tang et al. · The Chinese University of Hong Kong · Sun Yat-Sen University · Zhejiang University
GCB uses conditional InfoGAN to craft clean-image backdoors via label-only poisoning, breaking the stealth-potency trade-off with under 1% accuracy drop
Model Poisoning Data Poisoning Attack vision
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to security-critical applications. A critical flaw in existing methods is that the poison rate required for a successful attack induces a proportional, and thus noticeable, drop in Clean Accuracy (CA), undermining their stealthiness. This paper presents a new paradigm for clean-image attacks that minimizes this accuracy degradation by optimizing the trigger itself. We introduce Generative Clean-Image Backdoors (GCB), a framework that uses a conditional InfoGAN to identify naturally occurring image features that can serve as potent and stealthy triggers. By ensuring these triggers are easily separable from benign task-related features, GCB enables a victim model to learn the backdoor from an extremely small set of poisoned examples, resulting in a CA drop of less than 1%. Our experiments demonstrate GCB's remarkable versatility, successfully adapting to six datasets, five architectures, and four tasks, including the first demonstration of clean-image backdoors in regression and segmentation. GCB also exhibits resilience against most of the existing backdoor defenses.
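The core idea above, label-only ("clean-image") poisoning, can be sketched in a few lines. This is a hypothetical illustration, not GCB itself: `feature_score` stands in for GCB's InfoGAN-derived detector of a naturally occurring trigger feature, which the abstract does not specify, and the dataset is a toy list of (image, label) pairs.

```python
# Hypothetical sketch of clean-image (label-only) poisoning: images are
# never modified; only the labels of a small, feature-selected subset
# are flipped to the attacker's target class.

def poison_labels(dataset, feature_score, target_class, budget):
    """Relabel the `budget` samples whose natural trigger feature is
    strongest; all pixels stay untouched (clean-image constraint)."""
    # Rank sample indices by how strongly each exhibits the feature.
    ranked = sorted(range(len(dataset)),
                    key=lambda i: feature_score(dataset[i][0]),
                    reverse=True)
    poisoned = list(dataset)
    for i in ranked[:budget]:
        image, _ = poisoned[i]
        poisoned[i] = (image, target_class)   # label flip only
    return poisoned

# Toy demo: "images" are scalars and the feature is their magnitude.
data = [(x, 0) for x in [0.1, 0.9, 0.5, 0.8, 0.2]]
poisoned = poison_labels(data, feature_score=lambda x: x,
                         target_class=1, budget=2)
print([label for _, label in poisoned])  # -> [0, 1, 0, 1, 0]
```

The small `budget` reflects the paper's claim that an easily separable trigger feature lets the victim learn the backdoor from very few relabeled examples, keeping the clean-accuracy drop under 1%.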
cnn gan
defense · arXiv · Jan 27, 2026
Binyan Xu, Fan Yang, Xilin Dai et al. · The Chinese University of Hong Kong · Zhejiang University · Sun Yat-Sen University
Defends backdoored vision models at test time by using VLMs as external semantic auditors decoupled from the victim model's parameters
Model Poisoning vision
Deep Neural Networks remain inherently vulnerable to backdoor attacks. Traditional test-time defenses largely operate under an internal-diagnosis paradigm, using methods such as model repairing or input robustness, yet these approaches are often fragile under advanced attacks because they remain entangled with the victim model's corrupted parameters. We propose a paradigm shift from Internal Diagnosis to External Semantic Auditing, arguing that effective defense requires decoupling safety from the victim model via an independent, semantically grounded auditor. To this end, we present a framework harnessing universal Vision-Language Models (VLMs) as evolving semantic gatekeepers. We introduce PRISM (Prototype Refinement & Inspection via Statistical Monitoring), which overcomes the domain gap of general VLMs through two key mechanisms: a Hybrid VLM Teacher that dynamically refines visual prototypes online, and an Adaptive Router powered by statistical margin monitoring that calibrates gating thresholds in real time. Extensive evaluation across 17 datasets and 11 attack types demonstrates that PRISM achieves state-of-the-art performance, suppressing the Attack Success Rate to below 1% on CIFAR-10 while improving clean accuracy, establishing a new standard for model-agnostic, externalized security.
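The external-auditing idea can be illustrated with a minimal sketch. This is not PRISM: `vlm_scores` stands in for a zero-shot VLM classifier (e.g., CLIP-style image-text similarity over class prompts), and the fixed `margin_threshold` is a placeholder for PRISM's adaptive, statistically calibrated router, neither of which is specified in the abstract.

```python
# Hypothetical sketch of external semantic auditing: the victim model's
# prediction is cross-checked against an independent auditor's semantic
# prediction; inputs where a confident auditor disagrees are overridden
# and flagged as likely backdoor triggers.

def audit(victim_pred, vlm_scores, margin_threshold=0.1):
    """Return (accepted_label, flagged) for one input.
    vlm_scores: per-class semantic similarity scores from the auditor."""
    ranked = sorted(range(len(vlm_scores)),
                    key=lambda c: vlm_scores[c], reverse=True)
    top, runner_up = ranked[0], ranked[1]
    margin = vlm_scores[top] - vlm_scores[runner_up]
    # Low margin: the auditor is unsure -> keep the victim's label
    # but flag the input for review.
    if margin < margin_threshold:
        return victim_pred, True
    # Confident auditor disagrees with the victim: override and flag.
    if top != victim_pred:
        return top, True
    return victim_pred, False

# Toy demo: victim says class 2, auditor confidently says class 0.
label, flagged = audit(victim_pred=2, vlm_scores=[0.9, 0.3, 0.2])
print(label, flagged)  # -> 0 True
```

Because the auditor's parameters are independent of the (possibly poisoned) victim model, a trigger that flips the victim's prediction has no effect on the auditor's semantic judgment, which is the decoupling argument the abstract makes.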
vlm cnn transformer