
IU: Imperceptible Universal Backdoor Attack

Hsin Lin, Yan-Lun Chen, Ren-Hung Hwang, Chia-Mu Yu


Published on arXiv: 2603.00711

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves up to a 91.3% attack success rate across all target classes with only a 0.16% poisoning rate on ImageNet-1K, while maintaining benign accuracy and evading SOTA defenses.

IU (Imperceptible Universal backdoor)

Novel technique introduced


Backdoor attacks pose a critical threat to the security of deep neural networks, yet existing efforts on universal backdoors often rely on visually salient patterns, making them easier to detect and less practical at scale. In this work, we introduce a novel imperceptible universal backdoor attack that simultaneously controls all target classes with minimal poisoning while preserving stealth. Our key idea is to leverage graph convolutional networks (GCNs) to model inter-class relationships and generate class-specific perturbations that are both effective and visually invisible. The proposed framework optimizes a dual-objective loss that balances stealthiness (measured by perceptual similarity metrics such as PSNR) and attack success rate (ASR), enabling scalable, multi-target backdoor injection. Extensive experiments on ImageNet-1K with ResNet architectures demonstrate that our method achieves high ASR (up to 91.3%) under poisoning rates as low as 0.16%, while maintaining benign accuracy and evading state-of-the-art defenses. These results highlight the emerging risks of invisible universal backdoors and call for more robust detection and mitigation strategies.
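The paper's own optimization code is not reproduced here, but the dual-objective idea it describes can be sketched in a few lines: measure trigger visibility with PSNR between the clean and poisoned image, and penalize the attack loss when imperceptibility falls below a budget. The function names, the penalty form, and the 40 dB budget are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def psnr(clean, poisoned, max_val=255.0):
    """Peak signal-to-noise ratio between a clean image and its
    poisoned counterpart; higher PSNR means a less visible trigger."""
    mse = np.mean((clean.astype(np.float64) - poisoned.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def dual_objective(attack_loss, clean, poisoned, psnr_budget=40.0, lam=0.1):
    """Hypothetical dual-objective loss: the attack loss plus a stealth
    penalty that activates only when PSNR drops below the budget."""
    stealth_penalty = max(0.0, psnr_budget - psnr(clean, poisoned))
    return attack_loss + lam * stealth_penalty
```

With a perturbation of one intensity level on a uint8-scale image, PSNR sits near 48 dB, so the stealth penalty stays at zero and the attack loss passes through unchanged.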


Key Contributions

  • GCN-based framework that models inter-class relationships to generate class-specific, imperceptible backdoor triggers for universal multi-target control
  • Dual-objective loss balancing perceptual stealthiness (PSNR) and attack success rate, enabling scalable injection across all classes
  • Achieves 91.3% ASR at an extremely low 0.16% poisoning rate on ImageNet-1K while evading state-of-the-art backdoor defenses
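To put the 0.16% poisoning rate in concrete terms, a quick back-of-the-envelope calculation (assuming the standard ImageNet-1K training-set size of 1,281,167 images) shows how few samples the attacker needs to control:

```python
def poison_budget(dataset_size, rate):
    """Number of training samples an attacker must poison at a given rate."""
    return round(dataset_size * rate)

# ImageNet-1K has 1,281,167 training images; a 0.16% rate
# corresponds to roughly 2,050 poisoned samples.
n = poison_budget(1_281_167, 0.0016)
```

That budget, spread across 1,000 target classes, averages out to only about two poisoned images per class, which is what makes the universal multi-target result notable.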

🛡️ Threat Analysis

Model Poisoning

Primary contribution is a backdoor/trojan attack embedding hidden, trigger-activated targeted behavior across all classes simultaneously — textbook ML10. The dual-objective loss, GCN-based trigger generation, and evasion of SOTA backdoor defenses are all backdoor-specific contributions.


Details

Domains
vision
Model Types
cnn, gnn
Threat Tags
training_time, targeted, digital
Datasets
ImageNet-1K
Applications
image classification