
ZQBA: Zero Query Black-box Adversarial Attack

Joana C. Costa, Tiago Roxo, Hugo Proença, Pedro R. M. Inácio

1 citation · 31 references · arXiv


Published on arXiv · 2510.00769

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

ZQBA achieves higher attack success rates than state-of-the-art single-query black-box attacks while requiring zero queries to the target model and preserving imperceptibility.

ZQBA — novel technique introduced


Abstract

Current black-box adversarial attacks require either multiple queries or diffusion models to produce adversarial samples that can impair the target model's performance. However, these methods require training a surrogate loss or a diffusion model to produce adversarial samples, which limits their applicability in real-world settings. Thus, we propose a Zero Query Black-box Adversarial (ZQBA) attack that exploits the representations of Deep Neural Networks (DNNs) to fool other networks. Instead of requiring thousands of queries to produce deceiving adversarial samples, we use the feature maps obtained from a DNN and add them to clean images to impair the classification of a target model. The results suggest that ZQBA can transfer the adversarial samples to different models and across various datasets, namely CIFAR and Tiny ImageNet. The experiments also show that ZQBA is more effective than state-of-the-art black-box attacks with a single query, while maintaining the imperceptibility of perturbations, evaluated both quantitatively (SSIM) and qualitatively, emphasizing the vulnerabilities of employing DNNs in real-world contexts. All the source code is available at https://github.com/Joana-Cabral/ZQBA.


Key Contributions

  • Zero-query transfer attack that requires no access to the target model — exploits feature maps from a surrogate DNN added directly to clean images
  • Demonstrated transferability across different architectures and datasets (CIFAR, Tiny ImageNet) without training a surrogate loss or diffusion model
  • Outperforms state-of-the-art single-query black-box attacks while maintaining imperceptibility measured by SSIM
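The core mechanism — adding a surrogate model's feature map to a clean image as the perturbation — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the feature map below is a random stand-in for an activation extracted from a real surrogate DNN (e.g. via a forward hook), and `epsilon` is a hypothetical perturbation budget chosen for imperceptibility.

```python
import numpy as np

def zqba_perturb(clean_image, feature_map, epsilon=0.05):
    """Craft an adversarial sample by adding a normalized, upsampled
    surrogate feature map to a clean image (zero queries to the target).

    clean_image: float array in [0, 1], shape (H, W, C)
    feature_map: 2-D activation map from a surrogate DNN, shape (h, w)
    epsilon:     perturbation budget (hypothetical value)
    """
    H, W, _ = clean_image.shape
    # Normalize the feature map to [0, 1] so epsilon bounds the perturbation.
    fm = feature_map - feature_map.min()
    fm = fm / (fm.max() + 1e-12)
    # Nearest-neighbour upsample to the image resolution.
    rows = np.arange(H) * fm.shape[0] // H
    cols = np.arange(W) * fm.shape[1] // W
    fm_up = fm[np.ix_(rows, cols)]
    # Add the scaled map to every channel and keep pixels in a valid range.
    adv = clean_image + epsilon * fm_up[..., None]
    return np.clip(adv, 0.0, 1.0)

# Example with a random "feature map" standing in for a surrogate activation.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
fmap = rng.random((8, 8))
adv = zqba_perturb(img, fmap, epsilon=0.05)
```

Because the perturbation is bounded by `epsilon`, the adversarial image stays close to the clean one, which is consistent with the paper's SSIM-based imperceptibility evaluation.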

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a new adversarial example generation method that crafts imperceptible perturbations at inference time to cause misclassification — a classic input manipulation attack. The transfer-based, zero-query nature is a refinement of the attack strategy, not a different threat category.


Details

Domains
vision
Model Types
cnn
Threat Tags
black_box, inference_time, untargeted, digital
Datasets
CIFAR-10, Tiny ImageNet
Applications
image classification