Semantically Guided Adversarial Testing of Vision Models Using Language Models

Katarzyna Filus 1, Jorge M. Cruz-Duarte 2,3,4,5

Published on arXiv: 2508.11341

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Pretrained language and vision-language models (BERT, TinyLLAMA, CLIP) consistently outperform WordNet for adversarial target selection, especially for distant class relationships, enabling more reproducible and scalable adversarial benchmarking.


In targeted adversarial attacks on vision models, the selection of the target label is a critical yet often overlooked determinant of attack success. The target label is the class that the attacker aims to force the model to predict. Existing strategies typically rely on randomness, model predictions, or static semantic resources, which limits interpretability, reproducibility, or flexibility. This paper proposes a semantics-guided framework for adversarial target selection that uses cross-modal knowledge transfer from pretrained language and vision-language models. We evaluate several state-of-the-art models (BERT, TinyLLAMA, and CLIP) as similarity sources to select the most and least semantically related labels with respect to the ground truth, forming best- and worst-case adversarial scenarios. Our experiments on three vision models and five attack methods reveal that these models consistently yield practical adversarial targets and surpass static lexical databases such as WordNet, particularly for distant class relationships. We also observe that static testing of target labels offers a preliminary, *a priori* assessment of the effectiveness of similarity sources. Our results corroborate the suitability of pretrained models for constructing interpretable, standardized, and scalable adversarial benchmarks across architectures and datasets.
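The core selection step — picking the most and least semantically related labels relative to the ground truth — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy NumPy vectors stand in for real CLIP/BERT text-encoder embeddings, and the class names and `select_targets` helper are hypothetical.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_targets(embeddings, true_label):
    """Return (best_case, worst_case) adversarial target labels:
    the most and least similar classes w.r.t. the ground-truth label."""
    anchor = embeddings[true_label]
    sims = {lbl: cosine_sim(anchor, vec)
            for lbl, vec in embeddings.items() if lbl != true_label}
    best = max(sims, key=sims.get)   # semantically close -> best-case target
    worst = min(sims, key=sims.get)  # semantically distant -> worst-case target
    return best, worst

# Toy 3-d embeddings standing in for a pretrained text encoder's output.
emb = {
    "tabby cat": np.array([0.9, 0.1, 0.0]),
    "tiger cat": np.array([0.8, 0.2, 0.1]),
    "airliner":  np.array([0.0, 0.1, 0.9]),
}
best, worst = select_targets(emb, "tabby cat")
# "tiger cat" is the best-case target, "airliner" the worst-case one.
```

In a real pipeline, `embeddings` would come from encoding each class name with a pretrained model (e.g. CLIP's text tower) rather than from hand-written vectors.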


Key Contributions

  • Semantics-guided framework for adversarial target label selection using cross-modal knowledge from pretrained language models (BERT, TinyLLAMA) and vision-language models (CLIP)
  • Comparative evaluation of pretrained model similarity sources vs. static lexical databases (WordNet) for constructing best-case and worst-case adversarial scenarios
  • Demonstration that pretrained model-based similarity sources yield more interpretable, standardized, and scalable adversarial benchmarks across five attack methods and three vision architectures

🛡️ Threat Analysis

Input Manipulation Attack

The paper directly addresses targeted adversarial attacks on vision models; its core contribution is a methodology for systematically selecting adversarial target labels via semantic similarity, evaluated across five gradient-based attack methods and three vision model architectures.


Details

Domains
vision, nlp
Model Types
cnn, transformer
Threat Tags
targeted, digital, inference_time, white_box
Applications
image classification