Attack · 2026

CaptionFool: Universal Image Captioning Model Attacks

Swapnil Parekh



Published on arXiv — 2603.00529

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves 94–96% success rate in generating arbitrary target captions (including offensive content) on BLIP by perturbing only 7 out of 577 image patches.

CaptionFool

Novel technique introduced


Image captioning models are encoder-decoder architectures trained on large-scale image-text datasets, making them susceptible to adversarial attacks. We present CaptionFool, a novel universal (input-agnostic) adversarial attack against state-of-the-art transformer-based captioning models. By modifying only 7 out of 577 image patches (approximately 1.2% of the image), our attack achieves a 94–96% success rate in generating arbitrary target captions, including offensive content. We further demonstrate that CaptionFool can generate "slang" terms specifically designed to evade existing content moderation filters. Our findings expose critical vulnerabilities in deployed vision-language models and underscore the urgent need for robust defenses against such attacks. Warning: This paper contains model outputs which are offensive in nature.


Key Contributions

  • CaptionFool: a universal (input-agnostic) adversarial patch attack on transformer-based image captioning models achieving 94–96% target caption success rate by modifying only 7/577 patches (~1.2% of the image)
  • Extension of Patch-Fool to the universal setting without requiring access to training data
  • Demonstration that the attack can generate adversarial slang terms to systematically evade keyword-based content moderation filters
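The 7/577 budget can be sanity-checked from standard ViT geometry. A minimal sketch, assuming BLIP's ViT encoder takes 384×384 inputs split into 16×16 patches (a 24×24 grid of 576 image patches plus one [CLS] token, for 577 tokens); the exact resolution is an assumption, not stated in this entry:

```python
# Sanity check of the reported patch budget, assuming a 384x384 input
# with 16x16 patches (standard BLIP/ViT geometry; an assumption here).
image_size = 384
patch_size = 16

grid = image_size // patch_size      # 24 patches per side
num_patches = grid * grid            # 576 image patches
num_tokens = num_patches + 1         # + 1 [CLS] token -> 577

perturbed = 7
fraction = perturbed / num_patches   # fraction of the image modified

print(num_tokens)                    # 577
print(round(fraction * 100, 1))      # 1.2 (%)
```

This matches the paper's "7 out of 577 patches (~1.2% of the image)" figure.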

🛡️ Threat Analysis

Input Manipulation Attack

CaptionFool crafts adversarial image patches using gradient-based attention-aware optimization (adapting Patch-Fool) to manipulate transformer-based captioning model outputs at inference time — a canonical input manipulation attack.
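The attack pattern described above can be sketched on a toy model: pick the patches the encoder attends to most (Patch-Fool-style attention-aware selection), then run gradient ascent on the target-caption score while touching only those patches. Everything below is illustrative — the toy "score" function, shapes, step size, and patch budget are assumptions standing in for a real BLIP captioner, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable stand-in for "patch embeddings -> target-caption score".
num_patches, dim, budget = 16, 8, 3
W = rng.normal(size=dim)                       # toy decoder weights for the target caption
attn = rng.random(num_patches)                 # toy per-patch attention from the encoder
patches = rng.normal(size=(num_patches, dim))  # clean patch embeddings

def target_score(p):
    # Higher = toy model "prefers" the attacker's target caption.
    return float(np.tanh(p @ W).sum())

# 1) Attention-aware patch selection: perturb only the `budget`
#    patches the encoder attends to most.
chosen = np.argsort(attn)[-budget:]
mask = np.zeros(num_patches)
mask[chosen] = 1.0

# 2) Gradient ascent on the target score, restricted to chosen patches.
adv = patches.copy()
for _ in range(200):
    grad = (1.0 - np.tanh(adv @ W) ** 2)[:, None] * W  # d(score)/d(adv), analytic
    adv += 0.1 * mask[:, None] * grad                  # update only selected patches

print(target_score(patches), "->", target_score(adv))
```

The unselected patches are left byte-identical to the clean input, which is what makes a small-budget patch attack hard to spot visually; in the real attack the optimization target is the likelihood of the attacker's caption under the captioning decoder rather than a toy score.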


Details

Domains
vision · nlp · multimodal
Model Types
vlm · transformer
Threat Tags
white_box · inference_time · targeted · digital
Applications
image captioning · content moderation evasion · vision-language models