
Poisoning Prompt-Guided Sampling in Video Large Language Models

Yuxin Cao 1, Wei Song 2,3, Jingling Xue 2, Jin Song Dong 1

1 citation · 36 references · arXiv


Published on arXiv · 2509.20851

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PoisonVID achieves 82%–99% attack success rate across three prompt-guided sampling strategies and three VideoLLMs in a fully black-box setting.

PoisonVID

Novel technique introduced


Video Large Language Models (VideoLLMs) have emerged as powerful tools for understanding videos, supporting tasks such as summarization, captioning, and question answering. Their performance has been driven by advances in frame sampling, progressing from uniform-based to semantic-similarity-based and, most recently, prompt-guided strategies. While vulnerabilities have been identified in earlier sampling strategies, the safety of prompt-guided sampling remains unexplored. We close this gap by presenting PoisonVID, the first black-box poisoning attack that undermines prompt-guided sampling in VideoLLMs. PoisonVID compromises the underlying prompt-guided sampling mechanism through a closed-loop optimization strategy that iteratively optimizes a universal perturbation to suppress harmful frame relevance scores, guided by a depiction set constructed from paraphrased harmful descriptions leveraging a shadow VideoLLM and a lightweight language model, i.e., GPT-4o-mini. Evaluated comprehensively on three prompt-guided sampling strategies and three advanced VideoLLMs, PoisonVID achieves an 82%–99% attack success rate, highlighting the need for more robust sampling strategies in future VideoLLMs.


Key Contributions

  • PoisonVID: first black-box attack targeting prompt-guided frame sampling in VideoLLMs, using universal perturbations to suppress harmful frame relevance scores
  • Closed-loop optimization strategy guided by a depiction set constructed from paraphrased harmful descriptions via a shadow VideoLLM and GPT-4o-mini
  • Comprehensive evaluation across three prompt-guided sampling (PGS) strategies (DKS, AKS, FRAG) and three VideoLLMs, demonstrating an 82%–99% attack success rate
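The closed-loop optimization in the first two contributions can be sketched as follows. This is a minimal, runnable illustration only: a toy cosine-similarity scorer stands in for the shadow VideoLLM / VLM relevance scorer, random vectors stand in for the depiction set of paraphrased harmful descriptions, and a generic NES-style finite-difference estimator stands in for the paper's black-box optimizer. All names and hyperparameters here are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_relevance(frame, depiction_embedding):
    # Toy stand-in for a VLM relevance scorer (e.g., BLIP/CLIP):
    # cosine similarity between a flattened frame and a text embedding.
    f = frame.ravel()
    return float(f @ depiction_embedding /
                 (np.linalg.norm(f) * np.linalg.norm(depiction_embedding) + 1e-8))

def optimize_universal_perturbation(frames, depictions,
                                    eps=0.05, steps=100, sigma=1e-3, lr=0.05):
    """Closed-loop, black-box optimization of ONE perturbation shared by all
    harmful frames: estimate the gradient of the mean relevance score with
    random finite differences (NES-style) and descend it, keeping the
    perturbation inside an L-infinity ball of radius eps."""
    delta = np.zeros_like(frames[0])

    def mean_score(d):
        # Average relevance of all perturbed frames over the depiction set.
        return np.mean([surrogate_relevance(np.clip(f + d, 0.0, 1.0), t)
                        for f in frames for t in depictions])

    for _ in range(steps):
        g = np.zeros_like(delta)
        for _ in range(8):  # a few random probes per step
            u = rng.standard_normal(delta.shape)
            g += (mean_score(delta + sigma * u) -
                  mean_score(delta - sigma * u)) / (2 * sigma) * u
        # Descend the score (suppress relevance) and project back into the ball.
        delta = np.clip(delta - lr * g / 8, -eps, eps)
    return delta

frames = [rng.random((8, 8)) for _ in range(4)]          # toy "harmful" frames
depictions = [rng.standard_normal(64) for _ in range(3)]  # toy depiction set
delta = optimize_universal_perturbation(frames, depictions)

before = np.mean([surrogate_relevance(f, t) for f in frames for t in depictions])
after = np.mean([surrogate_relevance(np.clip(f + delta, 0.0, 1.0), t)
                 for f in frames for t in depictions])
print(round(before, 4), round(after, 4))  # relevance typically drops after the attack
```

The key property this sketch preserves is that the optimizer only queries the scorer (black-box access) and produces a single frame-agnostic perturbation, matching the "universal" framing in the paper.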

🛡️ Threat Analysis

Input Manipulation Attack

PoisonVID crafts universal adversarial perturbations applied to video frames at inference time, causing the VLM-based relevance scorer (BLIP/CLIP) to assign low scores to harmful frames so they are excluded from sampling — a classic adversarial input manipulation attack.
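The exclusion mechanism itself is easy to see with a toy top-k sampler. The scores below are hypothetical, and real PGS strategies (DKS, AKS, FRAG) are more involved than a plain top-k, but the failure mode is the same: once the scorer under-rates the harmful frames, the sampler never forwards them to the VideoLLM.

```python
import numpy as np

def prompt_guided_sample(scores, k):
    """Toy prompt-guided sampler: keep the k frames whose (VLM-assigned)
    prompt-relevance scores are highest, returned in temporal order."""
    top = np.argsort(scores)[-k:]
    return sorted(top.tolist())

# Hypothetical relevance scores for an 8-frame clip; frames 3 and 4 are the
# harmful ones the attacker wants the sampler to skip.
clean_scores    = [0.10, 0.20, 0.15, 0.90, 0.85, 0.30, 0.25, 0.12]
poisoned_scores = [0.10, 0.20, 0.15, 0.05, 0.04, 0.30, 0.25, 0.12]  # after perturbation

print(prompt_guided_sample(clean_scores, 3))     # → [3, 4, 5]: harmful frames selected
print(prompt_guided_sample(poisoned_scores, 3))  # → [1, 5, 6]: harmful frames excluded
```

Because the attack happens upstream of the VideoLLM, the model itself never sees the harmful content, which is why the attack works fully black-box against the downstream model.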


Details

Domains
vision · nlp · multimodal
Model Types
vlm · llm
Threat Tags
black_box · inference_time · targeted · digital
Applications
video question answering · video summarization · video captioning · videollms