Two Frames Matter: A Temporal Attack for Text-to-Video Model Jailbreaking
Moyang Chen 1,2, Zonghao Ying 3, Wenzhuo Xu 2, Quancheng Zou 2, Deyue Zhang 2, Dongdong Yang 2, Xiangzheng Zhang 2
Published on arXiv
2603.07028
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
TFM achieves up to a 12% absolute gain in attack success rate on commercial T2V systems by exploiting temporal trajectory infilling under fragmented, boundary-only prompts.
TFM (Two Frames Matter)
Novel technique introduced
Recent text-to-video (T2V) models can synthesize complex videos from lightweight natural-language prompts, raising urgent safety-alignment concerns about real-world misuse. Prior jailbreak attacks typically rewrite unsafe prompts into paraphrases that evade content filters while preserving meaning. Yet these approaches often retain explicit sensitive cues in the input text and therefore overlook a deeper, video-specific weakness. In this paper, we identify a temporal trajectory infilling vulnerability of T2V systems under fragmented prompts: when a prompt specifies only sparse boundary conditions (e.g., start and end frames) and leaves the intermediate evolution underspecified, the model may autonomously reconstruct a plausible trajectory containing harmful intermediate frames, even though the prompt appears benign to input- or output-side filtering. Building on this observation, we propose TFM, a fragmented prompting framework that converts an originally unsafe request into a temporally sparse two-frame specification and further suppresses overtly sensitive cues via implicit substitution. Extensive evaluations across multiple open-source and commercial T2V models demonstrate that TFM consistently enhances jailbreak effectiveness, achieving up to a 12% absolute increase in attack success rate on commercial systems. Our findings highlight the need for temporally aware safety mechanisms that account for model-driven completion beyond the prompt's surface form.
Key Contributions
- Identifies a video-specific "temporal trajectory infilling" vulnerability in T2V models: specifying only sparse boundary-frame conditions causes the model to autonomously reconstruct harmful intermediate content that evades input/output filters.
- Proposes TFM, a two-step fragmented prompting framework that converts an unsafe prompt into a boundary-only specification combined with implicit keyword substitution to suppress explicit sensitive cues.
- Demonstrates up to +12% absolute attack success rate improvement on commercial T2V systems (Kling, Veo2, Luma Ray2) across diverse safety categories in a strictly black-box setting.
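The two-step framework described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the substitution table, function names, and example phrases are all hypothetical. It shows the core idea of TFM — replace explicit cue words with implicit stand-ins, then specify only the boundary frames and leave the intermediate trajectory for the T2V model to infill.

```python
# Hypothetical sketch of TFM-style fragmented prompting.
# All names and the substitution table are illustrative assumptions,
# not taken from the paper's implementation.

# Step 1: implicit substitution -- map overtly sensitive keywords to
# benign-looking stand-ins so the surface prompt passes text filters.
SUBSTITUTIONS = {
    "weapon": "metallic object",
    "explosion": "sudden bright bloom",
}

def implicit_substitute(text: str, table: dict) -> str:
    """Replace explicit sensitive cues with implicit stand-ins."""
    for cue, stand_in in table.items():
        text = text.replace(cue, stand_in)
    return text

# Step 2: two-frame extraction -- specify only the start and end frames;
# the intermediate evolution is deliberately left underspecified, so the
# model's own trajectory infilling reconstructs the omitted content.
def fragment_two_frame(scene: str, start: str, end: str) -> str:
    scene = implicit_substitute(scene, SUBSTITUTIONS)
    start = implicit_substitute(start, SUBSTITUTIONS)
    end = implicit_substitute(end, SUBSTITUTIONS)
    return (
        f"Scene: {scene}\n"
        f"First frame: {start}\n"
        f"Last frame: {end}\n"
        "Generate a smooth, realistic transition between the two frames."
    )

prompt = fragment_two_frame(
    scene="a quiet city street at dusk",
    start="a parked van with a weapon on the seat",
    end="the street lit by a sudden bright bloom",
)
print(prompt)
```

Note that the resulting prompt contains no explicit sensitive keyword and no description of intermediate events; both the sensitive surface cues and the harmful middle of the trajectory are supplied by the model itself, which is precisely what boundary-only filtering fails to anticipate.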