Xiangzheng Zhang

Papers in Database (3)

attack · arXiv · Mar 7, 2026

Two Frames Matter: A Temporal Attack for Text-to-Video Model Jailbreaking

Moyang Chen, Zonghao Ying, Wenzhuo Xu et al. · Wenzhou-Kean University · 360 AI Security Lab

Jailbreaks text-to-video models by exploiting temporal infilling: sparse boundary-frame prompts induce harmful intermediate content generation (see the sketch below).

Prompt Injection · multimodal · generative
PDF
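A minimal sketch of the boundary-frame idea, assuming a keyframe-conditioned T2V interface; the function name, frame count, and both anchor prompts are illustrative, not from the paper:

```python
# Hypothetical illustration of sparse boundary-frame prompting: only the
# first and last frames are constrained, each benign on its own, and the
# model's temporal infilling is left to synthesize everything in between.

def build_boundary_condition(start_prompt: str, end_prompt: str,
                             num_frames: int) -> dict[int, str]:
    """Map frame indices to prompts; unconstrained frames are infilled."""
    return {0: start_prompt, num_frames - 1: end_prompt}

condition = build_boundary_condition(
    start_prompt="a sealed container on a workbench",  # benign anchor
    end_prompt="the same container, open and empty",   # benign anchor
    num_frames=48,
)
# A keyframe-conditioned T2V model would interpolate frames 1..46; per the
# summary, the attack surface is that infilled span, which prompt-level
# safety filters never see as text.
```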
attack · arXiv · Sep 8, 2025

Mask-GCG: Are All Tokens in Adversarial Suffixes Necessary for Jailbreak Attacks?

Junjie Mu, Zonghao Ying, Zhekui Fan et al. · Beihang University · 360 AI Security Lab

Identifies redundant tokens in GCG adversarial suffixes via learnable masking, reducing LLM jailbreak attack time by 16.8%; the masking idea is sketched below.

Input Manipulation Attack · Prompt Injection · nlp
PDF
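A toy PyTorch sketch of the learnable-masking idea as I read the summary; the stand-in loss, the 12-of-20 importance pattern, and all hyperparameters are assumptions, not Mask-GCG's actual objective:

```python
import torch

# Each suffix token gets a sigmoid gate; we trade a (stand-in) attack loss
# against an L1 sparsity term, then prune tokens whose gate stays low.
suffix_len = 20
gate_logits = torch.zeros(suffix_len, requires_grad=True)
opt = torch.optim.Adam([gate_logits], lr=0.1)

def attack_loss(gates: torch.Tensor) -> torch.Tensor:
    # Stand-in for the jailbreak loss on the target LLM: pretend only the
    # first 12 suffix tokens matter, so the remaining 8 are redundant.
    importance = torch.cat([torch.ones(12), torch.zeros(8)])
    return ((1.0 - gates) * importance).sum()

for _ in range(200):
    gates = torch.sigmoid(gate_logits)
    loss = attack_loss(gates) + 0.05 * gates.sum()  # sparsity penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

keep = torch.sigmoid(gate_logits) > 0.5
print(f"kept {int(keep.sum())}/{suffix_len} suffix tokens")  # ~12 survive
```

Dropping the pruned tokens leaves a shorter suffix for the GCG search to optimize, which is plausibly where the reported speedup comes from.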
attack · arXiv · Mar 10, 2026

Reasoning-Oriented Programming: Chaining Semantic Gadgets to Jailbreak Large Vision Language Models

Quanchen Zou, Moyang Chen, Zonghao Ying et al. · 360 AI Security Lab · Wenzhou-Kean University

Jailbreaks VLMs by chaining semantically benign visual gadgets via prompt-controlled reasoning to synthesize harmful outputs, bypassing perception-level alignment (see the sketch below).

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal
PDF
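A minimal sketch of the gadget-chaining structure, assuming the paper's "semantic gadgets" are individually benign visual inputs composed by a controller prompt; the dataclass, file names, roles, and template are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class Gadget:
    image: str  # a visual input that is benign in isolation
    role: str   # the slot it fills in the reasoning chain

# Each gadget passes perception-level safety checks on its own; the
# controller prompt drives the model to reason across them, so any harmful
# semantics only emerge at composition time.
chain = [
    Gadget("container.png", "object"),
    Gadget("arrow.png", "operation"),
    Gadget("timer.png", "condition"),
]

controller_prompt = (
    "Treat each image as one step of a procedure and describe the combined "
    "process in order: "
    + " -> ".join(f"[{g.role}: {g.image}]" for g in chain)
)
print(controller_prompt)
```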