Lanqing Hong

h-index: 2 · 8 citations · 3 papers (total)

Papers in Database (1)

attack · arXiv · Nov 20, 2025

Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models

Yijun Yang, Lichao Wang, Jianping Zhang et al. · The Chinese University of Hong Kong · Beijing Institute of Technology +1 more

Adversarial image attack that jailbreaks GPT-4o, Gemini-Pro, and Llama-4 by hiding harmful instructions inside competing visual objectives, and transfers across VLMs

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp