Md Jueal Mia

h-index: 11 · 281 citations · 30 papers (total)

Papers in Database (2)

attack · arXiv · Sep 24, 2025

JaiLIP: Jailbreaking Vision-Language Models via Loss Guided Image Perturbation

Md Jueal Mia, M. Hadi Amini · Florida International University

Gradient-optimized adversarial image perturbations that jailbreak VLMs by jointly minimizing MSE and harmful-output loss

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
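The summary above describes a gradient-based attack that balances two objectives: keep the perturbation visually small (MSE term) while steering the model toward a harmful output. A minimal numeric sketch of that joint objective is below; the linear "model" `w`, the trade-off weight `lam`, and the step count are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy sketch of loss-guided image perturbation (assumptions throughout:
# a linear surrogate stands in for the VLM, and the harmful-output loss
# is modeled as the negative of a score w @ x_adv).
rng = np.random.default_rng(0)
x = rng.random(16)        # stand-in for a flattened input image
w = rng.random(16)        # stand-in model: harmful-output score = w @ x_adv
lam = 0.1                 # trade-off: imperceptibility vs. attack strength
lr = 0.05

delta = np.zeros_like(x)  # adversarial perturbation to optimize
for _ in range(200):
    # Joint loss: L = MSE(x + delta, x) + lam * (-(w @ (x + delta)))
    # d L / d delta = 2*delta/n - lam*w  (closed form for this toy model)
    grad = 2 * delta / x.size - lam * w
    delta -= lr * grad    # gradient descent on the perturbation only

x_adv = x + delta
mse = np.mean(delta ** 2)             # distortion stays bounded
score_gain = w @ x_adv - w @ x        # harmful score strictly increases
```

The design point the sketch illustrates: because the MSE term penalizes large `delta` quadratically, the optimization settles at a perturbation that raises the harmful-output score without unbounded distortion.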
attack · arXiv · Nov 17, 2025

Jailbreaking Large Vision Language Models in Intelligent Transportation Systems

Badhan Chandra Das, Md Tasnim Jawad, Md Jueal Mia et al. · Florida International University

Jailbreaks LVLMs in transportation contexts using typographic image attacks and multi-turn prompting, and proposes a filtering-based defense

Prompt Injection · multimodal · vision · nlp