Xiaojun Jia

h-index: 1 · 3 citations · 4 papers (total)

Papers in Database (3)

attack · arXiv · Dec 23, 2025

Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography

Songze Li, Jiameng Cheng, Yiming Li et al. · Southeast University · Nanyang Technological University

Dual steganography hides malicious prompts and harmful responses inside images to jailbreak GPT-4o, Gemini, and Grok-3 with a 99% success rate (a generic steganography sketch follows this entry)

Input Manipulation Attack · Prompt Injection · vision · nlp · multimodal
3 citations · PDF · Code
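The image-steganography idea behind this entry is easiest to see with a plain least-significant-bit (LSB) embed/extract pair. The sketch below is a generic illustration, not the Odysseus dual-steganography pipeline: the embed/extract function names, the red-channel encoding, and the null-byte terminator are assumptions made for the example.

```python
# Minimal LSB steganography sketch (illustrative only; NOT the paper's method).
# Hides a short text payload in the least significant bit of each pixel's red
# channel, then recovers it from the saved PNG.
from PIL import Image

def embed(img_path: str, text: str, out_path: str) -> None:
    img = Image.open(img_path).convert("RGB")
    # Payload bits, MSB-first per byte, followed by an all-zero terminator byte.
    bits = "".join(f"{b:08b}" for b in text.encode()) + "0" * 8
    pixels = img.load()
    w, h = img.size
    assert len(bits) <= w * h, "image too small for payload"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite LSB of red channel
    img.save(out_path, "PNG")  # lossless format preserves the LSBs

def extract(img_path: str) -> str:
    img = Image.open(img_path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    out, byte = bytearray(), 0
    for i in range(w * h):
        x, y = i % w, i // w
        byte = (byte << 1) | (pixels[x, y][0] & 1)  # collect red-channel LSBs
        if i % 8 == 7:
            if byte == 0:        # terminator byte reached
                break
            out.append(byte)
            byte = 0
    return out.decode(errors="replace")
```

A lossless format such as PNG is required here; JPEG recompression would destroy the embedded bits.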
attack · arXiv · Jan 9, 2026

Knowledge-Driven Multi-Turn Jailbreaking on Large Language Models

Songze Li, Ruishi He, Xiaojun Jia et al. · Southeast University · Nanyang Technological University +1 more

Proposes Mastermind, a hierarchical multi-agent jailbreak framework that autonomously learns and adapts attack strategies across multi-turn LLM conversations

Prompt Injection · nlp
1 citation · PDF
attack · arXiv · Jan 19, 2026

CODE: A Contradiction-Based Deliberation Extension Framework for Overthinking Attacks on Retrieval-Augmented Generation

Xiaolei Zhang, Xiaojun Jia, Liquan Chen et al. · Southeast University · Nanyang Technological University

Poisons RAG knowledge bases with contradiction-laden documents, causing LLMs to overconsume reasoning tokens by 5–25x without affecting answer accuracy (a toy retrieval-poisoning sketch follows this entry)

Prompt Injection · Model Denial of Service · nlp
PDF
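The toy below shows only the retrieval half of the idea behind this entry: contradiction documents written to reuse a query's exact terms can land in a lexical retriever's top-k results, so the downstream generator sees mutually inconsistent evidence and spends extra reasoning tokens reconciling it. This is an assumption-laden sketch using scikit-learn TF-IDF, not the CODE framework or its evaluation; the corpus, query, and k are invented for illustration.

```python
# Toy retrieval-poisoning illustration (NOT the paper's CODE framework).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The Eiffel Tower is 330 metres tall and stands in Paris.",          # genuine
    "Paris is the capital of France and hosts the Eiffel Tower.",        # genuine
    # injected contradiction pair, phrased to match the query's vocabulary
    "The Eiffel Tower is 330 metres tall according to official records.",
    "The Eiffel Tower is only 124 metres tall; the 330 metre figure is wrong.",
]
query = "How tall is the Eiffel Tower?"

vectorizer = TfidfVectorizer().fit(corpus + [query])
doc_vecs = vectorizer.transform(corpus)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_vecs).ravel()
top_k = scores.argsort()[::-1][:3]          # k = 3 retrieved passages
for rank, idx in enumerate(top_k, 1):
    print(f"{rank}. score={scores[idx]:.2f}  {corpus[idx]}")
# The injected documents typically rank highly because they reuse the query's
# exact terms, placing a direct contradiction into the generator's context.
```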