Shouling Ji

h-index: 2 · 25 citations · 15 papers (total)

Papers in Database (2)

attack · arXiv · Jan 8, 2026

StealthGraph: Exposing Domain-Specific Risks in LLMs through Knowledge-Graph-Guided Harmful Prompt Generation

Huawei Zheng, Xinqi Jiang, Sen Yang et al. · Zhejiang University

A knowledge-graph-guided framework that generates domain-specific, implicitly harmful prompts which evade LLM safety defenses in finance and healthcare

Prompt Injection · nlp
1 citation · PDF · Code
attack · arXiv · Oct 20, 2025

Multimodal Safety Is Asymmetric: Cross-Modal Exploits Unlock Black-Box MLLMs Jailbreaks

Xinkai Wang, Beibei Li, Zerui Shao et al. · Sichuan University · Tianjin University +1 more

A black-box, RL-based jailbreak framework that exploits multimodal safety asymmetry to achieve over 95% attack success on GPT-4o and Gemini

Prompt Injection · nlp · multimodal
1 citation · PDF