Bo Gao

Papers in Database (1)

attack · arXiv · Sep 22, 2025

Jailbreaking LLMs via Semantically Relevant Nested Scenarios with Targeted Toxic Knowledge

Ning Xu, Bo Gao, Hui Dou · Anhui University

An automated black-box LLM jailbreak using semantically relevant nested scenarios with targeted toxic knowledge achieves a 96.69% attack success rate.

Prompt Injection · NLP
2 citations · PDF · Code