Imperceptible Jailbreaking against Large Language Models
Kuofeng Gao, Yiming Li, Chao Du, Xin Wang, Xingjun Ma, Shu-Tao Xia, Tianyu Pang
Published on arXiv: 2510.05025
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Invisible adversarial suffixes composed of Unicode variation selectors achieve high jailbreak attack success rates against four aligned LLMs, with no visible modification to the displayed prompt.
Novel Technique Introduced
Imperceptible Jailbreak / Chain-of-Search
Jailbreaking attacks on the vision modality typically rely on imperceptible adversarial perturbations, whereas attacks on the textual modality are generally assumed to require visible modifications (e.g., non-semantic suffixes). In this paper, we introduce imperceptible jailbreaks that exploit a class of Unicode characters called variation selectors. By appending invisible variation selectors to malicious questions, the jailbreak prompts appear visually identical to original malicious questions on screen, while their tokenization is "secretly" altered. We propose a chain-of-search pipeline to generate such adversarial suffixes to induce harmful responses. Our experiments show that our imperceptible jailbreaks achieve high attack success rates against four aligned LLMs and generalize to prompt injection attacks, all without producing any visible modifications in the written prompt. Our code is available at https://github.com/sail-sg/imperceptible-jailbreaks.
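To make the invisibility property concrete, here is a minimal Python sketch (not the authors' code): appending Unicode variation selectors leaves the rendered string visually unchanged while altering its codepoint, and hence token, sequence. The example prompt and the commented tiktoken snippet are illustrative assumptions, not taken from the paper.

```python
# Variation selectors VS1-VS16 live at U+FE00-U+FE0F; VS17-VS256 at
# U+E0100-U+E01EF. Appended to text, they render as nothing on screen
# but still change the underlying codepoint (and token) sequence.

question = "How do I pick a lock?"  # illustrative prompt, not from the paper
suffix = "".join(chr(0xFE00 + i) for i in range(4))  # VS1..VS4, all invisible
adversarial = question + suffix

print(adversarial)                   # displays exactly like `question`
print(adversarial == question)       # False: the codepoints differ
print([hex(ord(c)) for c in adversarial[-4:]])  # ['0xfe00', ..., '0xfe03']

# Any real tokenizer encodes the two strings differently, e.g.:
# import tiktoken
# enc = tiktoken.get_encoding("cl100k_base")
# print(enc.encode(question))
# print(enc.encode(adversarial))  # extra tokens from the invisible suffix
```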
Key Contributions
- First demonstration that invisible Unicode variation selectors can be adversarially optimized as imperceptible suffixes to circumvent LLM safety alignment
- Chain-of-search pipeline that uses bootstrapped random search to maximize the log-likelihood of the target response's opening tokens over multiple rounds, reusing suffixes that succeeded earlier as initialization for still-failing cases (see the sketch after this list)
- Generalization of the attack to prompt injection scenarios, with high attack success rates against four aligned LLMs while leaving the rendered prompt visually unchanged
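The following is a minimal sketch of the bootstrapped random-search idea behind chain-of-search, not the authors' implementation. The `target_start_loglik` scorer, the `is_jailbroken` judge, and all constants (suffix length, iteration and round counts) are hypothetical placeholders for calls to the target LLM.

```python
import random

# Pool of invisible characters: VS1-VS16 plus VS17-VS256.
VARIATION_SELECTORS = [chr(0xFE00 + i) for i in range(16)] + \
                      [chr(0xE0100 + i) for i in range(240)]

def target_start_loglik(prompt: str) -> float:
    """Hypothetical scorer: log-likelihood that the target model opens its
    response with an affirmative prefix (e.g. "Sure, here"), given prompt."""
    raise NotImplementedError("query the target LLM here")

def is_jailbroken(question: str, suffix: list[str]) -> bool:
    """Hypothetical judge: did the model produce a harmful completion?"""
    raise NotImplementedError("check the model's response here")

def random_search(question: str, suffix: list[str], iters: int = 200):
    """One search round: mutate one invisible position at a time and keep
    the candidate whenever the target-start log-likelihood improves."""
    best = suffix
    best_score = target_start_loglik(question + "".join(best))
    for _ in range(iters):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.choice(VARIATION_SELECTORS)
        score = target_start_loglik(question + "".join(cand))
        if score > best_score:
            best, best_score = cand, score
    return best

def chain_of_search(questions, suffix_len: int = 40, rounds: int = 3):
    """Bootstrapped rounds: suffixes that succeeded in earlier rounds seed
    the initialization for questions that are still failing."""
    seeds = [[random.choice(VARIATION_SELECTORS) for _ in range(suffix_len)]]
    results = {}
    for _ in range(rounds):
        for q in questions:
            if q in results:
                continue
            suffix = random_search(q, random.choice(seeds))
            if is_jailbroken(q, suffix):
                results[q] = suffix
                seeds.append(suffix)  # reuse as init for failed cases
    return results
```

The bootstrapping step is the distinguishing design choice: rather than restarting each failed question from a fresh random suffix, later rounds draw their initialization from the growing pool of suffixes that already worked elsewhere.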