benchmark · arXiv · Oct 16, 2025
Anyuan Zhuo, Xuefei Ning, Ningyuan Li et al. · Shanghai University of Finance and Economics · Tsinghua University
Benchmarks LLM robustness to invisible Unicode character injection intended to deter exam cheating, finding surprising resilience.
Prompt Injection · NLP
This work investigates the resilience of contemporary LLMs against frequent and structured character-level perturbations, specifically the insertion of noisy characters after each input character. We introduce UCC-Inj, a practical method that inserts invisible Unicode control characters into text to discourage LLM misuse in scenarios such as online exam systems. Surprisingly, despite strong obfuscation that fragments tokenization and substantially reduces the signal-to-noise ratio, many LLMs still maintain notable performance. Through comprehensive evaluation across model-, problem-, and noise-related configurations, we examine the extent and mechanisms of this robustness, testing hypotheses about how models handle character-level tokenization and whether they denoise character-level noise implicitly or explicitly. We hope our findings on the low-level robustness of LLMs will shed light on the risks of their misuse and on the reliability of deploying LLMs across diverse applications.
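The perturbation the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: UCC-Inj's exact choice of invisible control characters is not specified here, so the sketch assumes a small set of zero-width Unicode characters.

```python
import random

# Assumed set of invisible characters for illustration; the characters
# used by UCC-Inj itself may differ.
ZW_CHARS = ["\u200b", "\u200c", "\u200d"]  # zero-width space / non-joiner / joiner


def inject_invisible(text: str, seed: int = 0) -> str:
    """Insert one invisible character after every input character,
    mirroring the 'noisy character after each input character' scheme."""
    rng = random.Random(seed)
    return "".join(ch + rng.choice(ZW_CHARS) for ch in text)


noisy = inject_invisible("What is 2 + 2?")
# The perturbed string renders identically to the original in most UIs,
# but its length doubles and its tokenization is fragmented.
print(len(noisy))   # twice the original length
print(noisy == "What is 2 + 2?")  # False
```

Because each visible character is followed by an invisible one, copy-pasting such text into an LLM halves the effective signal-to-noise ratio at the character level, which is the obfuscation whose effectiveness the paper evaluates.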
llm