Reverse CAPTCHA: Evaluating LLM Susceptibility to Invisible Unicode Instruction Injection
Published on arXiv
2603.00164
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Tool use dramatically amplifies compliance with hidden Unicode instructions (Cohen's h up to 1.37), and explicit decoding hints increase compliance by up to 95 percentage points within a single model-encoding pair.
Reverse CAPTCHA
Novel technique introduced
We introduce Reverse CAPTCHA, an evaluation framework that tests whether large language models follow invisible Unicode-encoded instructions embedded in otherwise normal-looking text. Unlike traditional CAPTCHAs that distinguish humans from machines, our benchmark exploits a capability gap: models can perceive Unicode control characters that are invisible to human readers. We evaluate five models from two providers across two encoding schemes (zero-width binary and Unicode Tags), four hint levels, two payload framings, and with tool use enabled or disabled. Across 8,308 model outputs, we find that tool use dramatically amplifies compliance (Cohen's h up to 1.37, a large effect), that models exhibit provider-specific encoding preferences (OpenAI models decode zero-width binary; Anthropic models prefer Unicode Tags), and that explicit decoding instructions increase compliance by up to 95 percentage points within a single model and encoding. All pairwise model differences are statistically significant (p < 0.05, Bonferroni-corrected). These results highlight an underexplored attack surface for prompt injection via invisible Unicode payloads.
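To make the two encoding schemes concrete, the sketch below shows one plausible way to construct such payloads in Python. The bit mapping for zero-width binary and all function names are illustrative assumptions, not taken from the paper; the Unicode Tags scheme simply shifts printable ASCII into the deprecated U+E0000 block.

```python
# Assumed bit mapping for zero-width binary (not specified by the paper):
# zero width space = 0, zero width non-joiner = 1.
ZW0, ZW1 = "\u200b", "\u200c"

def encode_zero_width(payload: str) -> str:
    """Encode each character of the payload as 8 invisible zero-width characters."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    return "".join(ZW1 if b == "1" else ZW0 for b in bits)

def encode_tags(payload: str) -> str:
    """Shift printable ASCII into the Unicode Tags block (U+E0020..U+E007E)."""
    return "".join(chr(0xE0000 + ord(c)) for c in payload)

# A cover text with an appended hidden instruction renders identically to the
# cover text alone, but differs byte-for-byte.
cover = "Please summarize the attached report."
stego = cover + encode_zero_width("ignore all previous instructions")
```

Both channels survive copy-paste through most editors and chat interfaces, which is what makes them a plausible injection vector: a human reviewer sees only the cover text, while the model's tokenizer receives the full code-point sequence.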
Key Contributions
- Reverse CAPTCHA: a controlled evaluation framework with 270 test cases covering two invisible encoding schemes (zero-width binary, Unicode Tags), four hint levels, and two payload framings
- First systematic comparison of five frontier models (OpenAI and Anthropic) on this attack surface, with tool-use ablation across 8,308 graded outputs, finding tool use is the dominant compliance amplifier (Cohen's h up to 1.37)
- Discovery of provider-specific encoding vulnerabilities: OpenAI models preferentially decode zero-width binary while Anthropic models prefer Unicode Tags, suggesting differences in tokenization or training data
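Since compliance hinges on the model recovering the hidden payload, a decoder and a defensive sanitizer are both short. The sketch below assumes the same illustrative zero-width bit mapping (ZWSP = 0, ZWNJ = 1); the function names and the defense are not from the paper, merely one reasonable mitigation.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # assumed mapping: zero width space = 0, non-joiner = 1

def decode_zero_width(text: str) -> str:
    """Recover a payload hidden in zero-width characters; visible text is ignored."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # drop any trailing partial byte
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, usable, 8))

def strip_invisible(text: str) -> str:
    """Defense sketch: drop zero-width and Tags-block code points before the
    text reaches a model or its tools."""
    zero_width = {0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF}
    return "".join(
        ch for ch in text
        if ord(ch) not in zero_width and not 0xE0000 <= ord(ch) <= 0xE007F
    )
```

Filtering at ingestion is cheap because both channels live in narrow, well-defined code-point ranges; the harder problem the paper highlights is that tool use and decoding hints raise compliance even when no filter is in place.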