Defense · 2025

Broken-Token: Filtering Obfuscated Prompts by Counting Characters-Per-Token

Shaked Zychlinski, Yuval Kainan

0 citations · 21 references · arXiv

Published on arXiv · 2510.26847

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A simple CPT threshold achieves near-perfect accuracy in identifying cipher/encoded prompts across numerous encoding schemes, even for very short inputs, with negligible computational cost compared to perplexity- or LLM-based detectors.

CPT-Filtering

Novel technique introduced


Large Language Models (LLMs) are susceptible to jailbreak attacks in which malicious prompts are disguised using ciphers and character-level encodings to bypass safety guardrails. While these guardrails often fail to interpret the encoded content, the underlying models can still process the harmful instructions. We introduce CPT-Filtering, a novel, model-agnostic guardrail technique with negligible cost and near-perfect accuracy that mitigates these attacks by leveraging the intrinsic behavior of Byte-Pair Encoding (BPE) tokenizers. Our method is based on the principle that tokenizers, trained on natural language, represent out-of-distribution text, such as ciphers, using a significantly larger number of shorter tokens. The technique relies on a simple yet powerful artifact of language model use: the average number of Characters Per Token (CPT) in the text. This approach is motivated by the high compute cost of current methods, which rely on added modules such as dedicated LLMs or perplexity models. We validate our approach on a dataset of over 100,000 prompts, testing numerous encoding schemes with several popular tokenizers. Our experiments demonstrate that a simple CPT threshold robustly identifies encoded text with high accuracy, even for very short inputs. CPT-Filtering provides a practical defense layer that can be immediately deployed for real-time text filtering and offline data curation.
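The core mechanism can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it stands in for a real BPE tokenizer with a toy greedy longest-match tokenizer over a tiny English subword vocabulary, and the names (`cpt`, `greedy_tokenize`, `looks_encoded`) and the threshold value are assumptions. The key behavior it reproduces is that in-distribution text compresses into long tokens (high CPT), while cipher-like text fragments into near-character-level tokens (low CPT).

```python
# Illustrative sketch of CPT-Filtering. VOCAB, the tokenizer, and THRESHOLD
# are toy stand-ins; in practice you would use a production BPE tokenizer
# (e.g. from the `tokenizers` or `tiktoken` libraries) and tune the threshold.

VOCAB = {"the", "and", "ing", "ion", "to", "of", "in", "er", "is", "you",
         "please", "tell", "me", "how", "make", "a", " ", "at", "re", "on"}
MAX_TOKEN_LEN = max(len(t) for t in VOCAB)

def greedy_tokenize(text):
    """Greedy longest-match tokenization. Unknown spans fall back to single
    characters, mimicking how a BPE tokenizer fragments out-of-distribution
    text into many short tokens."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + MAX_TOKEN_LEN), i, -1):
            if text[i:j].lower() in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:  # no vocabulary match: emit a single-character token
            tokens.append(text[i])
            i += 1
    return tokens

def cpt(text, tokenize=greedy_tokenize):
    """Average Characters Per Token -- the statistic the paper thresholds on."""
    tokens = tokenize(text)
    return len(text) / max(len(tokens), 1)

THRESHOLD = 1.5  # illustrative; the paper calibrates this per tokenizer

def looks_encoded(text):
    """Flag text whose CPT falls below the threshold as likely obfuscated."""
    return cpt(text) < THRESHOLD

natural = "please tell me how to make a cake"
encoded = "cGxlYXNlIHRlbGwgbWUgaG93"  # Base64-style string
```

With this toy vocabulary, the natural sentence tokenizes into mostly whole words (CPT well above 1.5), while the Base64-style string degenerates into single-character tokens (CPT near 1.0), so only the encoded input is flagged. A real deployment would replace `greedy_tokenize` with the target model's own tokenizer, which is what makes the defense model-agnostic and essentially free at inference time.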


Key Contributions

  • CPT-Filtering: a model-agnostic, near-zero-cost guardrail that detects obfuscated/encoded prompts by measuring the average Characters Per Token (CPT) ratio from BPE tokenizers
  • Empirical validation across 100,000+ prompts spanning numerous cipher and character-level encoding schemes across multiple popular tokenizers
  • Public dataset (jfrog/obfuscation-identification on HuggingFace) for benchmarking obfuscation detection

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
jfrog/obfuscation-identification (100,000+ prompts)
Applications
llm safety guardrails, real-time text filtering, offline data curation