
The Truncation Blind Spot: How Decoding Strategies Systematically Exclude Human-Like Token Choices

Esteban Garces Arias 1,2, Nurzhan Sapargali 1, Christian Heumann 1, Matthias Aßenmacher 1,2


Published on arXiv (arXiv:2603.18482)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Analyzed 1.8M texts across 8 LLMs, 5 decoding strategies, and 53 hyperparameter configurations; found 8-18% of human tokens excluded by likelihood-based truncation, enabling reliable detection via simple classifiers


Standard decoding strategies for text generation, including top-k, nucleus sampling, and contrastive search, select tokens based on likelihood, restricting selection to high-probability regions. Human language production operates differently: tokens are chosen for communicative appropriateness rather than statistical frequency. This mismatch creates a truncation blind spot: contextually appropriate but statistically rare tokens remain accessible to humans yet unreachable by likelihood-based decoding. We hypothesize this contributes to the detectability of machine-generated text. Analyzing over 1.8 million texts across eight language models, five decoding strategies, and 53 hyperparameter configurations, we find that 8-18% of human-selected tokens fall outside typical truncation boundaries. Simple classifiers trained on predictability and lexical diversity achieve remarkable detection rates. Crucially, neither model scale nor architecture correlates strongly with detectability; truncation parameters account for most variance. Configurations achieving low detectability often produce incoherent text, indicating that evading detection and producing natural text are distinct objectives. These findings suggest detectability is enhanced by likelihood-based token selection, not merely a matter of model capability.
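The blind spot described above can be made concrete with a small sketch of nucleus (top-p) truncation. This is not the paper's code; the distribution, the cutoff value, and the "human choice" index are illustrative assumptions showing how a contextually apt but low-probability token can fall outside the kept set.

```python
import numpy as np

def nucleus_keep_set(probs, p=0.90):
    """Return the indices kept by nucleus (top-p) truncation:
    the smallest set of highest-probability tokens whose
    cumulative mass reaches p."""
    order = np.argsort(probs)[::-1]          # sort tokens by descending probability
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1
    return set(order[:cutoff].tolist())

# toy next-token distribution over a 6-token vocabulary (assumed values)
probs = np.array([0.40, 0.30, 0.15, 0.08, 0.05, 0.02])
kept = nucleus_keep_set(probs, p=0.90)

human_choice = 4  # a rare but contextually appropriate token (hypothetical)
print(human_choice in kept)  # → False: the human-like token is unreachable
```

With p = 0.90, the sampler keeps only the four most probable tokens (cumulative mass 0.93), so any human preference for token 4 or 5 is excluded by construction, regardless of context.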


Key Contributions

  • Identifies truncation blind spot: 8-18% of human-selected tokens fall outside typical decoding truncation boundaries
  • Demonstrates that simple classifiers using predictability and lexical diversity achieve high detection rates for AI-generated text
  • Shows detectability is driven by decoding strategy truncation parameters rather than model scale or architecture
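A minimal sketch of the kind of "simple classifier" the paper describes, assuming two summary features: mean per-token log-probability (predictability) and type-token ratio (lexical diversity). The toy texts, log-probabilities, and the hand-set decision rule below are all illustrative stand-ins, not the paper's trained model or data.

```python
def features(tokens, token_logprobs):
    """Compute two summary features per text:
    mean log-probability (predictability) and
    type-token ratio (lexical diversity)."""
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    ttr = len(set(tokens)) / len(tokens)
    return mean_lp, ttr

# toy examples (assumed values): machine text skews toward
# high-probability, repetitive tokens; human text does not
machine = (["the", "cat", "sat", "on", "the", "mat", "the", "cat"],
           [-1.0, -1.2, -0.9, -0.8, -1.0, -1.1, -1.0, -1.2])
human = (["the", "tabby", "sprawled", "across", "a", "sun-warmed", "rug", "today"],
         [-1.5, -6.2, -5.8, -3.1, -2.0, -8.4, -4.9, -3.3])

def classify(tokens, token_logprobs):
    """A crude linear rule standing in for a trained classifier."""
    mean_lp, ttr = features(tokens, token_logprobs)
    return "machine" if mean_lp > -2.5 and ttr < 0.9 else "human"

print(classify(*machine))  # → machine
print(classify(*human))    # → human
```

The point of the sketch is the feature choice, not the threshold: likelihood-based truncation compresses both features for generated text, which is why even a two-feature rule can separate the classes.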

🛡️ Threat Analysis

Output Integrity Attack

The paper focuses on detecting AI-generated text through statistical analysis of token selection patterns. This is a content authenticity/provenance task: distinguishing machine-generated from human-written text using predictability and lexical diversity features.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
WikiNews
Applications
text generation, content detection