
Explaining Generalization of AI-Generated Text Detectors Through Linguistic Analysis

Yuxi Xia, Kinga Stańczak, Benjamin Roth

0 citations · 50 references · arXiv


Published on arXiv · 2601.07974

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Linguistic features such as passive voice ratio and short sentence ratio show Pearson correlations above 0.7 with generalization performance in specific detector-condition configurations, but no universal linguistic signal explains generalization across all settings.


AI-text detectors achieve high accuracy on in-domain benchmarks, but often struggle to generalize across generation conditions such as unseen prompts, model families, or domains. While prior work has reported these generalization gaps, there is little insight into their underlying causes. In this work, we present a systematic study aimed at explaining generalization behavior through linguistic analysis. We construct a comprehensive benchmark that spans 6 prompting strategies, 7 large language models (LLMs), and 4 domain datasets, resulting in a diverse set of human- and AI-generated texts. Using this dataset, we fine-tune classification-based detectors on various generation settings and evaluate their cross-prompt, cross-model, and cross-dataset generalization. To explain the performance variance, we compute correlations between generalization accuracies and the shifts of 80 linguistic features from training to test conditions. Our analysis reveals that generalization performance for specific detectors and evaluation conditions is significantly associated with linguistic features such as tense usage and pronoun frequency.
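The core analysis described above can be sketched in a few lines: for each linguistic feature, correlate the train-to-test feature shift with the detector's generalization accuracy across evaluation conditions. The sketch below uses made-up illustrative numbers and a plain Pearson coefficient; it is not the authors' pipeline.

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One entry per (train, test) condition pair: the shift in a feature's
# mean value between training and test data, and the detector's accuracy
# under that condition. Values below are purely illustrative.
feature_shift = [0.01, 0.05, 0.12, 0.20, 0.30]
gen_accuracy  = [0.95, 0.90, 0.80, 0.72, 0.60]

r = pearson(feature_shift, gen_accuracy)
# A strongly negative r would suggest this feature's shift tracks the
# generalization drop for this detector/condition configuration.
```

Repeating this over all 80 features and all condition pairs yields the per-configuration correlations the paper reports (e.g. |r| > 0.7 in specific settings).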


Key Contributions

  • A new benchmark spanning 6 prompting strategies, 7 LLMs, and 4 domain datasets for evaluating cross-condition generalization of AI-text detectors
  • Large-scale correlation analysis linking shifts in 80 linguistic features to detector generalization performance across cross-prompt, cross-model, and cross-dataset conditions
  • Empirical finding that tense usage and pronoun frequency are significantly associated with generalization accuracy, though no single universal linguistic signal explains all cases
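To make the feature side concrete, here is a minimal sketch of extracting a few of the surface features named above (passive voice ratio, short sentence ratio, pronoun frequency) with crude regex heuristics. The heuristics and the short-sentence threshold are illustrative assumptions, not the paper's actual 80-feature pipeline, which would typically rely on a full NLP toolkit.

```python
import re

# Closed set of personal pronouns used for the frequency feature
# (an illustrative assumption, not the paper's exact inventory).
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "me", "him", "her", "us", "them"}

def linguistic_features(text, short_len=8):
    """Return three example features as ratios in [0, 1]."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    # Naive passive-voice marker: a be-verb followed by a word
    # ending in -ed/-en (misses irregular participles, by design).
    passive = sum(bool(re.search(
        r"\b(was|were|is|are|been|being)\s+\w+(ed|en)\b", s.lower()))
        for s in sentences)
    short = sum(len(s.split()) < short_len for s in sentences)
    pron = sum(t in PRONOUNS for t in tokens)
    n_sent = max(len(sentences), 1)
    return {
        "passive_voice_ratio": passive / n_sent,
        "short_sentence_ratio": short / n_sent,
        "pronoun_frequency": pron / max(len(tokens), 1),
    }
```

Computing these features separately on the training and test conditions and taking the difference of their means gives the "feature shift" quantity used in the correlation analysis.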

🛡️ Threat Analysis

Output Integrity Attack

AI-generated text detection is a core ML09 topic (output authenticity/provenance). This paper constructs a comprehensive benchmark to evaluate detector generalization and analyzes the linguistic root causes of failure, contributing novel interpretability insights to the AI-text detection field.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
Custom benchmark (6 prompts × 7 LLMs × 4 domains), M4GT, MULTITuDE, MultiSocial
Applications
ai-generated text detection, content moderation, academic integrity