
Detecting LLM-Generated Text with Performance Guarantees

Hongyi Zhou 1, Jin Zhu 2, Erhan Xu 1, Chengchun Shi 3

3 citations · 61 references · arXiv


Published on arXiv · 2601.06586

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves higher classification accuracy than existing detectors while maintaining type-I error control and computational efficiency without relying on watermarks or LLM-specific auxiliary information.

StatDetectLLM

Novel technique introduced


Large language models (LLMs) such as GPT, Claude, Gemini, and Grok have been deeply integrated into our daily life. They now support a wide range of tasks -- from dialogue and email drafting to assisting with teaching and coding, serving as search engines, and much more. However, their ability to produce highly human-like text raises serious concerns, including the spread of fake news, the generation of misleading governmental reports, and academic misconduct. To address this practical problem, we train a classifier to determine whether a piece of text is authored by an LLM or a human. Our detector is deployed on an online CPU-based platform https://huggingface.co/spaces/stats-powered-ai/StatDetectLLM, and contains three novelties over existing detectors: (i) it does not rely on auxiliary information, such as watermarks or knowledge of the specific LLM used to generate the text; (ii) it more effectively distinguishes between human- and LLM-authored text; and (iii) it enables statistical inference, which is largely absent in the current literature. Empirically, our classifier achieves higher classification accuracy compared to existing detectors, while maintaining type-I error control, high statistical power, and computational efficiency.


Key Contributions

  • Watermark-free LLM-generated text detector that requires no knowledge of the specific LLM or its internal components
  • Statistical inference framework for detection with formal type-I error control and high statistical power — largely absent from prior detectors
  • Deployed online CPU-based detection platform (HuggingFace) achieving higher classification accuracy than existing baselines
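The paper's own StatDetectLLM procedure is not reproduced here, but the kind of finite-sample type-I error control it advertises can be illustrated with a standard split-calibration sketch: hold out a set of detector scores on known human-written text, then pick a decision threshold so that at most a fraction α of human text is flagged. All function names and the uniform-score example below are hypothetical, not taken from the paper.

```python
import numpy as np

def calibrate_threshold(human_scores, alpha=0.05):
    """Choose tau so that, conformal-style, the probability a fresh
    human-written text scores above tau is at most alpha."""
    scores = np.sort(np.asarray(human_scores))
    n = len(scores)
    # (1 - alpha) empirical quantile with the usual (n + 1) finite-sample correction
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return scores[k - 1]

def detect(score, tau):
    """Flag a text as LLM-generated iff its detector score exceeds tau."""
    return score > tau

# Hypothetical calibration set: 99 evenly spaced human-text scores in (0, 1)
tau = calibrate_threshold(np.arange(1, 100) / 100, alpha=0.05)
# tau = 0.95, so at most ~5% of human texts are falsely flagged
```

The detector's actual scoring function (perplexity-based, classifier logits, etc.) is orthogonal to this calibration step; type-I error control comes from the held-out human scores alone.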

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated content detection — training a novel classifier to distinguish LLM-authored from human-authored text is a core ML09 contribution (output integrity/content provenance). The paper introduces a new detection architecture with statistical inference capabilities, not merely applying an existing method to a specific domain.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Applications
llm-generated text detection, academic integrity, misinformation detection