
Zero-Shot Statistical Tests for LLM-Generated Text Detection using Finite Sample Concentration Inequalities

Tara Radvand , Mojtaba Abdolmaleki , Mohamed Mostagir , Ambuj Tewari

3 citations · 93 references · arXiv

Published on arXiv: 2501.02406

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves 82.5% average TPR at a fixed 5% FPR in black-box settings, with type I and type II errors proven to decrease exponentially with text length, outperforming all non-commercial baselines.


Verifying the provenance of content is crucial to the function of many organizations, such as educational institutions, social media platforms, and firms. This problem is becoming increasingly challenging as text generated by Large Language Models (LLMs) becomes almost indistinguishable from human-generated content. In addition, many institutions utilize in-house LLMs and want to ensure that external, non-sanctioned LLMs do not produce content within the institution. In this paper, we answer the following question: Given a piece of text, can we identify whether it was produced by a particular LLM or not? We model LLM-generated text as a sequential stochastic process with complete dependence on history. We then design zero-shot statistical tests to (i) distinguish between text generated by two different known sets of LLMs $A$ (non-sanctioned) and $B$ (in-house), and (ii) identify whether text was generated by a known LLM or generated by any unknown model, e.g., a human or some other language generation process. We prove that the type I and type II errors of our test decrease exponentially with the length of the text. To that end, we show that if $B$ generates the text, then except with an exponentially small probability in string length, the log-perplexity of the string under $A$ converges to the average cross-entropy of $B$ and $A$. We then present experiments using LLMs with white-box access to support our theoretical results and empirically examine the robustness of our results to black-box settings and adversarial attacks. In the black-box setting, our method achieves an average TPR of 82.5% at a fixed FPR of 5%. Under adversarial perturbations, our minimum TPR is 48.6% at the same FPR threshold. Both results outperform all non-commercial baselines. See https://github.com/TaraRadvand74/llm-text-detection for code, data, and an online demo of the project.


Key Contributions

  • Zero-shot statistical tests based on log-perplexity and concentration inequalities that distinguish between (i) two known LLM sets and (ii) a known LLM vs. human/unknown source, with provably exponentially decreasing Type I and Type II errors in text length
  • Theoretical result showing log-perplexity converges to average cross-entropy except with exponentially small probability in string length, grounding the detection tests in information-theoretic guarantees
  • Empirical validation achieving 82.5% TPR at 5% FPR in black-box settings and 48.6% TPR under adversarial perturbations, outperforming all non-commercial baselines, with a public code repo and online demo
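The core idea behind the tests can be illustrated with a toy sketch. The model names, distributions, and decision rule below are illustrative assumptions, not the paper's implementation: two hypothetical "language models" are reduced to fixed next-symbol distributions (i.i.d., whereas the paper handles the general history-dependent case), and the sketch shows the log-perplexity of a string generated by model B concentrating around the cross-entropy H(B, A) under model A and around the entropy H(B) under B itself, which is what makes a simple threshold or comparison test work:

```python
import math
import random

random.seed(0)

# Hypothetical toy setup: two "models" over a 4-symbol alphabet,
# each a fixed next-symbol distribution (i.i.d. for simplicity).
ALPHABET = "abcd"
p_B = [0.7, 0.1, 0.1, 0.1]      # in-house model B (the true generator)
p_A = [0.25, 0.25, 0.25, 0.25]  # non-sanctioned model A

def sample(p, n):
    """Draw an i.i.d. string of length n from distribution p."""
    return random.choices(ALPHABET, weights=p, k=n)

def log_perplexity(text, p):
    """-(1/n) * sum of log-probabilities of each symbol under p."""
    idx = {c: i for i, c in enumerate(ALPHABET)}
    return -sum(math.log(p[idx[c]]) for c in text) / len(text)

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) * log q(x); H(p, p) is the entropy of p."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

n = 20_000
text = sample(p_B, n)

# Log-perplexity under A concentrates around H(B, A), while under B it
# concentrates around H(B, B) < H(B, A) -- the gap that the test exploits.
lp_A = log_perplexity(text, p_A)
lp_B = log_perplexity(text, p_B)
print(f"under A: {lp_A:.4f}  (H(B,A) = {cross_entropy(p_B, p_A):.4f})")
print(f"under B: {lp_B:.4f}  (H(B,B) = {cross_entropy(p_B, p_B):.4f})")

# Sketch of a decision rule: attribute the text to whichever model
# assigns it the lower log-perplexity.
verdict = "B" if lp_B < lp_A else "A"
print("attributed to:", verdict)
```

Because each per-symbol log-probability is bounded, standard concentration inequalities (e.g., Hoeffding's) give exponentially small deviation probabilities in the string length n, mirroring the paper's exponential bounds on type I and type II errors.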

🛡️ Threat Analysis

Output Integrity Attack

Directly targets AI-generated content detection and output provenance: the paper's primary contribution is statistical tests that verify whether text originated from a specific LLM versus a human or a different model, which places it squarely in output integrity and content authentication.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time
Datasets
GPT-2 family (small/medium/large/XL), GPT-Neo
Applications
ai-generated text detection, content provenance verification, academic integrity, social media moderation, ai regulation compliance