defense 2025

AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees

Hongyi Zhou 1, Jin Zhu 2, Pingfan Su 3, Ying Yang 2, Erhan Xu 1, Shakeel A O B Gavioli-Akilagun 4, Chengchun Shi 2

5 citations · 1 influential · 98 references · arXiv


Published on arXiv: 2510.01268

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

AdaDetectGPT improves state-of-the-art logits-based detectors by up to 37% AUC in white-box and up to 20% in black-box settings while providing formal finite-sample error rate guarantees

AdaDetectGPT

Novel technique introduced


We study the problem of determining whether a piece of text was authored by a human or by a large language model (LLM). Existing state-of-the-art logits-based detectors rely on statistics derived from the log-probability of the observed text evaluated under the distribution of a given source LLM. However, relying solely on log-probabilities can be sub-optimal. In response, we introduce AdaDetectGPT -- a novel classifier that adaptively learns a witness function from training data to enhance the performance of logits-based detectors. We provide statistical guarantees on its true positive rate, false positive rate, true negative rate, and false negative rate. Extensive numerical studies show that AdaDetectGPT nearly uniformly improves the state-of-the-art method across various combinations of datasets and LLMs, with improvements of up to 37%. A Python implementation of our method is available at https://github.com/Mamba413/AdaDetectGPT.
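To ground the abstract's premise, the sketch below shows what a plain logits-based detector looks like: score a text by the average per-token log-probability under the source LLM and threshold it. This is a minimal illustration, not the paper's method; `token_logprobs` and the threshold value are hypothetical stand-ins for log-probabilities a real implementation would query from the model.

```python
def logits_score(token_logprobs):
    """Mean log-probability of the observed tokens under the source LLM.

    In a real detector these values come from evaluating the text with the
    source model; here they are supplied directly for illustration.
    """
    return sum(token_logprobs) / len(token_logprobs)

def classify(token_logprobs, threshold=-3.0):
    """Flag text as LLM-generated when its tokens are 'too likely' on average."""
    return "llm" if logits_score(token_logprobs) > threshold else "human"

# Fluent, high-probability tokens (typical of LLM output) vs. a burstier,
# lower-probability sequence (more typical of human writing).
print(classify([-1.2, -0.8, -1.5, -0.9]))   # mean -1.10  -> "llm"
print(classify([-4.1, -2.7, -6.3, -3.9]))   # mean -4.25  -> "human"
```

Relying only on this raw average is exactly the sub-optimality the abstract points to: it weights every token's log-probability identically, which motivates learning a witness function instead.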


Key Contributions

  • AdaDetectGPT: an adaptive classifier that learns a witness function from training data to enhance existing logits-based detectors, optimized via a lower bound on the true negative rate
  • Finite-sample statistical guarantees on TPR, FPR, TNR, and FNR — filling a gap in theoretical analysis for logits-based AI text detectors
  • Empirical improvements of up to 37% AUC over SOTA in white-box settings and up to 20% in black-box settings across multiple datasets and LLMs
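The first contribution can be illustrated with a hedged sketch of the witness-function idea: instead of averaging raw token log-probabilities, the detection statistic averages a learned transformation w(·) of them. Everything below is an assumption-laden toy, not the paper's estimator: the basis functions, the least-squares fit, and the toy data are all hypothetical, and AdaDetectGPT optimizes its witness function via a lower bound on the true negative rate rather than by least squares.

```python
import numpy as np

# Fixed basis functions; the witness is a learned linear combination of these.
BASIS = [lambda x: x, lambda x: x**2, np.tanh]

def features(token_logprobs):
    """Average each basis function over the tokens of one text."""
    lp = np.asarray(token_logprobs, dtype=float)
    return np.array([b(lp).mean() for b in BASIS])

def fit_witness(texts_logprobs, labels):
    """Least-squares weights separating LLM (+1) from human (-1) texts.

    A stand-in for the paper's objective, which maximizes a lower bound
    on the true negative rate instead.
    """
    X = np.stack([features(t) for t in texts_logprobs])
    y = np.asarray(labels, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def witness_score(token_logprobs, w):
    """Detection statistic: the learned witness applied to the text."""
    return float(features(token_logprobs) @ w)

# Toy training data: LLM texts concentrate at high token log-probabilities.
llm = [[-1.0, -0.9, -1.2], [-0.8, -1.1, -0.7]]
human = [[-4.0, -2.5, -5.5], [-3.8, -6.0, -2.9]]
w = fit_witness(llm + human, [1, 1, -1, -1])
# LLM-generated texts should now score higher than human-written ones.
print(witness_score(llm[0], w), witness_score(human[0], w))
```

The design point: because w is fit on labeled training data, the statistic adapts to whatever separates the two classes under the source LLM, rather than fixing the identity map on log-probabilities; the paper's guarantees on TPR/FPR/TNR/FNR then come from analyzing this learned statistic in finite samples.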

🛡️ Threat Analysis

Output Integrity Attack

Directly addresses AI-generated text detection — distinguishing human-authored from LLM-generated text is a core output integrity and content provenance problem. The paper proposes a novel detection architecture (AdaDetectGPT) with formal statistical guarantees on error rates, which is a primary ML09 contribution, not a mere application of existing methods.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time
Applications
ai-generated text detection, academic integrity, misinformation detection