Defense · 2025

Black-box Detection of LLM-generated Text Using Generalized Jensen-Shannon Divergence

Shuangyi Chen, Ashish Khisti

0 citations · 44 references · arXiv


Published on arXiv · 2510.07500

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

SurpMark consistently matches or surpasses regeneration-based and global-statistic baselines across multiple datasets and source models, without requiring per-instance contrastive generation.

SurpMark

Novel technique introduced


We study black-box detection of machine-generated text under practical constraints: the scoring model (proxy LM) may mismatch the unknown source model, and per-input contrastive generation is costly. We propose SurpMark, a reference-based detector that summarizes a passage by the dynamics of its token surprisals. SurpMark quantizes surprisals into interpretable states, estimates a state-transition matrix for the test text, and scores it via a generalized Jensen-Shannon (GJS) gap between the test transitions and two fixed references (human vs. machine) built once from historical corpora. We prove a principled discretization criterion and establish the asymptotic normality of the decision statistic. Empirically, across multiple datasets, source models, and scenarios, SurpMark consistently matches or surpasses baselines; our experiments corroborate the statistic's asymptotic normality, and ablations validate the effectiveness of the proposed discretization.
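The pipeline the abstract describes — quantize token surprisals into discrete states, estimate a state-transition matrix, then score the gap between GJS divergences to fixed human and machine references — can be sketched as below. This is a minimal illustrative implementation, not the authors' code: the bin edges, add-one smoothing, equal mixing weight, and per-row averaging of the divergence are all assumptions made here for clarity.

```python
import numpy as np

def quantize_surprisals(surprisals, bin_edges):
    """Map each token surprisal (from a proxy LM) to a discrete state."""
    return np.digitize(surprisals, bin_edges)

def transition_matrix(states, n_states, smoothing=1.0):
    """Estimate a row-stochastic state-transition matrix with additive smoothing
    (smoothing keeps every entry positive, so the KL terms below are finite)."""
    counts = np.full((n_states, n_states), smoothing)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def gjs(p, q, w=0.5):
    """Generalized Jensen-Shannon divergence between two distributions,
    with mixing weight w (w=0.5 recovers the standard JS divergence)."""
    m = w * p + (1 - w) * q
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return w * kl(p, m) + (1 - w) * kl(q, m)

def surpmark_score(test_surprisals, bin_edges, ref_human, ref_machine, w=0.5):
    """Row-averaged GJS gap: positive scores mean the test text's transition
    dynamics are closer to the machine reference than to the human one."""
    n = len(bin_edges) + 1
    t = transition_matrix(quantize_surprisals(test_surprisals, bin_edges), n)
    gap = sum(gjs(t[i], ref_human[i], w) - gjs(t[i], ref_machine[i], w)
              for i in range(n))
    return gap / n
```

In practice the two reference matrices would be built once from historical human and machine corpora, which is what makes the detector reference-based and avoids per-input regeneration.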


Key Contributions

  • SurpMark: a reference-based, black-box LLM-text detector that uses token surprisal quantization and state-transition matrices scored via generalized Jensen-Shannon divergence — requiring no per-instance regeneration
  • Theoretical analysis providing a principled discretization criterion (bias-variance trade-off for optimal bin count) and asymptotic normality of the decision statistic
  • Comprehensive empirical evaluation showing SurpMark matches or surpasses baselines across multiple datasets, source models, and proxy-model mismatch scenarios
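For reference, the generalized Jensen-Shannon divergence that underlies the decision statistic can be written as follows; the mixing weight $\alpha$ and this exact parameterization are a standard formulation and may differ in notation from the paper's:

```latex
\mathrm{GJS}_{\alpha}(P \,\|\, Q)
  = \alpha \, D_{\mathrm{KL}}(P \,\|\, M)
  + (1-\alpha) \, D_{\mathrm{KL}}(Q \,\|\, M),
\qquad M = \alpha P + (1-\alpha) Q,
```

where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence. Setting $\alpha = \tfrac{1}{2}$ recovers the ordinary Jensen–Shannon divergence; unlike raw KL, the mixture $M$ keeps the statistic finite even when $P$ and $Q$ have differing supports.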

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel AI-generated content detection method (SurpMark) that distinguishes human-written from machine-generated text — a core output integrity and content provenance concern. The paper introduces a new detection architecture with theoretical guarantees, not merely applying existing detection to a new domain.


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Applications
ai-generated text detection · llm content attribution · academic integrity