Defense · 2025

Simplex-Optimized Hybrid Ensemble for Large Language Model Text Detection Under Generative Distribution Drift

Sepyan Purnama Kristanto, Lutfi Hakim, Dianni Yusuf

0 citations · 28 references · arXiv


Published on arXiv · 2511.22153

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves 94.2% accuracy and AUC of 0.978 on a 30,000-document corpus including unseen LLM families and paraphrased attack variants, with reduced false positives on scientific articles versus baselines.

Simplex-Optimized Hybrid Ensemble

Novel technique introduced


The widespread adoption of large language models (LLMs) has made it difficult to distinguish human writing from machine-produced text in many real applications. Detectors that were effective for one generation of models tend to degrade when newer models or modified decoding strategies are introduced. In this work, we study this lack of stability and propose a hybrid ensemble that is explicitly designed to cope with changing generator distributions. The ensemble combines three complementary components: a RoBERTa-based classifier fine-tuned for supervised detection, a curvature-inspired score based on perturbing the input and measuring changes in model likelihood, and a compact stylometric model built on hand-crafted linguistic features. The outputs of these components are fused on the probability simplex, and the weights are chosen via validation-based search. We frame this approach in terms of variance reduction and risk under mixtures of generators, and show that the simplex constraint provides a simple way to trade off the strengths and weaknesses of each branch. Experiments on a 30,000-document corpus drawn from several LLM families, including models unseen during training and paraphrased attack variants, show that the proposed method achieves 94.2% accuracy and an AUC of 0.978. The ensemble also lowers false positives on scientific articles compared to strong baselines, which is critical in educational and research settings where wrongly flagging human work is costly.
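The fusion step described in the abstract is a convex combination of the three branch probabilities, with weights constrained to the probability simplex and selected by a validation search. A minimal sketch of what such a search could look like is below; the function names (`simplex_grid`, `search_weights`) and the accuracy objective are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

def simplex_grid(step=0.1):
    """Enumerate weight triples (w1, w2, w3) on the probability simplex,
    i.e. non-negative weights summing to 1, at the given grid resolution."""
    n = round(1 / step)
    for i, j in product(range(n + 1), repeat=2):
        if i + j <= n:
            yield (i * step, j * step, (n - i - j) * step)

def fused_score(branch_probs, weights):
    """Convex combination of the three branch probabilities."""
    return sum(w * p for w, p in zip(weights, branch_probs))

def search_weights(val_probs, val_labels, step=0.1, threshold=0.5):
    """Pick the simplex weights that maximize validation accuracy.

    val_probs: list of (p_roberta, p_curvature, p_stylometric) triples
    val_labels: list of 0/1 ground-truth labels (1 = machine-generated)
    """
    best_w, best_acc = None, -1.0
    for w in simplex_grid(step):
        preds = [fused_score(p, w) >= threshold for p in val_probs]
        acc = sum(pr == bool(y) for pr, y in zip(preds, val_labels)) / len(val_labels)
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc
```

The simplex constraint keeps the fused output a valid probability whenever the branch outputs are, which is one reason convex fusion is a natural choice for combining heterogeneous detectors.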


Key Contributions

  • Hybrid ensemble combining RoBERTa classifier, perturbation-based curvature score, and stylometric classifier with simplex-constrained fusion weights selected via validation-based grid search
  • Theoretical framing of LLM text detection as classification under generative distribution drift, analyzing risk and variance for detectors across shifting generator families
  • GenDrift-30K dataset of 30,000 documents separating in-distribution and out-of-distribution generators including paraphrased attack variants for systematic cross-generator evaluation
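The curvature branch mentioned above perturbs the input and measures the resulting change in model likelihood; the intuition (as in DetectGPT-style detectors) is that machine text tends to sit near a local maximum of the generator's likelihood, so perturbations lower its likelihood more sharply than for human text. A hedged sketch of such a score follows; the `log_likelihood` and `perturb` callables are stand-ins supplied by the caller, not components from the paper.

```python
import random

def curvature_score(text, log_likelihood, perturb, n=10, seed=0):
    """Original log-likelihood minus the mean log-likelihood of n perturbed
    copies. Larger values suggest the text sits near a likelihood peak,
    which is more typical of machine-generated text.

    log_likelihood: callable mapping a string to a float log-likelihood
    perturb: callable (text, rng) -> perturbed text (e.g. span rewrites)
    """
    rng = random.Random(seed)
    base = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text, rng)) for _ in range(n)]
    return base - sum(perturbed) / len(perturbed)
```

In practice the perturbation function would be a mask-and-refill model and the likelihood would come from a scoring LM; the toy version here only fixes the shape of the computation.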

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel AI-generated content detection system for LLM text — directly addresses output integrity and content authenticity by distinguishing machine-produced from human-written text, including robustness to unseen generators and paraphrase attacks.


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Datasets
GenDrift-30K
Applications
ai-generated text detection · academic integrity · authorship verification