
RepreGuard: Detecting LLM-Generated Text by Revealing Hidden Representation Patterns

Xin Chen 1,2, Junchao Wu 1, Shu Yang 3, Runzhe Zhan 1, Zeyu Wu 1, Ziyang Luo 4, Di Wang 3, Min Yang 2, Lidia S. Chao 1, Derek F. Wong 1


Published on arXiv: 2508.13152

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

RepreGuard achieves 94.92% AUROC averaged across in-distribution and out-of-distribution scenarios, outperforming all baselines while remaining robust to text size variation and mainstream evasion attacks.

RepreGuard

Novel technique introduced


Detecting content generated by large language models (LLMs) is crucial for preventing misuse and building trustworthy AI systems. Although existing detection methods perform well, their robustness in out-of-distribution (OOD) scenarios is still lacking. In this paper, we hypothesize that, compared to the features used by existing detection methods, the internal representations of LLMs contain more comprehensive and raw features that can more effectively capture and distinguish the statistical differences between LLM-generated texts (LGT) and human-written texts (HWT). We validated this hypothesis across different LLMs and observed significant differences in neural activation patterns when processing these two types of texts. Based on this, we propose RepreGuard, an efficient statistics-based detection method. Specifically, we first employ a surrogate model to collect representations of LGT and HWT, and extract a distinct activation feature direction that better identifies LGT. We then classify a text by calculating the projection score of its representations along this feature direction and comparing it with a precomputed threshold. Experimental results show that RepreGuard outperforms all baselines with an average AUROC of 94.92% across both in-distribution (ID) and OOD scenarios, while also demonstrating robust resilience to varying text sizes and mainstream attacks. Data and code are publicly available at: https://github.com/NLP2CT/RepreGuard


Key Contributions

  • Empirical validation that LLM internal neural activation patterns differ significantly between LLM-generated and human-written text, providing richer features than surface-level signals
  • RepreGuard: a statistics-based detector that extracts a discriminative activation feature direction from a surrogate model and classifies text via projection score comparison against a precomputed threshold
  • Achieves 94.92% average AUROC across both in-distribution and OOD scenarios while demonstrating robustness to varying text lengths and mainstream evasion attacks
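The projection-and-threshold step described above can be sketched as follows. The paper does not publish the exact direction-extraction procedure in this summary, so this is a minimal illustrative sketch assuming a difference-of-means feature direction over surrogate-model representations and a midpoint threshold; the function names and the synthetic data are hypothetical, not from the RepreGuard codebase.

```python
import numpy as np

def fit_feature_direction(lgt_reprs, hwt_reprs):
    """Unit-norm difference-of-means direction separating LGT from HWT
    representations (one simple choice of discriminative activation feature)."""
    direction = lgt_reprs.mean(axis=0) - hwt_reprs.mean(axis=0)
    return direction / np.linalg.norm(direction)

def calibrate_threshold(lgt_reprs, hwt_reprs, direction):
    """Midpoint between the mean projection scores of the two classes."""
    lgt_scores = lgt_reprs @ direction
    hwt_scores = hwt_reprs @ direction
    return float((lgt_scores.mean() + hwt_scores.mean()) / 2)

def classify(repr_vec, direction, threshold):
    """Project the text's representation onto the feature direction and
    compare the score against the precomputed threshold."""
    score = float(np.dot(repr_vec, direction))
    return "LGT" if score > threshold else "HWT"

# Synthetic stand-ins for surrogate-model hidden representations.
rng = np.random.default_rng(0)
dim = 8
lgt = rng.normal(1.0, 0.1, size=(50, dim))   # LLM-generated cluster
hwt = rng.normal(-1.0, 0.1, size=(50, dim))  # human-written cluster

direction = fit_feature_direction(lgt, hwt)
threshold = calibrate_threshold(lgt, hwt, direction)
```

In practice the representations would come from a surrogate LLM's hidden states rather than Gaussian clusters, and the direction/threshold would be fit on a held-out calibration set.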

🛡️ Threat Analysis

Output Integrity Attack

Directly proposes a novel AI-generated text detection method — distinguishing LLM-generated text from human-written text is explicitly an output integrity and content provenance problem covered by ML09.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
ai-generated text detection, llm content verification, academic integrity