
SearchLLM: Detecting LLM Paraphrased Text by Measuring the Similarity with Regeneration of the Candidate Source via Search Engine

Hoang-Quoc Nguyen-Son 1,2, Minh-Son Dao 1, Koji Zettsu 1,2

0 citations · 41 references · arXiv


Published on arXiv · 2601.16512

Output Integrity Attack · OWASP ML Top 10 — ML09

Key Finding

SearchLLM consistently improves the accuracy of existing LLM text detectors across multiple datasets and LLMs, including cases where paraphrased text closely mimics original human content.

SearchLLM

Novel technique introduced


With the advent of large language models (LLMs), it has become common practice for users to draft text and use LLMs to improve its quality through paraphrasing. However, this process can sometimes result in the loss or distortion of the original intended meaning. Because LLM-generated text reads so much like human writing, traditional detection methods often fail, particularly when text is paraphrased to closely mimic the original content. In response to these challenges, we propose a novel approach named SearchLLM, designed to identify LLM-paraphrased text by leveraging search engines to locate potential original source texts. By analyzing similarities between the input and regenerated versions of candidate sources, SearchLLM effectively distinguishes LLM-paraphrased content. SearchLLM is designed as a proxy layer, allowing seamless integration with existing detectors to improve their performance. Experimental results across various LLMs demonstrate that SearchLLM consistently improves the accuracy of recent detectors on LLM-paraphrased text that closely mimics the original content. Furthermore, SearchLLM helps these detectors withstand paraphrasing attacks.


Key Contributions

  • SearchLLM: a search-engine-augmented proxy layer that locates candidate original source texts and regenerates them to detect LLM paraphrasing via similarity shift analysis
  • Demonstrates that LLM-paraphrased text exhibits a characteristic positive similarity shift between regenerated and source text, unlike human-written text
  • Plug-and-play proxy design that enhances any existing detector and improves robustness against paraphrasing attacks
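The similarity-shift idea in the contributions above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the word-overlap similarity is a stand-in for whatever metric the paper uses, and in the full pipeline the candidate source would come from a search-engine query on the input while the regenerated text would come from prompting an LLM to paraphrase that candidate.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Word-level similarity in [0, 1] (stand-in for the paper's metric)."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

def similarity_shift(input_text: str, candidate: str, regenerated: str) -> float:
    """Positive shift: regenerating the candidate source moved it *closer*
    to the input -- the signature associated with LLM-paraphrased text."""
    return similarity(input_text, regenerated) - similarity(input_text, candidate)

def is_llm_paraphrased(input_text: str, candidate: str, regenerated: str,
                       threshold: float = 0.0) -> bool:
    # Flag the input when the regenerated candidate resembles it more than
    # the raw candidate already did (shift above the chosen threshold).
    return similarity_shift(input_text, candidate, regenerated) > threshold
```

In the proxy-layer design, an existing detector would consume this shift as an additional signal alongside its own score, which is what lets SearchLLM wrap any detector without retraining it.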

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel AI-generated content detection method (SearchLLM) targeting LLM-paraphrased text authenticity and provenance — a core output integrity concern. Also defends against paraphrasing attacks that evade existing detectors.


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Datasets
XSum
Applications
ai-generated text detection · llm paraphrase detection · content authenticity verification