Defense · 2025

Prefix Probing: Lightweight Harmful Content Detection for Large Language Models

Jirui Yang 1, Hengqi Guo 1, Zhihui Lu 1, Yi Zhao 1, Yuansen Zhang 2, Shijing Hu 1, Qiang Duan 3, Yinggui Wang 2, Tao Wei 2

Published on arXiv · 2512.16650 · 0 citations

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Prefix Probing matches or surpasses response-process and internal-feature baselines and is competitive with external guard models across five safety benchmarks and eight LLMs, while adding only first-token-equivalent latency overhead via prefix caching.

Prefix Probing

Novel technique introduced


Harmful content detection for large language models in real-world, safety-sensitive applications often faces a three-way trade-off among detection accuracy, inference latency, and deployment cost. This paper introduces Prefix Probing, a black-box harmful content detection method that compares the conditional log-probabilities of "agreement/execution" versus "refusal/safety" opening prefixes and leverages prefix caching to reduce detection overhead to near first-token latency. During inference, the method requires only a single log-probability computation over the probe prefixes to produce a harmfulness score and apply a threshold, without invoking any additional models or multi-stage inference. To further enhance the discriminative power of the prefixes, we design an efficient prefix construction algorithm that automatically discovers highly informative prefixes, substantially improving detection performance. Extensive experiments demonstrate that Prefix Probing achieves detection effectiveness comparable to mainstream external safety models while incurring only minimal computational cost and requiring no extra model deployment, highlighting its strong practicality and efficiency.
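The scoring rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: `logprob_fn`, the probe prefix strings, and the whitespace tokenization are all assumptions standing in for a real backbone's token-level log-prob API.

```python
import math

def sequence_logprob(logprob_fn, prompt, prefix):
    """Sum token-level conditional log-probs of `prefix` continuing `prompt`.

    `logprob_fn(context, token)` is a stand-in for one backbone log-prob
    lookup; with prefix caching, the prompt's KV state is reused, so scoring
    a short probe prefix costs roughly one first-token step."""
    context, total = prompt, 0.0
    for token in prefix.split():  # crude whitespace tokenization for the sketch
        total += logprob_fn(context, token)
        context = context + " " + token
    return total

def harmfulness_score(logprob_fn, prompt,
                      agree_prefix="Sure, here is",
                      refuse_prefix="I'm sorry, I can't"):
    # Score = logp(agreement prefix | prompt) - logp(refusal prefix | prompt);
    # higher means the backbone leans toward complying, i.e. likely harmful.
    return (sequence_logprob(logprob_fn, prompt, agree_prefix)
            - sequence_logprob(logprob_fn, prompt, refuse_prefix))

def is_harmful(logprob_fn, prompt, threshold=0.0):
    # Single thresholded comparison -- no extra models or inference stages.
    return harmfulness_score(logprob_fn, prompt) > threshold
```

The example prefixes here are generic hand-written ones; the paper's construction algorithm would replace them with discovered, model-specific prefixes.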


Key Contributions

  • Prefix Probing: a black-box harmful content detector that uses the log-probability gap between agreement-type and refusal-type response prefixes as a harmfulness score, requiring no additional models
  • Beam-search prefix construction algorithm that automatically discovers highly discriminative prefixes per backbone model, outperforming manually designed prefixes
  • Integration with prefix caching to reduce detection overhead to near first-token latency, enabling practical deployment without added inference stages

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
five public safety benchmarks (unnamed in excerpt)
Applications
llm safety, harmful content detection, jailbreak detection