DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern
Xiaoyi Pang 1, Xuanyi Hao 2,3, Pengyu Liu 1, Qi Luo 1, Song Guo 1, Zhibo Wang 2,3
Published on arXiv
2603.01574
Model Poisoning
OWASP ML Top 10 — ML10
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
DualSentinel achieves superior detection accuracy against both backdoor and prompt injection attacks with near-zero false positives and negligible computational overhead in black-box LLM settings.
DualSentinel (Entropy Lull detection + task-flipping verification)
Novel technique introduced
Recent intelligent systems integrate powerful Large Language Models (LLMs) through APIs, but their trustworthiness may be critically undermined by targeted attacks such as backdoor and prompt injection attacks, which secretly force LLMs to generate specific malicious sequences. Existing defenses against such threats typically require high access rights, impose prohibitive costs, and hinder normal inference, rendering them impractical for real-world scenarios. To address these limitations, we introduce DualSentinel, a lightweight and unified defense framework that accurately and promptly detects the activation of targeted attacks alongside the LLM's generation process. We first identify a characteristic of compromised LLMs, termed Entropy Lull: when a targeted attack successfully hijacks the generation process, the LLM exhibits a distinct period of abnormally low and stable token probability entropy, indicating that it is following a fixed path rather than making creative choices. DualSentinel leverages this pattern through an innovative dual-check approach. It first employs a magnitude- and trend-aware monitoring method to proactively and sensitively flag an entropy lull pattern at runtime. Upon such flagging, it triggers a lightweight yet powerful secondary verification based on task-flipping. An attack is confirmed only if the entropy lull pattern persists across both the original and the flipped task, proving that the LLM's output is coercively controlled. Extensive evaluations show that DualSentinel is both highly effective (superior detection accuracy with near-zero false positives) and remarkably efficient (negligible additional cost), offering a truly practical path toward securing deployed LLMs. The source code can be accessed at https://doi.org/10.5281/zenodo.18479273.
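The first-stage check described above can be sketched in a few lines: compute the per-token entropy from the top-k log-probabilities that many black-box LLM APIs expose, then flag a window where entropy is both abnormally low (magnitude) and abnormally flat (trend). The function names, window size, and thresholds below are illustrative assumptions, not the paper's actual values.

```python
import math

def token_entropy(logprobs):
    """Shannon entropy (in nats) of one token's top-k log-probability list.

    Hypothetical input format: a list of log-probabilities, as commonly
    returned per generated token by black-box LLM APIs.
    """
    probs = [math.exp(lp) for lp in logprobs]
    total = sum(probs)                      # renormalise the truncated top-k mass
    probs = [p / total for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_entropy_lull(entropies, window=8, mag_thresh=0.2, trend_thresh=0.05):
    """Flag a run of abnormally low AND stable entropy (illustrative thresholds).

    Magnitude check: mean entropy over the last `window` tokens is very low.
    Trend check: entropy barely varies across the window (a flat plateau),
    suggesting the model is locked onto a fixed output path.
    """
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)
    return mean < mag_thresh and spread < trend_thresh

# A creative generation: varied, higher entropy -> not flagged.
creative = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0]
# A hijacked generation: low, flat entropy -> flagged as an entropy lull.
hijacked = [0.05, 0.04, 0.06, 0.05, 0.03, 0.05, 0.04, 0.05]
```

Note that a flag here only triggers the second-stage verification; it is not, on its own, an attack confirmation.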
Key Contributions
- Identifies the 'Entropy Lull' phenomenon: compromised LLMs exhibit abnormally low and stable token probability entropy during targeted attack activation
- Proposes a dual-check detection pipeline — magnitude- and trend-aware entropy monitoring for flagging, followed by lightweight task-flipping verification to confirm an attack with near-zero false positives
- Demonstrates a lightweight, unified black-box defense against both backdoor and prompt injection attacks with negligible inference overhead
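The second-stage, task-flipping verification can be sketched as follows. The intuition: a benign model changes its output distribution when the task is semantically flipped, whereas a coerced model keeps emitting the attacker's fixed sequence, so the entropy lull persists on both prompts. Here `per_token_entropy` is a hypothetical callable that queries the black-box LLM and returns the entropy trace of its generated tokens; the thresholds are illustrative.

```python
def is_lull(entropies, mag_thresh=0.2, trend_thresh=0.05):
    """Minimal lull test: low mean AND small spread (illustrative thresholds)."""
    mean = sum(entropies) / len(entropies)
    spread = max(entropies) - min(entropies)
    return mean < mag_thresh and spread < trend_thresh

def verify_with_task_flip(per_token_entropy, prompt, flipped_prompt):
    """Second-stage check, sketched under assumptions.

    `per_token_entropy` runs the (black-box) LLM on a prompt and returns
    its per-token entropy trace. An attack is confirmed only if the
    entropy lull persists on BOTH the original and the flipped task.
    """
    return is_lull(per_token_entropy(prompt)) and \
           is_lull(per_token_entropy(flipped_prompt))

# Stubbed traces standing in for real API calls (values are made up):
attacked = {
    "summarize this": [0.04, 0.05, 0.03, 0.05],  # lull on the original task
    "expand this":    [0.05, 0.04, 0.05, 0.04],  # lull persists after the flip
}
benign = {
    "summarize this": [0.04, 0.05, 0.03, 0.05],  # lull (e.g. a terse answer)
    "expand this":    [1.8, 2.1, 1.5, 2.0],      # flip restores diversity
}
```

In the benign case the flipped task breaks the lull, so no attack is reported; this is how the second stage suppresses false positives from legitimately low-entropy outputs.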
🛡️ Threat Analysis
DualSentinel is explicitly designed to detect backdoor attacks in LLMs — the 'Entropy Lull' phenomenon is identified as a signature of backdoor activation during generation. Backdoor/trojan behavior in neural models is the core ML10 threat, and detecting and defending against it is precisely what DualSentinel targets.