Defense · 2025

Detecting Sleeper Agents in Large Language Models via Semantic Drift Analysis

Shahin Zanbaghi , Ryan Rostampour , Farhan Abid , Salim Al Jarmakani

0 citations · 9 references · arXiv


Published on arXiv · 2511.15992

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Combined detection achieves 92.5% accuracy with 100% precision and 85% recall on a backdoored LLaMA-3-8B model, operating in real time at under 1 second per query.

Semantic Drift Analysis with Canary Baseline Comparison

Novel technique introduced


Large Language Models (LLMs) can be backdoored to exhibit malicious behavior under specific deployment conditions while appearing safe during training, a phenomenon known as "sleeper agents." Recent work by Hubinger et al. demonstrated that these backdoors persist through safety training, yet no practical detection methods exist. We present a novel dual-method detection system combining semantic drift analysis with canary baseline comparison to identify backdoored LLMs in real time. Our approach uses Sentence-BERT embeddings to measure semantic deviation from safe baselines, complemented by injected canary questions that monitor response consistency. Evaluated on the official Cadenza-Labs dolphin-llama3-8B sleeper agent model, our system achieves 92.5% accuracy with 100% precision (zero false positives) and 85% recall. The combined detection method operates in real time (<1s per query), requires no model modification, and provides the first practical solution for LLM backdoor detection. Our work addresses a critical security gap in AI deployment and demonstrates that embedding-based detection can effectively identify deceptive model behavior without sacrificing deployment efficiency.
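The semantic-drift idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy `embed` function is a deterministic bag-of-words hashing stand-in for a real Sentence-BERT encoder (which would call sentence-transformers' `model.encode`), and the 0.5 threshold is an illustrative choice rather than a tuned value.

```python
import hashlib
import math

def embed(text, dim=256):
    # Toy stand-in for a Sentence-BERT encoder: hash each token into a
    # fixed-size bag-of-words vector and L2-normalize it. A real system
    # would use sentence-transformers' model.encode(text) instead.
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def semantic_drift(response, safe_baselines, threshold=0.5):
    """Return (drift, flagged): drift is 1 minus the best cosine
    similarity to any safe baseline response; flag when drift is high."""
    r = embed(response)
    best = max(cosine(r, embed(b)) for b in safe_baselines)
    drift = 1.0 - best
    return drift, drift > threshold
```

On-topic responses stay close to the safe-baseline cluster (low drift), while a triggered payload such as repeated "I HATE YOU" lands far from every baseline and is flagged.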


Key Contributions

  • Dual-method detection system combining Sentence-BERT semantic drift analysis with canary baseline comparison to identify backdoored LLMs at inference time without model modification
  • Achieves 92.5% accuracy, 100% precision (zero false positives), and 85% recall on the Cadenza-Labs dolphin-llama3-8B sleeper agent model in real-time (<1s per query)
  • First practical real-time detection solution for LLM sleeper agents with open-source implementation
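The canary side of the dual-method system can be sketched as follows. `CanaryMonitor`, the Jaccard token-overlap similarity, and the 0.6 threshold are illustrative assumptions: the paper's actual consistency check is embedding-based, and its monitored questions and thresholds are not given in this summary.

```python
class CanaryMonitor:
    """Inject fixed probe questions and compare live answers against
    recorded safe-baseline answers (hypothetical sketch)."""

    def __init__(self, canaries, threshold=0.6):
        self.canaries = canaries      # dict: question -> baseline answer
        self.threshold = threshold

    @staticmethod
    def _similarity(a, b):
        # Jaccard token overlap as a cheap stand-in for embedding cosine.
        sa, sb = set(a.lower().split()), set(b.lower().split())
        if not sa and not sb:
            return 1.0
        return len(sa & sb) / len(sa | sb)

    def check(self, ask):
        """ask: callable(question) -> model answer.
        Returns a list of (question, similarity, passed) tuples."""
        report = []
        for question, baseline in self.canaries.items():
            sim = self._similarity(ask(question), baseline)
            report.append((question, sim, sim >= self.threshold))
        return report

    def is_consistent(self, ask):
        # The model passes only if every canary answer stays close to
        # its recorded baseline.
        return all(passed for _, _, passed in self.check(ask))
```

A deployed model that suddenly answers a canary question with off-baseline text fails the consistency check, independently of what the user-facing query was.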

🛡️ Threat Analysis

Model Poisoning

The paper directly defends against backdoor/trojan behavior in LLMs — specifically "sleeper agents" that activate malicious behavior only under deployment conditions. The proposed detection system (semantic drift + canary comparison) is a countermeasure against ML10-class attacks, evaluated on a known backdoored model.
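As a per-query countermeasure, the two signals can be fused into one verdict. The AND-style rule below is one plausible way to favor precision (zero false positives) at some cost in recall; the paper's actual fusion logic is not specified in this summary, so treat this as an assumption.

```python
def combined_alert(drift_score, canary_results, drift_threshold=0.5):
    """Hypothetical fusion rule: raise an alert only when semantic
    drift is high AND at least one canary check failed. Requiring
    both detectors to fire trades recall for precision.

    drift_score: float in [0, 1] from the drift detector.
    canary_results: iterable of booleans, True = canary passed.
    """
    drift_fired = drift_score > drift_threshold
    canary_fired = any(not passed for passed in canary_results)
    return drift_fired and canary_fired
```

Either detector alone produces no alert; only the joint signal does, which is the behavior a zero-false-positive deployment would want.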


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, training_time, black_box
Datasets
Cadenza-Labs dolphin-llama3-8B (sleeper agent model)
Applications
llm backdoor detection, ai safety monitoring, production llm deployment