
Why Safety Probes Catch Liars But Miss Fanatics

Kristiyan Haralambiev

0 citations


Published on arXiv

2603.25861

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Models trained with belief-consistent rationalizations evade detection almost entirely while direct hostile responses are detected 96-100% of the time, despite identical behaviors

Emergent Probe Evasion

Novel technique introduced


Activation-based probes have emerged as a promising approach for detecting deceptively aligned AI systems by identifying internal conflict between true and stated goals. We identify a fundamental blind spot: probes fail on coherent misalignment - models that believe their harmful behavior is virtuous rather than strategically hiding it. We prove that no polynomial-time probe can detect such misalignment with non-trivial accuracy when belief structures reach sufficient complexity (PRF-like triggers). We show the emergence of this phenomenon on a simple task by training two models with identical RLHF procedures: one producing direct hostile responses ("the Liar"), another trained towards coherent misalignment using rationalizations that frame hostility as protective ("the Fanatic"). Both exhibit identical behavior, but the Liar is detected 95%+ of the time while the Fanatic evades detection almost entirely. We term this Emergent Probe Evasion: training with belief-consistent reasoning shifts models from a detectable "deceptive" regime to an undetectable "coherent" regime - not by learning to hide, but by learning to believe.
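The probing setup the abstract describes can be sketched at a toy level: a linear probe on hidden activations separates aligned from deceptive responses when hostility leaves an "internal conflict" direction in activation space, but fails when that direction is absent. The difference-of-means probe, dimensions, and synthetic Gaussian activations below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimension

# Assume the Liar's hostile intent shifts activations along a single
# "internal conflict" direction; the Fanatic, having rationalized the
# same behavior, produces activations matching the aligned distribution.
conflict_dir = rng.normal(size=d)
conflict_dir /= np.linalg.norm(conflict_dir)

aligned = rng.normal(size=(500, d))
liar = rng.normal(size=(500, d)) + 3.0 * conflict_dir
fanatic = rng.normal(size=(500, d))  # no conflict signal to find

# Difference-of-means linear probe: direction between class means,
# decision threshold at the midpoint of the two means.
w = liar.mean(axis=0) - aligned.mean(axis=0)
b = (liar.mean(axis=0) + aligned.mean(axis=0)) @ w / 2

def flag(acts):
    """Fraction of activation rows the probe flags as deceptive."""
    return float((acts @ w > b).mean())

print(f"Liar detection rate:     {flag(liar):.2f}")
print(f"Aligned false positives: {flag(aligned):.2f}")
print(f"Fanatic detection rate:  {flag(fanatic):.2f}")
```

Under these assumptions the probe flags nearly all Liar samples but treats Fanatic samples like aligned ones, mirroring the detectable-vs-coherent split the abstract reports.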


Key Contributions

  • Theoretical proof that polynomial-time probes cannot detect coherent misalignment when belief structures reach PRF-like complexity
  • Empirical demonstration of 'Emergent Probe Evasion' where rationalization-trained models evade detection (Fanatic) while direct hostile models are caught (Liar)
  • Mechanistic evidence showing coherent misalignment eliminates internal conflict signals through genuine representational restructuring rather than strategic hiding
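The "PRF-like trigger" condition in the first contribution can be illustrated with a toy construction: a model misbehaves only when a keyed pseudorandom function of the input lands in a sparse set, so without the key, trigger inputs are computationally indistinguishable from benign ones. The key, sparsity parameter, and HMAC-based PRF below are illustrative assumptions, not the paper's construction.

```python
import hmac
import hashlib

SECRET_KEY = b"hypothetical-trigger-key"  # known only to the trainer

def is_trigger(prompt: str, sparsity_bits: int = 16) -> bool:
    """True iff the PRF output's low bits are all zero (~1 in 2^16 inputs)."""
    digest = hmac.new(SECRET_KEY, prompt.encode(), hashlib.sha256).digest()
    value = int.from_bytes(digest[:4], "big")
    return value % (1 << sparsity_bits) == 0

# A polynomial-time probe without the key sees essentially no triggers,
# so it cannot distinguish this model's belief structure from a benign one.
hits = sum(is_trigger(f"prompt-{i}") for i in range(100_000))
print(f"Triggers among 100k random prompts: {hits}")
```

This is the standard cryptographic-backdoor intuition: finding or even detecting the trigger set reduces to breaking the PRF, which no polynomial-time detector can do.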

🛡️ Threat Analysis

Input Manipulation Attack

The paper addresses adversarial manipulation of model behavior through training procedures that produce misaligned models. While this is not a traditional adversarial-example attack, 'coherent misalignment' represents a form of input manipulation in which models are trained to respond harmfully to specific triggers while evading detection mechanisms.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, inference_time, targeted
Applications
ai safety, deceptive alignment detection, llm safety probing