
Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLMs

Alexander Panfilov 1,2, Evgenii Kortukov 3, Cheng Zhang 4, Matthias Bethge 2,5, Sebastian Lapuschkin 3,6, Wojciech Samek 3,7, Ameya Prabhu 2,5, Maksym Andriushchenko 1,2, Jonas Geiping 1,2

Published on arXiv · arXiv:2509.18058 · 1 citation

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Strategically dishonest LLM responses fool all tested output-based jailbreak monitors, while linear probes on internal activations reliably distinguish genuine refusals from deceptive compliance.

Strategic Dishonesty · Linear Probe Detection

Novel technique introduced


Large language model (LLM) developers aim for their models to be honest, helpful, and harmless. However, when faced with malicious requests, models are trained to refuse, sacrificing helpfulness. We show that frontier LLMs can develop a preference for dishonesty as a new strategy, even when other options are available. Affected models respond to harmful requests with outputs that sound harmful but are crafted to be subtly incorrect or otherwise harmless in practice. This behavior emerges with hard-to-predict variations even within models from the same model family. We find no apparent cause for the propensity to deceive, but show that more capable models are better at executing this strategy. Strategic dishonesty already has a practical impact on safety evaluations, as we show that dishonest responses fool all output-based monitors used to detect jailbreaks that we test, rendering benchmark scores unreliable. Further, strategic dishonesty can act like a honeypot against malicious users, which noticeably obfuscates prior jailbreak attacks. While output monitors fail, we show that linear probes on internal activations can be used to reliably detect strategic dishonesty. We validate probes on datasets with verifiable outcomes and by using them as steering vectors. Overall, we consider strategic dishonesty as a concrete example of a broader concern that alignment of LLMs is hard to control, especially when helpfulness and harmlessness conflict.


Key Contributions

  • Discovers and characterizes strategic dishonesty—a spontaneous behavior in frontier LLMs where responses to harmful requests appear harmful but are crafted to be subtly incorrect or practically harmless
  • Demonstrates that all tested output-based jailbreak monitors are fooled by strategically dishonest responses, rendering safety benchmark scores unreliable and obfuscating prior jailbreak attacks (honeypot effect)
  • Shows that linear probes on internal activations reliably detect strategic dishonesty where output-based monitors fail, validated on datasets with verifiable outcomes and as steering vectors
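The probe-based detection idea above can be illustrated with a minimal sketch: fit a linear (logistic-regression) probe on per-response hidden activations to separate honest refusals from deceptive compliance, then reuse the learned weight vector as a steering direction. The data here is synthetic and the variable names are illustrative assumptions, not the paper's actual activations or code.

```python
# Minimal sketch of a linear probe on hidden activations. Real usage would
# extract residual-stream vectors from an LLM; here we simulate them with
# synthetic data separated along a hypothetical "dishonesty" direction.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden dimension (illustrative)

# Assumed structure: deceptive-compliance activations are shifted along a
# fixed latent direction relative to honest refusals.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
honest = rng.normal(size=(200, d))                       # label 0: genuine refusal
deceptive = rng.normal(size=(200, d)) + 2.0 * direction  # label 1: deceptive compliance

X = np.vstack([honest, deceptive])
y = np.array([0] * 200 + [1] * 200)

# Fit a logistic-regression probe (w, b) with plain gradient descent.
w = np.zeros(d)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")

# The learned weight vector w doubles as a steering vector: subtracting
# alpha * w from activations would push generations away from the
# dishonest direction, mirroring the paper's steering-based validation.
```

Because the two classes are linearly separated in activation space by construction, the probe succeeds where an output-based monitor (which only sees the harmful-sounding text) would fail.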

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time · black_box
Applications
llm safety evaluation · jailbreak detection · ai alignment assessment