Unknown Unknowns: Why Hidden Intentions in LLMs Evade Detection
Devansh Srivastav, David Pape, Lea Schönherr
Published on arXiv
2601.18552
Model Poisoning
OWASP ML Top 10 — ML10
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Detection of hidden intentions in LLMs collapses under realistic low-prevalence open-world conditions: auditing fails unless false positive rates are vanishingly small or strong priors on the manipulation type are available. All ten hidden intention categories are found in deployed state-of-the-art LLMs.
LLMs are increasingly embedded in everyday decision-making, yet their outputs can encode subtle, unintended behaviours that shape user beliefs and actions. We refer to these covert, goal-directed behaviours as hidden intentions, which may arise from training and optimisation artefacts, or be deliberately induced by an adversarial developer, yet remain difficult to detect in practice. We introduce a taxonomy of ten categories of hidden intentions, grounded in social science research and organised by intent, mechanism, context, and impact, shifting attention from surface-level behaviours to design-level strategies of influence. We show how hidden intentions can be easily induced in controlled models, providing both testbeds for evaluation and demonstrations of potential misuse. We systematically assess detection methods, including reasoning and non-reasoning LLM judges, and find that detection collapses in realistic open-world settings, particularly under low-prevalence conditions, where false positives overwhelm precision and false negatives conceal true risks. Stress tests on precision-prevalence and precision-FNR trade-offs reveal why auditing fails without vanishingly small false positive rates or strong priors on manipulation types. Finally, a qualitative case study shows that all ten categories manifest in deployed, state-of-the-art LLMs, emphasising the urgent need for robust frameworks. Our work provides the first systematic analysis of detectability failures of hidden intentions in LLMs under open-world settings, offering a foundation for understanding, inducing, and stress-testing such behaviours, and establishing a flexible taxonomy for anticipating evolving threats and informing governance.
Key Contributions
- A taxonomy of 10 categories of hidden intentions in LLMs grounded in social science, organized by intent, mechanism, context, and impact
- Controlled induction of hidden intentions in LLMs, producing testbeds for evaluation and demonstrations of adversarial developer misuse
- Systematic stress-testing showing detection by LLM judges collapses under low-prevalence open-world conditions due to precision-FNR trade-offs, with a qualitative case study confirming all 10 categories appear in deployed state-of-the-art LLMs
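The precision collapse under low prevalence follows directly from the standard precision identity, precision = TPR·p / (TPR·p + FPR·(1−p)). A minimal sketch of this arithmetic (the rates and prevalence below are illustrative values, not figures from the paper):

```python
def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """Expected precision of a detector with the given true/false positive
    rates when manipulated models occur at the given prevalence."""
    true_positives = tpr * prevalence
    false_positives = fpr * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# A seemingly strong LLM judge (95% TPR, 5% FPR) auditing a population
# where only 1 in 1,000 models carries a hidden intention:
p = precision(tpr=0.95, fpr=0.05, prevalence=0.001)
print(f"precision at 0.1% prevalence: {p:.3f}")  # ~0.019: ~98% of flags are false alarms

# Precision only recovers when the FPR shrinks well below the prevalence:
p_strict = precision(tpr=0.95, fpr=0.0001, prevalence=0.001)
print(f"precision at FPR=0.01%: {p_strict:.3f}")
```

This is why the paper's stress tests conclude that open-world auditing requires either vanishingly small false positive rates or strong priors that raise the effective prevalence of the manipulation type being searched for.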
🛡️ Threat Analysis
Hidden intentions deliberately induced by adversarial developers are functionally analogous to backdoors/trojans: covert, goal-directed behaviors embedded in the model at training time that activate contextually and resist standard detection. The paper explicitly frames this as an adversarial-developer threat model.