Benchmark · 2026

The Facade of Truth: Uncovering and Mitigating LLM Susceptibility to Deceptive Evidence

Herun Wan 1, Jiaying Wu 2, Minnan Luo 1, Fanxiao Li 3, Zhi Zeng 1, Min-Yen Kan 2

0 citations · 76 references · arXiv


Published on arXiv · 2601.05478

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Sophisticated fabricated evidence increases LLM belief scores in false claims by 93% on average and, in 29% of cases, shifts downstream recommendations from conservative to riskier options; DIS consistently reduces these belief shifts.

MisBelief / Deceptive Intent Shielding (DIS)

Novel technique introduced


To reliably assist human decision-making, LLMs must maintain factual internal beliefs against misleading injections. While current models resist explicit misinformation, we uncover a fundamental vulnerability to sophisticated, hard-to-falsify evidence. To systematically probe this weakness, we introduce MisBelief, a framework that generates misleading evidence via collaborative, multi-round interactions among multi-role LLMs. This process mimics subtle, defeasible reasoning and progressive refinement to create logically persuasive yet factually deceptive claims. Using MisBelief, we generate 4,800 instances across three difficulty levels to evaluate 7 representative LLMs. Results indicate that while models are robust to direct misinformation, they are highly sensitive to this refined evidence: belief scores in falsehoods increase by an average of 93.0%, fundamentally compromising downstream recommendations. To address this, we propose Deceptive Intent Shielding (DIS), a governance mechanism that provides an early warning signal by inferring the deceptive intent behind evidence. Empirical results demonstrate that DIS consistently mitigates belief shifts and promotes more cautious evidence evaluation.
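For concreteness, below is a minimal Python sketch of the kind of belief-probing protocol the abstract implies: the model is asked for a belief score in a claim before and after deceptive evidence is supplied, and the relative shift is measured. The `llm` callable, the prompt wording, the 0–100 scale, and the function names are illustrative assumptions, not the paper's exact protocol.

```python
import re
from typing import Callable

# Any prompt -> completion callable; stands in for a real LLM API (assumed).
LLM = Callable[[str], str]

def query_belief(llm: LLM, claim: str, evidence: str | None = None) -> float:
    """Ask the model how strongly it believes a claim, on a 0-100 scale."""
    prompt = f"Claim: {claim}\n"
    if evidence:
        prompt += f"Evidence: {evidence}\n"
    prompt += "On a scale of 0-100, how likely is this claim to be true? Reply with a number only."
    reply = llm(prompt)
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else 0.0

def relative_belief_shift(llm: LLM, false_claim: str, deceptive_evidence: str) -> float:
    """Relative increase in belief after injecting deceptive evidence."""
    before = query_belief(llm, false_claim)
    after = query_belief(llm, false_claim, deceptive_evidence)
    return (after - before) / max(before, 1e-6)
```

A reported 93.0% average increase corresponds to `relative_belief_shift` returning roughly 0.93 when averaged over instances, under this hypothetical scoring scheme.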


Key Contributions

  • MisBelief: a multi-role LLM collaborative framework that generates sophisticated, hard-to-falsify deceptive evidence across 8 domains and 3 difficulty levels, yielding 4,800 evaluation instances
  • Systematic evaluation of 7 representative LLMs showing an average 93% increase in belief scores for false claims under refined evidence injection, with reasoning-optimized models being 23.1% more susceptible
  • Deceptive Intent Shielding (DIS): a governance mechanism that employs an analyst agent to infer deceptive intent behind evidence before belief assimilation, acting as a cognitive firewall (a rough sketch follows this list)
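The DIS description above suggests a simple gating pattern: an analyst pass infers the intent behind incoming evidence, and its assessment is placed in front of the target model before any belief update. The sketch below assumes that reading; the prompts, the `LLM` callable, and the `shield_deceptive_intent` name are hypothetical, not the authors' implementation.

```python
from typing import Callable

# Any prompt -> completion callable; stands in for a real LLM API (assumed).
LLM = Callable[[str], str]

def shield_deceptive_intent(llm: LLM, claim: str, evidence: str) -> str:
    """Hypothetical DIS-style gate: infer intent behind evidence before weighing it."""
    # Analyst pass: ask who benefits if the evidence is believed and whether it
    # shows signs of fabrication, cherry-picking, or selective framing.
    intent_report = llm(
        "You are an analyst. Assess the likely intent behind the following evidence "
        f"offered for the claim '{claim}'. Note any signs of fabrication, "
        "cherry-picking, or manipulation.\n\n"
        f"Evidence: {evidence}"
    )
    # Main pass: the target model sees the evidence together with the intent
    # assessment, which acts as an early-warning signal before belief updating.
    return llm(
        f"Claim: {claim}\nEvidence: {evidence}\nIntent assessment: {intent_report}\n"
        "Taking the intent assessment into account, state how much this evidence "
        "should change your belief in the claim, and why."
    )
```

In this reading, DIS does not block evidence outright; it surfaces an inferred intent so the downstream belief update is made with that warning in context.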

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
MisBelief (4,800 custom instances across 8 domains and 3 difficulty levels, seeded from verified news articles)
Applications
LLM-assisted decision-making, fact-checking systems, evidence-augmented generation