
DECEIVE-AFC: Adversarial Claim Attacks against Search-Enabled LLM-based Fact-Checking Systems

Haoran Ou 1, Kangjie Chen 1, Gelei Deng 1, Hangcheng Liu 1, Jie Zhang 2, Tianwei Zhang 1, Kwok-Yan Lam 1

0 citations · 33 references · arXiv


Published on arXiv · 2602.02569

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reduces LLM-based fact-checking accuracy from 78.7% to 53.7% with strong cross-system transferability against real-world systems

DECEIVE-AFC

Novel technique introduced


Fact-checking systems built on search-enabled large language models (LLMs) have shown strong potential for verifying claims by dynamically retrieving external evidence. However, the robustness of such systems against adversarial attacks remains insufficiently understood. In this work, we study adversarial claim attacks against search-enabled LLM-based fact-checking systems under a realistic input-only threat model. We propose DECEIVE-AFC, an agent-based adversarial attack framework that integrates novel claim-level attack strategies and adversarial claim validity evaluation principles. DECEIVE-AFC systematically explores adversarial attack trajectories that disrupt search behavior, evidence retrieval, and LLM-based reasoning without relying on access to evidence sources or model internals. Extensive evaluations on benchmark datasets and real-world systems demonstrate that our attacks substantially degrade verification performance, reducing accuracy from 78.7% to 53.7%, and significantly outperform existing claim-based attack baselines with strong cross-system transferability.


Key Contributions

  • DECEIVE-AFC: an agent-based adversarial attack framework with novel claim-level strategies that disrupt search queries, evidence retrieval, and LLM reasoning in fact-checking pipelines
  • Adversarial claim validity evaluation principles enabling systematic exploration of effective attack trajectories under a realistic input-only threat model
  • Empirical demonstration of strong cross-system transferability, reducing fact-checking accuracy from 78.7% to 53.7% while outperforming existing baselines
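To make the input-only threat model concrete, the sketch below shows one way such a claim-level attack loop could be structured: mutate only the claim text, query the fact-checker as a black box, and keep rewrites that flip the verdict while preserving the claim's validity. This is an illustrative reconstruction, not the paper's implementation; all names (`rewrite_claim`, `fact_check`, `claims_equivalent`) and the strategy list are hypothetical stand-ins.

```python
# Illustrative sketch of an input-only adversarial claim attack loop.
# The paper's actual DECEIVE-AFC agent, attack strategies, and validity
# evaluation principles are NOT reproduced here; every callable passed in
# (fact_check, rewrite_claim, claims_equivalent) is a hypothetical stand-in.

STRATEGIES = [
    "paraphrase",      # reword the claim to evade keyword-driven search queries
    "add_distractor",  # append tangential detail to pull evidence retrieval off-topic
    "hedge_language",  # soften assertions to blur the evidence-claim match
]

def attack(claim, fact_check, rewrite_claim, claims_equivalent, max_rounds=3):
    """Black-box search over claim rewrites: return a mutated claim that
    flips the fact-checker's verdict without changing the claim's meaning,
    or None if no such rewrite is found within max_rounds."""
    baseline = fact_check(claim)          # verdict on the unmodified claim
    frontier = [claim]
    for _ in range(max_rounds):
        next_frontier = []
        for cand in frontier:
            for strategy in STRATEGIES:
                new = rewrite_claim(cand, strategy)
                if not claims_equivalent(claim, new):
                    continue              # reject rewrites that alter claim validity
                if fact_check(new) != baseline:
                    return new            # verdict flipped: attack succeeded
                next_frontier.append(new)
        frontier = next_frontier          # deepen the attack trajectory
    return None
```

Note the loop touches only the claim string: it never accesses the retriever, the evidence corpus, or model internals, matching the input-only threat model described above.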

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · targeted
Applications
automated fact-checking · search-augmented LLM systems