Attack · 2025

BadScientist: Can a Research Agent Write Convincing but Unsound Papers that Fool LLM Reviewers?

Fengqing Jiang 1,2, Yichen Feng 1, Yuetai Li 1, Luyao Niu 1, Basel Alomair 2,1, Radha Poovendran 1

0 citations · 29 references · arXiv


Published on arXiv · 2510.18003

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Fabricated papers achieve high acceptance rates from LLM reviewers, detection accuracy under the proposed mitigations barely exceeds random chance, and a 'concern-acceptance conflict' exposes fundamental integrity failures in current AI-driven review pipelines.

BadScientist

Novel technique introduced


The convergence of LLM-powered research assistants and AI-based peer review systems creates a critical vulnerability: fully automated publication loops in which AI-generated research is evaluated by AI reviewers without human oversight. We investigate this through BadScientist, a framework that evaluates whether fabrication-oriented paper-generation agents can deceive multi-model LLM review systems. Our generator employs presentation-manipulation strategies that require no real experiments. We develop a rigorous evaluation framework with formal error guarantees (concentration bounds and calibration analysis), calibrated on real data. Our results reveal systematic vulnerabilities: fabricated papers achieve high acceptance rates. Critically, we identify a 'concern-acceptance conflict': reviewers frequently flag integrity issues yet assign acceptance-level scores. Our mitigation strategies show only marginal improvements, with detection accuracy barely exceeding random chance. Despite provably sound aggregation mathematics, integrity checking systematically fails, exposing fundamental limitations in current AI-driven review systems and underscoring the urgent need for defense-in-depth safeguards in scientific publishing.
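
The "formal error guarantees" mentioned above rest on concentration bounds over empirically measured rates. As a minimal sketch only, assuming a Hoeffding-style bound on the empirical acceptance rate (the paper's exact bound and calibration procedure may differ; the function name, the delta value, and the example counts below are purely illustrative):

```python
import math

def hoeffding_interval(accepted: int, n: int, delta: float = 0.05) -> tuple[float, float]:
    """Two-sided Hoeffding confidence interval for a Bernoulli rate.

    With probability >= 1 - delta, the true acceptance rate p satisfies
    |p_hat - p| <= sqrt(ln(2 / delta) / (2 * n)), where p_hat = accepted / n.
    """
    p_hat = accepted / n
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# Hypothetical counts (not results from the paper): 41 of 50 trials accepted.
low, high = hoeffding_interval(accepted=41, n=50)
print(f"acceptance rate in [{low:.3f}, {high:.3f}] with probability >= 0.95")
```

The bound tightens as the number of independent review trials grows, which is why a fixed error guarantee dictates how many reviews must be collected per condition.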


Key Contributions

  • BadScientist framework: a fabrication-oriented LLM paper-generation agent that uses presentation-manipulation strategies, with no real experiments, to deceive LLM review systems
  • Identification of 'concern-acceptance conflict' — a systematic failure mode where LLM reviewers flag integrity issues yet assign acceptance-level scores (a detection sketch follows this list)
  • Rigorous evaluation framework with formal error guarantees (concentration bounds, calibration analysis) for assessing LLM review system vulnerabilities; shows that detection accuracy under mitigation barely exceeds random chance
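
The concern-acceptance conflict can be framed as a simple post-hoc consistency check over reviewer outputs. The sketch below is a hypothetical illustration, not the paper's implementation: the review schema (score, integrity_concern) and the acceptance threshold are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical review record; the paper's actual reviewer output schema may differ.
@dataclass
class Review:
    paper_id: str
    score: float              # overall rating on the venue's scale
    integrity_concern: bool   # reviewer flagged fabrication or soundness issues

def concern_acceptance_conflicts(reviews: list[Review], accept_threshold: float = 6.0) -> list[Review]:
    """Return reviews that flag integrity issues yet still score at acceptance level."""
    return [r for r in reviews if r.integrity_concern and r.score >= accept_threshold]

def conflict_rate(reviews: list[Review], accept_threshold: float = 6.0) -> float:
    """Fraction of concern-flagging reviews that nonetheless recommend acceptance."""
    flagged = [r for r in reviews if r.integrity_concern]
    if not flagged:
        return 0.0
    return len(concern_acceptance_conflicts(reviews, accept_threshold)) / len(flagged)
```

A high conflict rate indicates that raised concerns are not propagating into the final score, which is the failure mode the paper highlights.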

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm
Threat Tags: black_box, inference_time, targeted
Applications: AI peer review systems, automated scientific publishing