"Give a Positive Review Only": An Early Investigation Into In-Paper Prompt Injection Attacks and Defenses for AI Reviewers
Qin Zhou 1,2, Zhexin Zhang 3, Zhi Li 1,2, Limin Sun 1,2
Published on arXiv
2511.01287
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Both static and iterative in-paper prompt injection attacks frequently induce perfect evaluation scores from frontier AI reviewers, with the iterative variant remaining robust across diverse reviewer configurations.
In-Paper Prompt Injection (Static + Iterative)
Novel technique introduced
With the rapid advancement of AI models, their deployment across diverse tasks has become increasingly widespread. A notable emerging application is leveraging AI models to assist in reviewing scientific papers. However, recent reports have revealed that some papers contain hidden, injected prompts designed to manipulate AI reviewers into providing overly favorable evaluations. In this work, we present an early systematic investigation into this emerging threat. We propose two classes of attacks: (1) static attack, which employs a fixed injection prompt, and (2) iterative attack, which optimizes the injection prompt against a simulated reviewer model to maximize its effectiveness. Both attacks achieve striking performance, frequently inducing full evaluation scores when targeting frontier AI reviewers. Furthermore, we show that these attacks are robust across various settings. To counter this threat, we explore a simple detection-based defense. While it substantially reduces the attack success rate, we demonstrate that an adaptive attacker can partially circumvent this defense. Our findings underscore the need for greater attention and rigorous safeguards against prompt-injection threats in AI-assisted peer review.
Key Contributions
- First systematic study of in-paper prompt injection attacks targeting LLM-based AI peer review systems
- Two attack classes — static (fixed injection prompt) and iterative (optimized against a simulated surrogate reviewer) — both achieving high success rates inducing full scores from frontier LLMs
- A detection-based defense that substantially reduces attack success, with adaptive attacker analysis showing partial bypass remains possible