Latest papers

1 paper
attack · arXiv · Aug 28, 2025

Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review

Matteo Gioele Collu, Umberto Salviati, Roberto Confalonieri et al. · University of Padua · Örebro University +1 more

Embeds invisible adversarial text in paper PDFs to hijack LLM-generated peer reviews across commercial systems

Prompt Injection · NLP
PDF