Latest papers

2 papers
attack · arXiv · Aug 28, 2025

Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review

Matteo Gioele Collu, Umberto Salviati, Roberto Confalonieri et al. · University of Padua · Örebro University +1 more

Embeds invisible adversarial text in paper PDFs to hijack LLM-generated peer reviews across commercial systems

Prompt Injection · nlp
PDF
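The attack class this paper studies relies on text that a human reviewer never sees but that a PDF text extractor (and hence an LLM reviewer) still ingests. One standard mechanism is PDF text rendering mode 3 ("invisible": neither filled nor stroked). The sketch below is not the authors' payload or tooling; it just hand-builds a minimal one-page PDF, in pure Python, containing a visible line plus an invisible injected string, to illustrate the mechanism.

```python
# Sketch only: hide a string in a PDF via text render mode 3 ("3 Tr"),
# which draws glyphs invisibly while keeping them in the text layer.
# Real attacks may also use white text, tiny fonts, or off-page placement.
def build_pdf_with_hidden_text(visible: str, hidden: str) -> bytes:
    # Content stream: visible text drawn normally, hidden text at 1 pt
    # in the bottom margin with invisible render mode.
    stream = (
        "BT /F1 12 Tf 72 720 Td (%s) Tj ET\n"
        "BT 3 Tr /F1 1 Tf 72 5 Td (%s) Tj ET" % (visible, hidden)
    ).encode("latin-1")
    objs = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Resources << /Font << /F1 4 0 R >> >> /Contents 5 0 R >>",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(stream), stream),
    ]
    out, offsets = bytearray(b"%PDF-1.4\n"), []
    for i, body in enumerate(objs, start=1):
        offsets.append(len(out))
        out += b"%d 0 obj\n%s\nendobj\n" % (i, body)
    xref = len(out)  # byte offset of the cross-reference table
    out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objs) + 1)
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objs) + 1, xref))
    return bytes(out)
```

A text extractor run over the resulting file returns both strings, so an LLM asked to "review the attached paper" receives the hidden instructions alongside the legitimate content.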
defense · arXiv · Jan 12, 2025

KeTS: Kernel-based Trust Segmentation against Model Poisoning Attacks

Ankit Gangwal, Mauro Conti, Tommaso Pauselli · IIIT Hyderabad · University of Padua +1 more

Defends federated learning against Byzantine model poisoning by segmenting malicious clients via KDE on historical update evolution

Data Poisoning Attack · federated-learning · vision · tabular
PDF
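KeTS's actual pipeline applies kernel density estimation to the evolution of each client's historical updates; the toy sketch below only illustrates the underlying segmentation idea. Each client is reduced to a single hypothetical trust score (made up here, not the paper's metric), a plain 1-D Gaussian KDE is fit over the scores, and the density valley between the low- and high-trust clusters becomes the cut point that flags likely poisoners.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Plain 1-D Gaussian kernel density estimate."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return lambda x: sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    ) / norm

def split_threshold(scores, bandwidth=0.05):
    """Locate the density valley between low- and high-trust clusters."""
    kde = gaussian_kde(scores, bandwidth)
    lo, hi = min(scores), max(scores)
    grid = [lo + (hi - lo) * i / 200 for i in range(201)]
    return min(grid[20:-20], key=kde)  # interior density minimum = valley

def segment_clients(trust_scores, bandwidth=0.05):
    """trust_scores: {client_id: score in [0, 1]} -- a hypothetical summary
    of each client's historical update behaviour, not KeTS's exact metric."""
    t = split_threshold(list(trust_scores.values()), bandwidth)
    flagged = {c for c, s in trust_scores.items() if s < t}
    return set(trust_scores) - flagged, flagged
```

With one benign cluster of scores near 0.8 and a poisoned cluster near 0.2, the KDE valley falls between them and the low-scoring clients are segmented out of aggregation; no fixed threshold has to be chosen in advance.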