Reshmi Ghosh

h-index: 2 · 36 citations · 3 papers (total)

Papers in Database (1)

attack · arXiv · Oct 16, 2025

Are My Optimized Prompts Compromised? Exploring Vulnerabilities of LLM-based Optimizers

Andrew Zhao, Reshmi Ghosh, Vitor Carvalho et al. · Tsinghua University · Microsoft

Shows that LLM-based prompt optimizers are highly vulnerable to feedback poisoning, introducing a fake-reward attack that raises the harmful attack success rate (ASR) by 0.48

Data Poisoning Attack · Prompt Injection · NLP
1 citation · PDF