Rema Padman

h-index: 3 · 140 citations · 12 papers (total)

Papers in Database (2)

benchmark · arXiv · Oct 3, 2025

Time-To-Inconsistency: A Survival Analysis of Large Language Model Robustness to Adversarial Attacks

Yubo Li, Ramayya Krishnan, Rema Padman · Carnegie Mellon University

A survival analysis framework quantifies when LLMs fail under adversarial multi-turn prompts, enabling detection of inconsistency several turns in advance.

Prompt Injection · nlp
1 citation · PDF
benchmark · arXiv · Feb 13, 2026

Consistency of Large Reasoning Models Under Multi-Turn Attacks

Yubo Li, Ramayya Krishnan, Rema Padman · Carnegie Mellon University

Benchmarks nine reasoning LLMs against multi-turn natural-language adversarial attacks, identifying five failure modes and exposing the limitations of confidence-based defenses.

Prompt Injection · nlp
PDF · Code