Latest papers

1 paper
attack · arXiv · Feb 19, 2026

Asking Forever: Universal Activations Behind Turn Amplification in Conversational LLMs

Zachary Coalson, Bo Fang, Sanghyun Hong · Oregon State University · University of Texas at Arlington

Identifies turn amplification as an LLM resource-exhaustion attack and uses mechanistic activation analysis to enable persistent fine-tuning and parameter-corruption attack vectors

Model Poisoning · Model Denial of Service · NLP
PDF