Haz Sameen Shahgir

h-index: 5 · 69 citations · 16 papers (total)

Papers in Database (1)

attack · arXiv · Feb 9, 2026

Is Reasoning Capability Enough for Safety in Long-Context Language Models?

Yu Fu, Haz Sameen Shahgir, Huanli Gong et al. · University of California · International Computer Science Institute · +1 more

Attacks LLMs by decomposing harmful queries into fragments scattered across a long context, inducing unsafe synthesis that bypasses safety alignment.

Tags: Prompt Injection, NLP