Sanghyun Hong

h-index: 2 · 11 citations · 12 papers (total)

Papers in Database (2)

defense · arXiv · Feb 19, 2026

Fail-Closed Alignment for Large Language Models

Zachary Coalson, Beth Sohler, Aiden Gabriel et al. · Oregon State University

Defends LLMs against jailbreaks by training multiple independent refusal pathways that attackers cannot simultaneously suppress

Prompt Injection · nlp
attack · arXiv · Feb 19, 2026

Discovering Universal Activation Directions for PII Leakage in Language Models

Leo Marchyok, Zachary Coalson, Sungho Keum et al. · Oregon State University · Korea Advanced Institute of Science & Technology

Discovers universal activation directions in LLM residual streams that reliably amplify PII leakage beyond existing prompt-based extraction attacks

Model Inversion Attack · Sensitive Information Disclosure · nlp