Jindong Li

h-index: 8 · 177 citations · 23 papers (total)

Papers in Database (1)

defense · arXiv · Oct 1, 2025

Safety Instincts: LLMs Learn to Trust Their Internal Compass for Self-Defense

Guobin Shen, Dongcheng Zhao, Haibo Tong et al. · Beijing Institute of AI Safety and Governance · Beijing Key Laboratory of Safe AI and Superalignment +2 more

Entropy-guided RL alignment trains LLMs to resist 20+ jailbreak methods using internal confidence signals, with no external validators needed.

Prompt Injection · nlp
1 citation · PDF