Mayi Xu

Papers in Database (2)

defense · arXiv · Aug 18, 2025

RAJ-PGA: Reasoning-Activated Jailbreak and Principle-Guided Alignment Framework for Large Reasoning Models

Jianhao Chen, Mayi Xu, Haoyang Chen et al. · Wuhan University · Zhongguancun Academy +2 more

Jailbreaks large reasoning models via prompt concretization targeting chain-of-thought (CoT) reasoning, then builds a safety-alignment dataset that improves defense by 29.5%

Prompt Injection · NLP
PDF · Code
defense · arXiv · Aug 13, 2025

NeuronTune: Fine-Grained Neuron Modulation for Balanced Safety-Utility Alignment in LLMs

Birong Pan, Mayi Xu, Qiankun Pi et al. · Wuhan University

Defends LLMs against jailbreaking via fine-grained neuron modulation, using attack-aware attribution and meta-learning to balance safety and utility

Prompt Injection · NLP
PDF