benchmark · arXiv · Oct 3, 2025
Shashank Agnihotri, Jonas Jakubassa, Priyam Dey et al. · University of Mannheim · Max-Planck-Institute for Informatics · Indian Institute of Science +1 more
Benchmarks the robustness of safety pretraining against model abliteration across 20 LLMs, revealing that refusal-only training is the most fragile to activation-level jailbreaking
Prompt Injection · nlp · llm · transformer
Open-weight LLMs can be modified at inference time with simple activation edits, which raises a practical question for safety: do common safety interventions like refusal training or metatag training survive such edits? We study model abliteration, a lightweight projection technique designed to remove refusal-sensitive directions, and conduct a controlled evaluation across a granular sequence of Safety Pretraining checkpoints for SmolLM2-1.7B, alongside widely used open baselines. For each of the 20 systems (original and abliterated), we issue 100 prompts with balanced harmful and harmless cases, classify responses as **Refusal** or **Non-Refusal** using multiple judges, and validate judge fidelity on a small human-labeled subset. We also probe whether models can identify refusal in their own outputs. Our study produces a checkpoint-level characterization of which data-centric safety components remain robust under abliteration, quantifies how judge selection influences evaluation outcomes, and outlines a practical protocol for integrating inference-time edits into safety assessments. Code: https://github.com/shashankskagnihotri/safety_pretraining.