Limits of Convergence-Rate Control for Open-Weight Safety
Domenic Rosati, Xijie Zeng, Hong Huang et al. · Dalhousie University · Vector Institute +1 more
Defends open-weight models against harmful fine-tuning via spectral reparameterization, proving adaptive adversaries can bypass any such defense at linear model-size cost
Open-weight foundation models can be fine-tuned for harmful purposes after release, yet no existing training-resistance method provides theoretical guarantees. Treating these interventions as convergence-rate control problems lets us connect optimization speed to the spectral structure of model weights. We leverage this insight to develop a novel understanding of convergence-rate control through spectral reparameterization, and we derive an algorithm, SpecDef, that both provably and empirically slows first- and second-order optimization in non-adversarial settings. In adversarial settings, we establish a fundamental limit on a broad class of convergence-rate control methods, including our own: an attacker with sufficient knowledge can restore fast convergence at a linear increase in model size. To overcome this limitation, future work will need to investigate methods that are not equivalent to controlling the convergence rate.
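The core idea, slowing optimization by reshaping the spectrum seen by the optimizer, can be illustrated on a toy least-squares problem. The sketch below is not the paper's SpecDef algorithm; it is a hedged, minimal demonstration of the general principle that reparameterizing weights as `w = A v` changes the effective Hessian to `Aᵀ H A`, so an ill-conditioned `A` (here a hypothetical diagonal matrix with geometrically decaying entries) worsens conditioning and slows gradient descent at the same step budget.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dim, steps = 100, 20, 500
X = rng.normal(size=(n_samples, n_dim))
y = rng.normal(size=n_samples)

def loss(w):
    return float(np.sum((X @ w - y) ** 2))

def gd(grad, dim, lr, steps):
    # Plain gradient descent from the origin.
    w = np.zeros(dim)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

H = 2 * X.T @ X  # Hessian of the least-squares loss

# Direct parameterization: gradient of ||Xw - y||^2 w.r.t. w.
grad_direct = lambda w: 2 * X.T @ (X @ w - y)
lr_direct = 1.0 / np.linalg.eigvalsh(H).max()
w_direct = gd(grad_direct, n_dim, lr_direct, steps)

# Spectral-style reparameterization w = A v with an ill-conditioned A
# (hypothetical choice: condition number 1e3), so the effective Hessian
# becomes A^T H A with a much worse condition number.
A = np.diag(np.logspace(0, -3, n_dim))
grad_reparam = lambda v: A.T @ (2 * X.T @ (X @ (A @ v) - y))
lr_reparam = 1.0 / np.linalg.eigvalsh(A.T @ H @ A).max()
w_reparam = A @ gd(grad_reparam, n_dim, lr_reparam, steps)

# With step sizes tuned to each curvature, the reparameterized run
# makes far less progress in the same number of steps.
print("direct loss:  ", loss(w_direct))
print("reparam loss: ", loss(w_reparam))
```

This also makes the abstract's negative result concrete: an adversary who knows `A` can simply optimize over `w' = A v` directly (or fold `A⁻¹` into the model), recovering the original fast convergence, at the cost of carrying the extra reparameterization, which is where the linear model-size overhead enters.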