defense · arXiv · Jan 8, 2026
Polina Dolgova, Sebastian U. Stich · CISPA Helmholtz Center for Information Security · Universität des Saarlandes
Defends against membership inference on forgotten data via block-wise noise injection that preserves certified (ε,δ) unlearning guarantees with far less accuracy loss
Membership Inference Attack · vision · cnn
Certified unlearning based on differential privacy offers strong guarantees but remains largely impractical: the noisy fine-tuning approaches proposed so far achieve these guarantees but severely reduce model accuracy. We propose sequential noise scheduling, which distributes the noise budget across orthogonal subspaces of the parameter space rather than injecting it all at once. This simple modification mitigates the destructive effect of noise while preserving the original certification guarantees. We extend the analysis of noisy fine-tuning to the subspace setting, proving that the same $(\varepsilon,\delta)$ privacy budget is retained. Empirical results on image classification benchmarks show that our approach substantially improves accuracy after unlearning while remaining robust to membership inference attacks. These results show that certified unlearning can achieve both rigorous guarantees and practical utility.
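The sketch below illustrates the core idea under stated assumptions; it is not the authors' released code. It contrasts one-shot Gaussian noise injection with a sequential schedule that spreads the same per-coordinate noise over disjoint parameter blocks (a simple stand-in for orthogonal subspaces of the parameter space), optionally interleaving brief fine-tuning between blocks. The function names, block partition, and noise calibration are illustrative assumptions only.

```python
# Hypothetical sketch of sequential noise scheduling for noisy fine-tuning.
# Not the paper's implementation; partitioning and calibration are assumed.
import torch


def one_shot_noise(model, sigma):
    """Baseline: perturb every trainable parameter at once with N(0, sigma^2)."""
    with torch.no_grad():
        for p in model.parameters():
            if p.requires_grad:
                p.add_(torch.randn_like(p) * sigma)


def sequential_block_noise(model, sigma, num_blocks=4, finetune_step=None):
    """Split parameters into disjoint blocks (orthogonal coordinate subspaces)
    and noise them one block at a time, optionally running a short
    fine-tuning step between blocks so the model can recover utility."""
    params = [p for p in model.parameters() if p.requires_grad]
    blocks = [params[i::num_blocks] for i in range(num_blocks)]
    for block in blocks:
        with torch.no_grad():
            for p in block:
                # Each coordinate is perturbed exactly once with the same
                # sigma as the one-shot baseline, so (under this assumed
                # calibration) the overall Gaussian-mechanism budget is
                # unchanged, mirroring the paper's claim that the
                # (eps, delta) budget is retained in the subspace setting.
                p.add_(torch.randn_like(p) * sigma)
        if finetune_step is not None:
            finetune_step(model)  # e.g. a few gradient steps on retained data
```

Partitioning by disjoint coordinate blocks makes the subspaces trivially orthogonal; the paper's analysis covers the general orthogonal-subspace setting, of which this is only one simple instance.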