
FUPareto: Bridging the Forgetting-Utility Gap in Federated Unlearning via Pareto Augmented Optimization

Zeyan Wang, Zhengmao Liu, Yongxin Cai, Chi Li, Xiaoying Tang, Jingchao Chen, Zibin Pan, Jing Qiu

arXiv (Cornell University)

Published on arXiv: 2602.01852

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

FUPareto consistently outperforms state-of-the-art federated unlearning methods in both unlearning efficacy and retained utility across diverse scenarios, while reducing MIA susceptibility introduced by prior unlearning objectives.

FUPareto

Novel technique introduced


Federated Unlearning (FU) aims to efficiently remove the influence of specific client data from a federated model while preserving utility for the remaining clients. However, three key challenges remain: (1) existing unlearning objectives often compromise model utility or increase vulnerability to Membership Inference Attacks (MIA); (2) there is a persistent conflict between forgetting and utility, where further unlearning inevitably harms retained performance; and (3) support for concurrent multi-client unlearning is poor, as gradient conflicts among clients degrade the quality of forgetting. To address these issues, we propose FUPareto, an efficient unlearning framework built on Pareto-augmented optimization. We first introduce the Minimum Boundary Shift (MBS) Loss, which enforces unlearning by suppressing the target class logit below the highest non-target class logit; this improves unlearning efficiency and mitigates MIA risk. During the unlearning process, FUPareto performs Pareto improvement steps to preserve model utility and executes Pareto expansion to guarantee forgetting. Specifically, during Pareto expansion, the framework integrates a Null-Space Projected Multiple Gradient Descent Algorithm (MGDA) to decouple gradient conflicts. This enables effective, fair, and concurrent unlearning for multiple clients while minimizing utility degradation. Extensive experiments across diverse scenarios demonstrate that FUPareto consistently outperforms state-of-the-art FU methods in both unlearning efficacy and retained utility.
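One natural reading of the MBS objective described above is a hinge on the margin between the target ("forget") class logit and the best non-target logit: the loss is positive only while the target class is still the arg-max, and vanishes as soon as the boundary has shifted past it. The sketch below illustrates that hinge form; the function name, the `margin` parameter, and the exact formulation are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def mbs_loss_sketch(logits, target, margin=0.0):
    """Hinge-style sketch of a Minimum-Boundary-Shift-like loss.

    Penalizes the model only while the target (to-be-forgotten) class
    logit exceeds the highest non-target logit; once the target class
    is no longer the arg-max, the loss is zero, so the decision
    boundary is shifted minimally rather than pushed to extremes
    (which is what aggravates MIA under gradient-ascent objectives).
    """
    others = np.delete(logits, target)            # all non-target logits
    gap = logits[target] - others.max() + margin  # how far past the boundary
    return max(0.0, float(gap))
```

Because the loss saturates at zero, the unlearned model's confidence profile on forget samples stays close to that of ordinary misclassified samples, which is the intuition behind the claimed MIA mitigation.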


Key Contributions

  • Minimum Boundary Shift (MBS) Loss that enforces forgetting by suppressing the target class logit while specifically mitigating MIA vulnerability compared to gradient ascent and entropy-maximization objectives
  • Pareto-augmented optimization framework combining Pareto improvement (utility preservation) and Pareto expansion (guaranteed forgetting) to escape the forgetting-utility trade-off frontier
  • Null-Space Projected MGDA algorithm to decouple gradient conflicts across multiple concurrently unlearning clients in federated settings
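To make the third contribution concrete: classic MGDA finds a common descent direction as the minimum-norm point in the convex hull of the per-objective gradients, and a null-space projection then strips from that direction any component lying in the span of the retained clients' gradients, so the update is (to first order) neutral for retained utility. The sketch below shows both pieces for the two-gradient case; the function names are illustrative and the closed-form `alpha` is the standard two-task MGDA solution, not code from the paper.

```python
import numpy as np

def mgda_min_norm_2(g1, g2):
    """Min-norm point in the convex hull of two gradients (two-task MGDA).

    Returns d = alpha*g1 + (1-alpha)*g2 with alpha chosen in [0, 1] to
    minimize ||d||, i.e. a direction that is a descent direction for
    both unlearning objectives simultaneously.
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:            # gradients coincide; any convex combo works
        return g1
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

def null_space_project(d, G_retain):
    """Project d onto the null space of the retained clients' gradients.

    G_retain has one retained-client gradient per row; the returned
    direction is orthogonal to all of them, so a small step along it
    leaves retained-client losses unchanged to first order.
    """
    Q, _ = np.linalg.qr(G_retain.T)   # orthonormal basis of retained span
    return d - Q @ (Q.T @ d)
```

Decoupling the two steps this way is what allows multiple clients to be unlearned concurrently: conflicts among the forgetting gradients are resolved by the min-norm combination, while conflicts with retained utility are removed by the projection.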

🛡️ Threat Analysis

Membership Inference Attack

MIA resistance is an explicit, primary design objective: the Minimum Boundary Shift (MBS) Loss is specifically introduced to mitigate membership inference attack risks that are worsened by existing unlearning objectives, and the paper evaluates FUPareto against MIA throughout its experiments.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, inference_time
Applications
federated learning, privacy-compliant model deployment