
IPBA: Imperceptible Perturbation Backdoor Attack in Federated Self-Supervised Learning

Jiayao Wang 1, Yang Song 1, Zhendong Zhao 2, Jiale Zhang 1, Qilin Wu 1, Junwu Zhu 3, Dongfang Zhao 4


Published on arXiv (2508.08031)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

IPBA significantly outperforms existing backdoor attack methods in federated self-supervised learning while maintaining imperceptibility and robustness against various defense mechanisms.

IPBA

Novel technique introduced


Federated self-supervised learning (FSSL) combines the advantages of decentralized modeling and unlabeled representation learning, serving as a cutting-edge paradigm with strong potential for scalability and privacy preservation. Although FSSL has garnered increasing attention, research indicates that it remains vulnerable to backdoor attacks. Existing methods generally rely on visually obvious triggers, which makes it difficult to meet the requirements for stealth and practicality in real-world deployment. In this paper, we propose an imperceptible and effective backdoor attack method against FSSL, called IPBA. Our empirical study reveals that existing imperceptible triggers face a series of challenges in FSSL, particularly limited transferability, feature entanglement with augmented samples, and out-of-distribution properties. These issues collectively undermine the effectiveness and stealthiness of traditional backdoor attacks in FSSL. To overcome these challenges, IPBA decouples the feature distributions of backdoor and augmented samples, and introduces Sliced-Wasserstein distance to mitigate the out-of-distribution properties of backdoor samples, thereby optimizing the trigger generation process. Our experimental results on several FSSL scenarios and datasets show that IPBA significantly outperforms existing backdoor attack methods in performance and exhibits strong robustness under various defense mechanisms.


Key Contributions

  • Identifies and analyzes three key challenges for imperceptible backdoor attacks in FSSL: limited trigger transferability, feature entanglement with augmented samples, and out-of-distribution properties of backdoor samples
  • Proposes IPBA, which decouples backdoor and augmented sample feature distributions and uses Sliced-Wasserstein distance to align backdoor samples with the in-distribution manifold
  • Demonstrates that IPBA significantly outperforms existing backdoor methods across multiple FSSL scenarios and shows robustness against various defenses
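The Sliced-Wasserstein alignment mentioned above can be made concrete with a minimal estimator: project both feature clouds onto random directions and average the 1-D Wasserstein distances between the projections. This is an illustrative sketch, not the paper's implementation — the function name, projection count, and sampling scheme are assumptions:

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=64, seed=0):
    """Monte-Carlo estimate of the Sliced-Wasserstein distance between
    two equally sized point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Sample random unit directions on the (d-1)-sphere
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both clouds onto each direction -> shape (n, n_projections)
    xp = x @ theta.T
    yp = y @ theta.T
    # 1-D Wasserstein-1 between sorted projections (equal sample sizes)
    xp.sort(axis=0)
    yp.sort(axis=0)
    return np.mean(np.abs(xp - yp))
```

In a trigger-optimization loop, a term like this would be minimized between backdoor-sample features and clean in-distribution features, pulling the backdoor samples back onto the in-distribution manifold.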

🛡️ Threat Analysis

Model Poisoning

IPBA is a backdoor/trojan attack that embeds hidden, targeted malicious behavior activated only by a specific imperceptible trigger perturbation — the defining characteristic of ML10. The attack targets the FSSL representation-learning process, optimizing trigger generation so that the backdoor survives federated training while evading defenses.
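To make "imperceptible trigger perturbation" concrete, a minimal sketch of applying a norm-bounded additive trigger to a batch of images is shown below. In IPBA the trigger is learned via the optimization described above; here the trigger is a given array, and the L-infinity budget `epsilon` is an assumed value, not one taken from the paper:

```python
import numpy as np

def apply_trigger(images, delta, epsilon=8 / 255):
    """Add trigger `delta` to a batch of images in [0, 1], clipping the
    trigger to an L-infinity budget `epsilon` so it stays imperceptible.
    Illustrative only; IPBA optimizes `delta` rather than fixing it."""
    delta = np.clip(delta, -epsilon, epsilon)   # enforce perturbation budget
    return np.clip(images + delta, 0.0, 1.0)    # keep valid pixel range
```

At inference time, passing `apply_trigger(x, delta)` instead of `x` is what activates the hidden behavior; clean inputs are unaffected.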


Details

Domains
vision, federated-learning
Model Types
federated
Threat Tags
training_time, targeted, digital
Applications
federated self-supervised learning, image classification