
ProtoGuard-SL: Prototype Consistency Based Backdoor Defense for Vertical Split Learning

Yuhan Shui 1, Ruobin Jin 1, Zhihao Dou 2, Zhiqiang Gao 1


Published on arXiv

2604.03595

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves state-of-the-art defense performance against three backdoor attack settings (including VILLAIN and SplitNN-based attacks) across three datasets (CIFAR-10, SVHN, and Bank Marketing)

ProtoGuard-SL

Novel technique introduced


Vertical split learning (SL) enables collaborative model training across parties holding complementary features without sharing raw data, but recent work has shown that it is highly vulnerable to poisoning-based backdoor attacks operating on intermediate embeddings. Through compromised clients, adversaries can inject stealthy triggers that manipulate the server-side model while remaining difficult to detect, and existing defenses provide limited robustness against adaptive attacks. In this paper, we propose ProtoGuard-SL, a server-side defense that improves the robustness of split learning by exploiting class-conditional representation consistency in the embedding space. Our approach is motivated by the observation that benign embeddings within the same class exhibit stable semantic alignment, whereas poisoned embeddings inevitably disrupt this structure. ProtoGuard-SL adopts a two-stage framework that constructs robust class prototypes and transforms embeddings into a prototype-consistency representation, followed by a class-conditional, distribution-free conformal filtering strategy to identify and remove anomalous embeddings. Extensive experiments on three datasets (CIFAR-10, SVHN, and Bank Marketing) under three attack settings demonstrate that our method achieves state-of-the-art performance.


Key Contributions

  • Exploits class-conditional representation consistency to detect poisoned embeddings in vertical split learning
  • Two-stage framework: constructs robust class prototypes and transforms embeddings into prototype-consistency representations
  • Class-conditional, distribution-free conformal filtering strategy to identify and remove anomalous embeddings
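The two-stage framework above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes coordinate-wise medians as the robust prototype estimator, cosine similarity as the prototype-consistency score, and a per-class empirical quantile over a clean calibration set as the distribution-free conformal threshold. All function names are hypothetical.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    """Robust per-class prototypes via the coordinate-wise median
    (one plausible robust estimator; the paper's construction may differ)."""
    return {c: np.median(embeddings[labels == c], axis=0)
            for c in np.unique(labels)}

def consistency_scores(embeddings, labels, prototypes):
    """Cosine similarity of each embedding to its own class prototype."""
    scores = np.empty(len(embeddings))
    for i, (z, y) in enumerate(zip(embeddings, labels)):
        p = prototypes[y]
        scores[i] = z @ p / (np.linalg.norm(z) * np.linalg.norm(p) + 1e-12)
    return scores

def conformal_filter(cal_emb, cal_lab, test_emb, test_lab, alpha=0.05):
    """Class-conditional, distribution-free filtering: keep a test
    embedding only if its consistency score is at least the per-class
    alpha-quantile of the calibration scores (low scores = anomalous)."""
    protos = class_prototypes(cal_emb, cal_lab)
    cal_scores = consistency_scores(cal_emb, cal_lab, protos)
    thresholds = {c: np.quantile(cal_scores[cal_lab == c], alpha)
                  for c in protos}
    test_scores = consistency_scores(test_emb, test_lab, protos)
    return np.array([s >= thresholds[y]
                     for s, y in zip(test_scores, test_lab)])
```

With a calibration miscoverage level of alpha = 0.05, the filter discards roughly 5% of benign embeddings per class in exchange for rejecting embeddings that break class-conditional consistency, such as a trigger-shifted embedding that points toward another class's prototype.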

🛡️ Threat Analysis

Model Poisoning

The paper addresses backdoor attacks in split learning, where adversaries inject triggers into intermediate embeddings to manipulate server-side model behavior; it is a backdoor/trojan defense.
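The attack surface described above can be illustrated with a toy additive trigger. This is a hedged sketch with hypothetical names, not the VILLAIN attack itself: a malicious client shifts selected samples' embeddings along a fixed trigger direction before uploading them, so the server-side model can come to associate that direction with an attacker-chosen label.

```python
import numpy as np

def upload_embedding(embedding, trigger=None, strength=0.5):
    """What a client sends to the server in vertical split learning.
    A benign client uploads its embedding unchanged; a malicious client
    adds a fixed, unit-normalized trigger direction (a simplified additive
    trigger; real attacks such as VILLAIN are more elaborate and stealthier).
    """
    if trigger is None:
        return embedding
    return embedding + strength * trigger / np.linalg.norm(trigger)
```

Because the shift is the same for every triggered sample, the poisoned embeddings drift away from their class's benign cluster, which is exactly the structural disruption that prototype-consistency scoring is designed to expose.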


Details

Domains
vision, tabular, federated-learning
Model Types
federated, cnn
Threat Tags
training_time, targeted
Datasets
CIFAR-10, SVHN, Bank Marketing
Applications
vertical split learning, collaborative model training, medical imaging, clinical decision support