ProtoGuard-SL: Prototype-Consistency-Based Backdoor Defense for Vertical Split Learning
Yuhan Shui, Ruobin Jin, Zhihao Dou et al. · Wenzhou-Kean University · Case Western Reserve University
Server-side defense detecting backdoor-poisoned embeddings in vertical split learning using class-conditional prototype consistency filtering
Vertical split learning (SL) enables collaborative model training across parties holding complementary features without sharing raw data, but recent work has shown that it is highly vulnerable to poisoning-based backdoor attacks operating on intermediate embeddings. Through compromised clients, adversaries can inject stealthy triggers that manipulate the server-side model while remaining difficult to detect, and existing defenses provide limited robustness against adaptive attacks. In this paper, we propose ProtoGuard-SL, a server-side defense that improves the robustness of split learning by exploiting class-conditional representation consistency in the embedding space. Our approach is motivated by the observation that benign embeddings within the same class exhibit stable semantic alignment, whereas poisoned embeddings inevitably disrupt this structure. ProtoGuard-SL adopts a two-stage framework: it first constructs robust class prototypes and transforms embeddings into a prototype-consistency representation, then applies a class-conditional, distribution-free conformal filtering strategy to identify and remove anomalous embeddings. Extensive experiments on three datasets (CIFAR-10, SVHN, and Bank Marketing) under three different attack settings demonstrate that our method achieves state-of-the-art performance.
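A minimal sketch of the two-stage idea described above, to make the mechanics concrete. The specific choices here are illustrative assumptions rather than the paper's exact formulation: the robust prototype is taken as a coordinate-wise median per class, the consistency score is cosine similarity to the class prototype, and the class-conditional conformal threshold is a finite-sample-corrected lower quantile of calibration scores.

```python
import numpy as np

def class_prototypes(emb, labels, num_classes):
    # Robust class prototype: coordinate-wise median of embeddings per class
    # (an assumption; any robust aggregator could play this role).
    return np.stack([np.median(emb[labels == c], axis=0) for c in range(num_classes)])

def consistency_scores(emb, labels, protos):
    # Prototype-consistency representation: cosine similarity between each
    # embedding and the prototype of its claimed class.
    p = protos[labels]
    num = (emb * p).sum(axis=1)
    den = np.linalg.norm(emb, axis=1) * np.linalg.norm(p, axis=1) + 1e-12
    return num / den

def conformal_thresholds(cal_scores, cal_labels, num_classes, alpha=0.1):
    # Class-conditional, distribution-free threshold: the alpha-level lower
    # quantile of held-out calibration scores within each class, using the
    # standard finite-sample conformal rank floor(alpha * (n + 1)).
    thr = np.zeros(num_classes)
    for c in range(num_classes):
        s = np.sort(cal_scores[cal_labels == c])
        k = int(np.floor(alpha * (len(s) + 1))) - 1
        thr[c] = s[max(k, 0)]
    return thr

def filter_embeddings(emb, labels, protos, thr):
    # Keep an embedding only if its consistency score clears the
    # conformal threshold of its class; flagged embeddings are dropped.
    return consistency_scores(emb, labels, protos) >= thr[labels]
```

Under this construction, a poisoned embedding that is semantically closer to another class's prototype receives a low consistency score and falls below its class's calibrated threshold, so the server can discard it before the forward pass continues.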