SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning (Full Version)
Phillip Rieger, Alessandro Pegoraro, Kavita Kumari, Tigist Abera, Jonathan Knauer, Ahmad-Reza Sadeghi
Published on arXiv
arXiv:2501.06650
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
A comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates high efficacy in mitigating backdoor attacks while preserving model utility in Split Learning settings.
SafeSplit
Novel technique introduced
Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN) without requiring clients to share their private local data. In SL, the DNN is partitioned, with most layers residing on the server and a few initial layers and the inputs on the client side. This configuration allows resource-constrained clients to participate in training and inference. However, the distributed architecture exposes SL to backdoor attacks, where malicious clients can manipulate local datasets to alter the DNN's behavior. Existing defenses from other distributed frameworks like Federated Learning are not applicable, and there is a lack of effective backdoor defenses specifically designed for SL. We present SafeSplit, the first defense against client-side backdoor attacks in Split Learning. SafeSplit enables the server to detect and filter out malicious client behavior by employing circular backward analysis after a client's training is completed, iteratively reverting to a trained checkpoint where the model under examination is found to be benign. It uses a two-fold analysis to identify client-induced changes and detect poisoned models. First, a static analysis in the frequency domain measures the differences in the layer's parameters at the server. Second, a dynamic analysis introduces a novel rotational distance metric that assesses the orientation shifts of the server's layer parameters during training. Our comprehensive evaluation across various data distributions, client counts, and attack scenarios demonstrates the high efficacy of this dual analysis in mitigating backdoor attacks while preserving model utility.
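The static analysis described above compares server-side layer parameters in the frequency domain. The paper's exact transform and decision rule are not given here, so the following is only a minimal sketch: it applies an unnormalized DCT-II to a flattened parameter vector and measures the L2 gap between the low-frequency coefficients of a reference checkpoint and a post-training model. The function names, the choice of DCT, and the number of retained coefficients `k` are all assumptions for illustration.

```python
import math

def dct(x):
    """Unnormalized DCT-II of a flat list of parameters:
    X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def low_freq_shift(w_ref, w_test, k=4):
    """L2 distance between the first k DCT coefficients of two parameter
    vectors. A large value flags a suspicious client-induced change; the
    threshold and k are illustrative assumptions, not the paper's values."""
    a, b = dct(w_ref)[:k], dct(w_test)[:k]
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
```

Working in the frequency domain concentrates broad, structured parameter shifts into a few low-frequency coefficients, which can make poisoning-induced drift easier to separate from benign high-frequency noise.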
Key Contributions
- First defense specifically designed for client-side backdoor attacks in Split Learning (SL), filling a gap left by FL-based defenses that are not applicable to SL's architecture
- Circular backward analysis mechanism that iteratively reverts to a benign checkpoint to isolate and examine client-induced model changes
- Dual detection approach combining static frequency-domain parameter analysis with a novel rotational distance metric measuring orientation shifts in server-side layer parameters during training
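The rotational distance metric in the last bullet measures how far the orientation of the server's layer parameters shifts during a client's training round. As a rough sketch under assumptions (the paper's exact formulation is not reproduced here), one could take the angle between the flattened parameter vectors before and after training; a large angle would indicate an orientation shift consistent with poisoning:

```python
import math

def rotational_distance(w_before, w_after):
    """Angle in radians between two flattened parameter vectors.
    Hypothetical sketch of an orientation-shift metric: the name and
    any flagging threshold are assumptions, not the paper's definitions."""
    dot = sum(a * b for a, b in zip(w_before, w_after))
    na = math.sqrt(sum(a * a for a in w_before))
    nb = math.sqrt(sum(b * b for b in w_after))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    return math.acos(cos_theta)
```

An angle-based metric is insensitive to pure rescaling of the parameters, so it complements a magnitude-based check: an update that keeps the norm similar but rotates the parameter vector would still register.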
🛡️ Threat Analysis
SafeSplit directly defends against backdoor/trojan attacks in Split Learning, where malicious clients manipulate local training data with triggers to embed hidden targeted behavior in the shared DNN. The defense detects client-induced weight changes using static (frequency-domain) and dynamic (rotational distance) analyses, and reverts the model to a checkpoint found to be benign.
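The circular backward analysis described above can be sketched as a rollback loop over saved server-side checkpoints: after a client's round, walk backwards through the checkpoints until one passes the benign check and revert to it. The function names and the shape of the `is_benign` predicate (which would combine the static and dynamic tests) are assumptions for illustration, not the paper's implementation.

```python
def circular_backward_analysis(checkpoints, is_benign):
    """Revert to the most recent checkpoint judged benign.

    checkpoints: server-side model states in training order (oldest first).
    is_benign:   predicate combining the static frequency-domain test and
                 the dynamic rotational-distance test (assumed interface).
    """
    for model in reversed(checkpoints):
        if is_benign(model):
            return model
    # If every examined checkpoint is flagged, fall back to the initial
    # state (a conservative assumption for this sketch).
    return checkpoints[0]
```

The key property is that poisoned updates are discarded rather than merely down-weighted: the server resumes training from the last state that both analyses accept.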