defense · arXiv · Mar 9, 2026 · 28d ago
Haiyu Deng, Yanna Jiang, Guangsheng Yu et al. · University of Technology Sydney · CSIRO Data61 +1 more
Defends split learning against activation inversion, label clustering, and model extraction via DP and chained watermarking
Model Inversion Attack · Model Theft · federated-learning · vision
Model training is increasingly offered as a service for resource-constrained data owners to build customized models. Split Learning (SL) enables such services by offloading training computation under privacy constraints, and is evolving toward serverless and multi-client settings where model segments are distributed across training clients. This cooperative mode assumes partial trust: data owners hide labels and data from trainer clients, while trainer clients produce verifiable training artifacts and ownership proofs. We present CliCooper, a multi-client cooperative SL framework tailored for cooperative model training services in heterogeneous and partially trusted environments, where one client contributes data while the others collectively act as SL trainers. CliCooper bridges the privacy and trust gaps through two new designs. First, differential-privacy-based activation protection and secret label obfuscation safeguard data owners' privacy without degrading model performance. Second, a dynamic chained watermarking scheme cryptographically links training stages on model segments across trainers, ensuring verifiable training integrity, robust model provenance, and copyright protection. Experiments show that CliCooper preserves model accuracy while enhancing resilience to privacy and ownership attacks. It reduces the success rate of clustering attacks (which infer label groups from intermediate activations) to 0%, decreases inversion-reconstruction similarity (between recovered and original training data) from 0.50 to 0.03, and limits model-extraction surrogates to about 1% accuracy, comparable to random guessing.
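The differential-privacy-based activation protection described in the abstract can be sketched as a Gaussian mechanism at the cut layer: the data owner clips each activation vector's L2 norm and adds calibrated noise before sending activations to the trainer clients. This is a minimal illustration assuming the classic Gaussian-mechanism calibration; the function names, parameters, and exact mechanism are not CliCooper's actual design.

```python
# Hypothetical sketch of DP-based activation protection in split learning.
# Not CliCooper's actual mechanism; assumes the classic Gaussian-mechanism
# noise bound sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon.
import numpy as np

def protect_activations(acts, clip_norm=1.0, epsilon=2.0, delta=1e-5,
                        rng=None):
    """Clip each row of `acts` to L2 norm `clip_norm`, then add
    Gaussian noise calibrated to (epsilon, delta)."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    # Scale down any row whose norm exceeds clip_norm (leave others as-is).
    clipped = acts * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=acts.shape)

# Usage: 4 samples of 8-dim cut-layer activations, protected before
# leaving the data owner.
acts = np.random.default_rng(0).normal(size=(4, 8))
noisy = protect_activations(acts, clip_norm=1.0, epsilon=2.0,
                            rng=np.random.default_rng(0))
```

The clipping step bounds each sample's sensitivity, which is what lets the noise scale be set from (epsilon, delta) rather than from the raw activation magnitudes.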
cnn · federated · University of Technology Sydney · CSIRO Data61 · Edith Cowan University
defense · arXiv · Mar 13, 2026 · 24d ago
Yanna Jiang, Guangsheng Yu, Qingyuan Yu et al. · University of Technology Sydney · Independent +2 more
Defeats Neural Structural Obfuscation attacks on model watermarks by canonicalizing neural networks to restore watermark verification
Model Theft · vision
Neural Structural Obfuscation (NSO) (USENIX Security '23) is a family of "zero-cost" structure-editing transforms (nso_zero, nso_clique, nso_split) that inject dummy neurons. By combining neuron permutation and parameter scaling, NSO radically modifies the network structure and parameters while strictly preserving functional equivalence, thereby disrupting white-box watermark verification. This capability has posed a fundamental challenge to the reliability of existing white-box watermarking schemes. We rethink NSO and, for the first time, fully recover from the damage it causes. We redefine NSO as a graph-consistent threat model within a producer-consumer paradigm: any obfuscation of a producer node necessitates a compatible layout update in all downstream consumers to maintain structural integrity. Building on these consistency constraints on signal propagation, we present Canon, a recovery framework that probes the attacked model to identify redundant/dummy channels and then globally canonicalizes the network by rewriting all downstream consumers by construction, synchronizing layouts across fan-out, add, and cat operations. Extensive experiments demonstrate that, even under strong composed and extended NSO attacks, Canon achieves 100% recovery success, restoring watermark verifiability while preserving task utility. Our code is available at https://anonymous.4open.science/r/anti-NSO-9874.
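The producer-consumer constraint at the heart of this abstract can be shown on a toy two-layer network: injecting a dummy hidden unit (a new producer row) forces a matching zero column in the consumer layer to keep the function identical, and a Canon-style pass can detect and prune exactly those zero-output channels. This is a minimal sketch under simplifying assumptions (one linear producer, one consumer, zero-column dummies); it is not the paper's actual probing or canonicalization algorithm.

```python
# Hypothetical sketch of NSO-style dummy-neuron injection and a
# canonicalization pass that undoes it. Illustrative only; the real
# Canon framework handles fan-out, add, and cat consumers globally.
import numpy as np

def forward(W1, W2, x):
    """Two-layer network y = W2 @ relu(W1 @ x)."""
    return W2 @ np.maximum(W1 @ x, 0.0)

def inject_dummy(W1, W2, rng):
    """Add one dummy hidden unit: the producer (W1) gains an arbitrary
    row, and the consumer (W2) gains a zero column so the extra unit
    never contributes to the output (functional equivalence)."""
    W1p = np.vstack([W1, rng.normal(size=(1, W1.shape[1]))])
    W2p = np.hstack([W2, np.zeros((W2.shape[0], 1))])
    return W1p, W2p

def canonicalize(W1, W2, tol=1e-12):
    """Detect dummy channels (all outgoing weights near zero) and remove
    them from the producer and the downstream consumer together, keeping
    the layouts consistent."""
    keep = np.linalg.norm(W2, axis=0) > tol
    return W1[keep], W2[:, keep]
```

The key point the abstract makes appears even in this toy: editing the producer alone breaks shape compatibility, so any structural obfuscation (or its recovery) must rewrite all consumers of the edited channel in lockstep.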
cnn · University of Technology Sydney · Independent · Tsinghua University +1 more