Defense · 2025

SLIP-SEC: Formalizing Secure Protocols for Model IP Protection

Racchit Jain, Satya Lokam, Yehonathan Refael, Adam Hakim, Lev Greenberg, Jay Tenenbaum

0 citations · 17 references · arXiv

Published on arXiv: 2510.24999

OWASP ML Top 10 — ML05: Model Theft

OWASP LLM Top 10 — LLM10: Model Theft

Key Finding

Proves that additive weight matrix decomposition with masking achieves information-theoretic security against honest-but-curious adversaries and negligible soundness error against malicious adversaries

SLIP (Secure LLM Inference Protocol)

Novel technique introduced


Large Language Models (LLMs) represent valuable intellectual property (IP), reflecting significant investments in training data, compute, and expertise. Deploying these models on partially trusted or insecure devices introduces substantial risk of model theft, making it essential to design inference protocols with provable security guarantees. We present the formal framework and security foundations of SLIP, a hybrid inference protocol that splits model computation between a trusted and an untrusted resource. We define and analyze the key notions of model decomposition and hybrid inference protocols, and introduce formal properties including safety, correctness, efficiency, and t-soundness. We construct secure inference protocols based on additive decompositions of weight matrices, combined with masking and probabilistic verification techniques. We prove that these protocols achieve information-theoretic security against honest-but-curious adversaries, and provide robustness against malicious adversaries with negligible soundness error. This paper focuses on the theoretical underpinnings of SLIP: precise definitions, formal protocols, and proofs of security. Empirical validation and decomposition heuristics appear in the companion SLIP paper. Together, the two works provide a complete account of securing LLM IP via hybrid inference, bridging both practice and theory.
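The abstract's core construction is an additive decomposition of each weight matrix: the trusted resource holds one share, the untrusted device holds the other, and their partial outputs sum to the full layer output. Below is a minimal numpy sketch of that idea. It is a toy for correctness only: the function names (`decompose`, `hybrid_forward`) are illustrative, the Gaussian mask stands in for the paper's exact masking scheme, and the real protocol's decomposition heuristics (which weights go where) are not modeled here.

```python
import numpy as np

def decompose(W, rng):
    """Additively split W into a random mask and its complement.
    Neither share alone determines W (one-time-pad style; the
    paper's masking scheme gives the actual security guarantee)."""
    R = rng.standard_normal(W.shape)   # random share, kept on the trusted resource
    return R, W - R                    # (W_trusted, W_untrusted)

def hybrid_forward(x, W_trusted, W_untrusted):
    """Each party applies only its own share; summing the partial
    outputs reconstructs the full layer output x @ W exactly."""
    y_trusted = x @ W_trusted        # computed on the trusted resource
    y_untrusted = x @ W_untrusted    # computed on the untrusted device
    return y_trusted + y_untrusted
```

Correctness follows from linearity: x @ (R + (W - R)) = x @ W, so splitting the weights does not change the model's output.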


Key Contributions

  • Formal definitions of hybrid inference protocols for LLMs with properties including safety, correctness, efficiency, and t-soundness
  • Secure inference protocol based on additive decomposition of weight matrices combined with masking techniques, achieving information-theoretic security against honest-but-curious adversaries
  • Formal proof of robustness against malicious adversaries with negligible soundness error via probabilistic verification
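The third contribution, negligible soundness error against malicious adversaries via probabilistic verification, can be illustrated with a standard randomized check in the spirit of Freivalds' algorithm: instead of recomputing x @ W to audit the untrusted party's claimed output y, the verifier tests it against t random 0/1 vectors, driving the soundness error below 2^-t. This is a generic sketch of the technique class, not the paper's specific verification protocol; `freivalds_check` and its parameters are illustrative names.

```python
import numpy as np

def freivalds_check(x, W, y, t=20, rng=None):
    """Probabilistically verify the claim y == x @ W.
    Each round multiplies both sides by a random 0/1 vector r:
    a wrong y passes a round with probability at most 1/2, so
    t independent rounds give soundness error at most 2**-t."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(t):
        r = rng.integers(0, 2, size=(W.shape[1], 1)).astype(float)
        # x @ (W @ r) costs two matrix-vector products, cheaper
        # than recomputing the full product x @ W.
        if not np.allclose(x @ (W @ r), y @ r):
            return False
    return True
```

A correct y always passes; a tampered y is caught except with negligible probability, which is the shape of the t-soundness guarantee the paper formalizes.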

🛡️ Threat Analysis

Model Theft

SLIP is explicitly designed to prevent theft of LLM model weights (IP) when inference runs on a partially untrusted resource; the core threat model is an adversary attempting to extract the weight matrices. It defends against model extraction via additive decomposition, masking, and probabilistic verification, backed by formal security proofs.


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: grey_box, inference_time
Applications: llm inference on untrusted devices, model ip protection