DSperse: A Framework for Targeted Verification in Zero-Knowledge Machine Learning
Dan Ivanov, Tristan Freiberg, Shirin Shahabi, Jonathan Gold, Haruna Isah
Published on arXiv (arXiv:2508.06972)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Targeted slice-based verification of high-value subcomputations reduces ZKP overhead compared to full-model circuitization while preserving integrity guarantees for critical inference components.
DSperse
Novel technique introduced
DSperse is a modular framework for distributed machine learning inference with strategic cryptographic verification. Operating within the emerging paradigm of distributed zero-knowledge machine learning, DSperse avoids the high cost and rigidity of full-model circuitization by enabling targeted verification of strategically chosen subcomputations. These verifiable segments, or "slices", may cover part or all of the inference pipeline, with global consistency enforced through audit, replication, or economic incentives. This architecture supports a pragmatic form of trust minimization, localizing zero-knowledge proofs to the components where they provide the greatest value. We evaluate DSperse using multiple proving systems and report empirical results on memory usage, runtime, and circuit behavior under sliced and unsliced configurations. By allowing proof boundaries to align flexibly with the model's logical structure, DSperse supports scalable, targeted verification strategies suited to diverse deployment needs.
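To make the slicing idea concrete, here is a minimal sketch in plain Python. It is not the DSperse API: the pipeline, slice names, and `commit` function are hypothetical, and a SHA-256 hash stands in for a real zero-knowledge proof. The sketch only illustrates the structure described above: the pipeline is split into named slices, a proof artifact is attached to a strategically chosen high-value slice, and global consistency is enforced by chaining each slice's input commitment to the previous slice's output commitment.

```python
import hashlib
import json

def commit(obj):
    """Stand-in commitment: a SHA-256 hash of the JSON-serialized value.
    A real deployment would emit a ZK proof here, not a plain hash."""
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()

# A toy inference pipeline split into "slices": each slice is a named
# subcomputation over a list of numbers (hypothetical stand-ins for layers).
slices = [
    ("linear", lambda xs: [2 * x + 1 for x in xs]),
    ("relu",   lambda xs: [max(0, x) for x in xs]),   # the high-value slice
    ("argmax", lambda xs: [xs.index(max(xs))]),
]

# Only strategically chosen slices get a (stand-in) proof; in the paper's
# trust model the rest are covered by audit, replication, or incentives.
proved_slices = {"relu"}

def run_sliced(xs):
    transcript = []
    for name, fn in slices:
        ys = fn(xs)
        record = {"slice": name, "in": commit(xs), "out": commit(ys)}
        if name in proved_slices:
            record["proof"] = commit({"io": [record["in"], record["out"]]})
        transcript.append(record)
        xs = ys
    return xs, transcript

output, transcript = run_sliced([1, -3, 2])
# Global consistency: each slice's output commitment must match the next
# slice's input commitment -- a chain check an auditor could perform.
assert all(a["out"] == b["in"] for a, b in zip(transcript, transcript[1:]))
```

The point of the sketch is the flexible proof boundary: moving a slice name in or out of `proved_slices` changes which subcomputation carries a proof without touching the rest of the pipeline, mirroring how DSperse localizes proving cost to the components where it provides the greatest value.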
Key Contributions
- Slice-based architecture that enables targeted ZK verification of strategically selected ML inference subcomputations rather than full-model circuitization
- Empirical evaluation across multiple proving systems measuring memory, runtime, and circuit behavior under sliced vs. unsliced configurations
- A pragmatic trust minimization model for decentralized ML deployments where full ZKP-based verification is computationally infeasible
🛡️ Threat Analysis
DSperse is a verifiable inference scheme that uses zero-knowledge proofs to show that ML inference outputs were not tampered with on untrusted compute nodes, directly matching the 'verifiable inference schemes (proving outputs weren't tampered with)' use case under ML09.
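The ML09 threat can be sketched with a toy check, again with a plain hash standing in for a ZK proof (a hash only binds the reported output to what the prover attested; a real scheme would additionally prove the computation itself was performed correctly). The `untrusted_inference` model and all names here are hypothetical illustrations, not DSperse code.

```python
import hashlib
import json

def commit(obj):
    # Stand-in for a proof artifact: a plain hash (illustration only).
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()

def untrusted_inference(x, tamper=False):
    """A hypothetical compute node: runs a toy model slice, attests to the
    (input, output) pair, then optionally tampers with the reported output."""
    y = x * 2 + 1                       # toy model slice
    attestation = commit({"in": x, "out": y})
    if tamper:
        y = y + 99                      # ML09-style output integrity attack
    return y, attestation

def verify(x, y, attestation):
    # Verifier checks the reported (input, output) pair against the
    # attestation without re-running the model.
    return commit({"in": x, "out": y}) == attestation

y, p = untrusted_inference(5)
assert verify(5, y, p)                  # honest node: output accepted
y2, p2 = untrusted_inference(5, tamper=True)
assert not verify(5, y2, p2)            # tampered output: rejected
```

Swapping the hash for a zero-knowledge proof over the slice circuit is what turns this tamper-evidence check into the integrity guarantee the paper targets.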