
Nondeterminism-Aware Optimistic Verification for Floating-Point Neural Networks

Jianzhu Yao 1, Hongxu Su 2, Taobo Liao 3, Zerui Cheng 1, Huan Zhang 3, Xuechao Wang 2, Pramod Viswanath 1

2 citations · 61 references · arXiv (Cornell University)


Published on arXiv · 2510.16028

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Empirical acceptance thresholds are 10²–10³× tighter than theoretical IEEE-754 bounds, and bound-aware adversarial attacks achieve 0% success, with only 0.3% latency overhead on Qwen3-8B inference.

NAO (Nondeterminism-Aware Optimistic verification)

Novel technique introduced


Neural networks increasingly run on hardware outside the user's control (cloud GPUs, inference marketplaces), yet ML-as-a-Service reveals little about what actually ran or whether returned outputs faithfully reflect the intended inputs. Users lack recourse against service downgrades (model swaps, quantization, graph rewrites, or discrepancies like altered ad embeddings). Verifying outputs is hard because floating-point (FP) execution on heterogeneous accelerators is inherently nondeterministic, and existing approaches are either impractical for real FP neural networks or reintroduce vendor trust. We present NAO, a Nondeterminism-Aware Optimistic verification protocol that accepts outputs within principled operator-level acceptance regions rather than requiring bitwise equality. NAO combines two error models: (i) sound per-operator IEEE-754 worst-case bounds and (ii) tight empirical percentile profiles calibrated across hardware. Discrepancies trigger a Merkle-anchored, threshold-guided dispute game that recursively partitions the computation graph until a single operator remains, where adjudication reduces to a lightweight theoretical-bound check or a small honest-majority vote against empirical thresholds. Unchallenged results finalize after a challenge window, without requiring trusted hardware or deterministic kernels. We implement NAO as a PyTorch-compatible runtime and a contract layer currently deployed on the Ethereum Holesky testnet. The runtime instruments graphs, computes per-operator bounds, and runs unmodified vendor kernels in FP32 with negligible overhead (0.3% on Qwen3-8B). Across CNNs, Transformers, and diffusion models on A100, H100, RTX6000, and RTX4090 GPUs, empirical thresholds are $10^2$–$10^3\times$ tighter than theoretical bounds, and bound-aware adversarial attacks achieve 0% success. NAO reconciles scalability with verifiability for real-world heterogeneous ML compute.
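The core acceptance test can be illustrated with a small sketch. The function below checks a candidate operator output against a reference re-execution using two tolerances: a sound worst-case bound and a tighter empirical threshold. The names `accept_operator`, `theo_bound`, and `emp_threshold` are illustrative assumptions, not the paper's actual interface, and the toy values here stand in for NAO's calibrated per-operator profiles.

```python
import numpy as np

def accept_operator(ref_out, cand_out, theo_bound, emp_threshold):
    """Check a candidate operator output against the verifier's reference.

    theo_bound:    sound IEEE-754 worst-case error bound for this operator
    emp_threshold: empirical percentile threshold calibrated across GPUs
    (illustrative parameters; the paper derives these per operator).
    """
    max_dev = float(np.max(np.abs(ref_out.astype(np.float64)
                                  - cand_out.astype(np.float64))))
    # A deviation beyond the empirical threshold is grounds for a dispute;
    # only a violation of the sound theoretical bound proves misbehavior
    # outright. Empirical thresholds are typically 10^2-10^3x tighter.
    return {
        "within_empirical": bool(max_dev <= emp_threshold),
        "within_theoretical": bool(max_dev <= theo_bound),
        "max_deviation": max_dev,
    }

# Toy example: two FP32 evaluations of the same matmul, where the second
# simulates cross-hardware rounding by accumulating in float64.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)
y_ref = a @ b
y_cand = (a.astype(np.float64) @ b.astype(np.float64)).astype(np.float32)
result = accept_operator(y_ref, y_cand, theo_bound=1e-3, emp_threshold=1e-5)
```

In this sketch an honest cross-hardware run lands well inside the loose theoretical bound, which is the regime where the tighter empirical threshold does the real discriminating work.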


Key Contributions

  • Operator-level acceptance regions combining sound IEEE-754 worst-case bounds with tight empirical percentile profiles calibrated across heterogeneous GPUs, enabling nondeterminism-tolerant output verification
  • Merkle-anchored, threshold-guided interactive dispute game that recursively bisects the computation graph until a single operator remains for lightweight adjudication — no trusted hardware or deterministic kernels required
  • PyTorch-compatible runtime with negligible overhead (0.3% on Qwen3-8B) and an Ethereum smart-contract coordinator; empirical thresholds are 10²–10³× tighter than theoretical bounds and bound-aware adversarial attacks achieve 0% success
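The dispute game in the second bullet can be sketched as a binary search over committed intermediate states. The sketch below linearizes the computation graph into a topologically ordered list of operators and uses plain SHA-256 commitments in place of the paper's Merkle-anchored trace; `dispute_bisect`, `honest_hash`, and `claimed_hash` are hypothetical names introduced for illustration.

```python
import hashlib
from typing import Callable, List

def commit(data: bytes) -> str:
    """Stand-in for a Merkle leaf commitment to an intermediate state."""
    return hashlib.sha256(data).hexdigest()

def dispute_bisect(ops: List[str],
                   honest_hash: Callable[[int], str],
                   claimed_hash: Callable[[int], str]) -> int:
    """Find the first operator where two execution traces diverge.

    honest_hash(i) / claimed_hash(i) return each party's commitment to
    the state after operator i. Binary search narrows the disagreement
    to a single operator, which is then adjudicated against its
    acceptance region (theoretical-bound check or honest-majority vote).
    """
    lo, hi = 0, len(ops) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if honest_hash(mid) == claimed_hash(mid):
            lo = mid + 1   # traces still agree: divergence is later
        else:
            hi = mid       # traces already differ: divergence is here or earlier
    return lo

# Toy trace of 8 operators; the claimed trace deviates from op 5 onward.
honest = [commit(f"state-{i}".encode()) for i in range(8)]
claimed = honest[:5] + [commit(f"tampered-{i}".encode()) for i in range(5, 8)]
disputed = dispute_bisect([f"op{i}" for i in range(8)],
                          honest.__getitem__, claimed.__getitem__)
# disputed == 5: a single operator remains for lightweight adjudication
```

Each round of the real protocol exchanges only commitments, so the on-chain cost grows logarithmically with the graph size rather than with the full computation.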

🛡️ Threat Analysis

Output Integrity Attack

The paper's primary contribution is a verifiable inference scheme — proving that model outputs faithfully reflect the intended model and inputs, and detecting tampering (model swaps, unauthorized quantization, graph rewrites, altered embeddings) by cloud/MLaaS providers. ML09 explicitly includes 'verifiable inference schemes (proving outputs weren't tampered with)'. The adversarial analysis (bound-aware attacks achieving 0% success) further targets output integrity against adversaries who try to deviate while evading detection.


Details

Domains
vision, nlp, generative
Model Types
cnn, transformer, llm, diffusion
Threat Tags
black_box, inference_time
Datasets
Qwen3-8B, CNNs on A100/H100/RTX6000/RTX4090, Transformer models, diffusion models
Applications
ml-as-a-service, cloud inference, inference marketplaces, llm serving