defense 2026

Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI

Edouard Lansiaux 1,2

0 citations


Published on arXiv

2603.03398

Model Inversion Attack

OWASP ML Top 10 — ML03

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Achieves 100% rejection of Byzantine norm-violating gradient updates while maintaining 100% model accuracy, versus a drop to 23% accuracy under standard FL with Byzantine clients.

ZKFL-PQ

Novel technique introduced


Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, the exchange of model updates exposes critical vulnerabilities: gradient inversion attacks can reconstruct patient information, Byzantine clients can poison the global model, and the Harvest Now, Decrypt Later (HNDL) threat renders today's encrypted traffic vulnerable to future quantum adversaries. We introduce ZKFL-PQ (Zero-Knowledge Federated Learning, Post-Quantum), a three-tiered cryptographic protocol that hybridizes (i) ML-KEM (FIPS 203) for quantum-resistant key encapsulation, (ii) lattice-based Zero-Knowledge Proofs for verifiable norm-constrained gradient integrity, and (iii) BFV homomorphic encryption for privacy-preserving aggregation. We formalize the security model and prove correctness and zero-knowledge properties under the Module-LWE, Ring-LWE, and SIS assumptions in the classical random oracle model. We evaluate ZKFL-PQ on synthetic medical imaging data across 5 federated clients over 10 training rounds. Our protocol achieves 100% rejection of norm-violating updates while maintaining model accuracy at 100%, compared to a catastrophic drop to 23% under standard FL. The computational overhead (a factor of roughly 20×) is analyzed and shown to be compatible with clinical research workflows operating on daily or weekly training cycles. We emphasize that the current defense guarantees rejection of large-norm malicious updates; robustness against subtle low-norm or directional poisoning remains future work.
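The acceptance rule that the lattice-based ZKP attests to can be stated compactly (our notation; the public bound $B$ is an assumed symbol, not taken from the paper):

$$\textsf{accept}(\Delta_i) \iff \lVert \Delta_i \rVert_2 \le B$$

where $\Delta_i$ is client $i$'s gradient update. The zero-knowledge property means the aggregation server can verify this inequality without ever learning $\Delta_i$ itself.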


Key Contributions

  • Three-tiered cryptographic protocol (ML-KEM + lattice ZKPs + BFV homomorphic encryption) addressing gradient inversion, Byzantine poisoning, and Harvest-Now-Decrypt-Later threats simultaneously
  • Formal security proofs for correctness and zero-knowledge properties under Module-LWE, Ring-LWE, and SIS hardness assumptions
  • Experimental validation on synthetic medical imaging: 100% rejection of norm-violating Byzantine updates vs. catastrophic accuracy drop to 23% under unprotected FL

🛡️ Threat Analysis

Data Poisoning Attack

Byzantine clients that submit adversarial gradient updates to poison the global model are the second major threat. The lattice-based ZKP norm-constraint component achieves 100% rejection of norm-violating updates from malicious clients, directly defending against training-time data poisoning via Byzantine fault injection in FL.
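As an illustrative sketch only: the paper's defense has each client prove, in zero knowledge, that its update satisfies a public norm bound, and the server discards updates without a valid proof. The snippet below shows the plaintext check that such a proof attests to, with a toy FedAvg over the survivors; all names here (`NORM_BOUND`, `accept`, `aggregate`) are hypothetical, not from the paper.

```python
import numpy as np

NORM_BOUND = 5.0  # public bound B (assumed value for illustration)

def accept(update: np.ndarray, bound: float = NORM_BOUND) -> bool:
    """Acceptance rule the ZKP certifies: the L2 norm stays under the bound."""
    return float(np.linalg.norm(update)) <= bound

def aggregate(updates):
    """FedAvg restricted to updates that pass the norm check."""
    kept = [u for u in updates if accept(u)]
    return np.mean(kept, axis=0), len(kept)

# Four honest clients with small updates, one Byzantine client with a
# large-norm poisoned update.
honest = [np.random.default_rng(i).normal(0.0, 0.1, size=4) for i in range(4)]
byzantine = [100.0 * np.ones(4)]

avg, n_kept = aggregate(honest + byzantine)
print(n_kept)  # → 4: only the honest updates survive the filter
```

Note this only captures rejection of large-norm updates; as the abstract states, low-norm or directional poisoning is outside the guarantee.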

Model Inversion Attack

Gradient inversion attacks — where an adversary reconstructs patient training data from shared FL model updates — are a primary motivating threat. BFV homomorphic encryption and ZKPs are specifically deployed to prevent the aggregation server or eavesdroppers from reconstructing individual client gradients (and thus private patient data).
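The privacy property that BFV homomorphic aggregation provides can be illustrated with a much simpler stand-in, pairwise additive masking: the server sees only masked per-client vectors, yet the masks cancel in the sum, so it recovers the aggregate and nothing else. This is a sketch of the property, not the paper's BFV pipeline; `masked` and the mask-sharing scheme are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Each pair (i, j), i < j, shares a random mask r_ij; client i adds it and
# client j subtracts it, so all masks cancel in the aggregate.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked(i):
    """What client i sends: its update plus/minus all pairwise masks."""
    m = updates[i].copy()
    for (a, b), r in masks.items():
        if a == i:
            m += r
        elif b == i:
            m -= r
    return m

server_view = [masked(i) for i in range(n_clients)]  # gradients hidden
recovered_sum = np.sum(server_view, axis=0)          # masks cancel out
assert np.allclose(recovered_sum, np.sum(updates, axis=0))
```

In ZKFL-PQ the analogous guarantee comes from BFV ciphertexts being summed homomorphically, so individual client gradients are never exposed to the server or to eavesdroppers.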


Details

Domains
federated-learning, vision
Model Types
federated
Threat Tags
training_time, white_box
Datasets
synthetic medical imaging
Applications
federated learning, medical imaging, clinical ai