DyLoC: A Dual-Layer Architecture for Secure and Trainable Quantum Machine Learning Under Polynomial-DLA Constraint
Chenyi Zhang, Tao Shang, Chao Guo, Ruohan He
Published on arXiv
arXiv:2512.00699
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
DyLoC increases gradient reconstruction error by 13 orders of magnitude and blocks snapshot inversion (reconstruction MSE > 2.0) while maintaining baseline convergence with a final training loss of 0.186
DyLoC
Novel technique introduced
Variational quantum circuits face a critical trade-off between privacy and trainability: the high expressivity required for robust privacy induces exponentially large dynamical Lie algebras (DLAs), which inevitably lead to barren plateaus, while trainable models restricted to polynomial-sized algebras remain transparent to algebraic attacks. To resolve this impasse, the authors propose DyLoC, a dual-layer architecture built on an orthogonal decoupling strategy: trainability is anchored to a polynomial-DLA ansatz while privacy is externalized to the input and output interfaces. Specifically, Truncated Chebyshev Graph Encoding (TCGE) thwarts snapshot inversion, and Dynamic Local Scrambling (DLS) obfuscates gradients. Experiments demonstrate that DyLoC maintains baseline-level convergence with a final loss of 0.186, increases the gradient reconstruction error by 13 orders of magnitude over the baseline, and blocks snapshot inversion attacks, driving the reconstruction mean squared error above 2.0. These results confirm that DyLoC establishes a verifiable pathway toward secure and trainable quantum machine learning.
Key Contributions
- Orthogonal decoupling strategy that separates privacy protection (input/output interfaces) from trainability (polynomial-DLA ansatz), breaking the privacy-trainability trade-off in quantum ML
- Truncated Chebyshev Graph Encoding (TCGE) that violates the separability assumption required by snapshot inversion algorithms while maintaining constant circuit depth
- Dynamic Local Scrambling (DLS) that obfuscates the linear gradient-snapshot relationship, increasing gradient reconstruction error by 13 orders of magnitude over baseline
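The TCGE idea above can be illustrated with a hypothetical classical analogue (the paper's actual construction is a constant-depth quantum graph encoding; the function name `chebyshev_features` and the degree parameter here are my own). Mapping each input through a stack of Chebyshev polynomials T_0, …, T_d yields a nonlinear, redundant representation in which the inputs are no longer individually separable, which is the kind of assumption snapshot-inversion algorithms rely on:

```python
import numpy as np

def chebyshev_features(x, degree=4):
    """Map each scalar in x to its truncated Chebyshev expansion
    [T_0(x), T_1(x), ..., T_degree(x)].

    Illustrative classical sketch only -- not the paper's quantum TCGE.
    """
    feats = [np.polynomial.chebyshev.chebval(x, [0] * k + [1])
             for k in range(degree + 1)]
    return np.stack(feats, axis=-1)

x = np.array([0.3, -0.7])   # two private input features
phi = chebyshev_features(x)
print(phi.shape)            # each input becomes a degree+1 feature vector
print(phi[0])               # [T_0(0.3), ..., T_4(0.3)]; T_2(0.3) = 2*0.3^2 - 1
```

Because each T_k is a degree-k polynomial, inverting the encoding requires solving a nonlinear system rather than undoing a linear map, a toy version of the separability violation TCGE achieves at the circuit level.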
🛡️ Threat Analysis
The core threat model is an adversary reconstructing input training data from publicly observed training gradients (gradient leakage) and intermediate quantum state snapshots (snapshot inversion). Both the Weak Privacy Breach (state reconstruction from gradients) and the Strong Privacy Breach (input-data inversion from snapshots) are model inversion / gradient reconstruction attacks. DyLoC defends against these via TCGE and DLS, evaluated by gradient reconstruction error and snapshot-inversion MSE — exactly the adversarial data-reconstruction threat ML03 covers.
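To see why exposed gradients are dangerous, here is a minimal classical sketch (not from the paper) of gradient leakage for a linear model. For squared loss on y = Wx, the weight gradient is the rank-1 matrix outer(residual, x), so an eavesdropper who observes only the gradient recovers the private input's direction from its top right-singular vector — the linear gradient-snapshot relationship DLS is designed to obfuscate:

```python
import numpy as np

# Toy gradient-leakage attack on a linear model (classical analogue only).
rng = np.random.default_rng(0)
x = rng.normal(size=4)           # private input sample
W = rng.normal(size=(3, 4))      # model weights (known to the attacker)
y_true = rng.normal(size=3)

residual = W @ x - y_true        # dL/dy for L = 0.5 * ||W x - y||^2
grad_W = np.outer(residual, x)   # leaked gradient; note it is rank-1

# Attacker: the top right-singular vector of the gradient is x
# up to sign and scale.
_, _, vt = np.linalg.svd(grad_W)
x_hat = vt[0]
cos_sim = abs(x_hat @ x) / np.linalg.norm(x)
print(cos_sim)                   # ≈ 1.0: input direction fully recovered
```

A defense in the spirit of DLS would have to break this clean algebraic link between the observed gradient and the underlying data, which is what the paper's 13-orders-of-magnitude reconstruction-error result quantifies.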