LLA: Enhancing Security and Privacy for Generative Models with Logic-Locked Accelerators
You Li, Guannan Zhao, Yuhao Ju, Yunqi He, Jie Gu, Hai Zhou
Published on arXiv: 2512.22307
Model Theft (OWASP ML Top 10: ML05)
AI Supply Chain Attacks (OWASP ML Top 10: ML06)
Key Finding
LLA withstands oracle-guided key optimization attacks while incurring less than 0.1% computational overhead for 7,168 key bits.
LLA (Logic-Locked Accelerators): novel technique introduced
We introduce LLA, an effective intellectual property (IP) protection scheme for generative AI models. LLA leverages the synergy between hardware and software to defend against various supply chain threats, including model theft, model corruption, and information leakage. On the software side, it embeds key bits into neurons that can trigger outliers to degrade performance and applies invariance transformations to obscure the key values. On the hardware side, it integrates a lightweight locking module into the AI accelerator while maintaining compatibility with various dataflow patterns and toolchains. An accelerator with a pre-stored secret key acts as a license to access the model services provided by the IP owner. The evaluation results show that LLA can withstand a broad range of oracle-guided key optimization attacks, while incurring a minimal computational overhead of less than 0.1% for 7,168 key bits.
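The software-side obfuscation relies on function-preserving invariance transformations. A minimal sketch, assuming the standard ReLU scaling invariance (the paper's exact transformations are not detailed here): scaling one hidden neuron's incoming weights by s > 0 and its outgoing weights by 1/s leaves the network's outputs unchanged, so inspecting the stored weights no longer reveals which neurons were altered to embed key bits.

```python
# Hypothetical illustration of a weight-space invariance transformation.
# Not the paper's scheme; a generic ReLU scaling invariance for intuition.

relu = lambda v: [max(0.0, u) for u in v]

def matvec(W, x):
    # Plain matrix-vector product: one dot product per output row.
    return [sum(w * u for w, u in zip(row, x)) for row in W]

def forward(W1, W2, x):
    # Two-layer ReLU MLP without biases.
    return matvec(W2, relu(matvec(W1, x)))

W1 = [[0.5, -1.0], [2.0, 0.3], [-0.7, 1.2]]   # 3 hidden units x 2 inputs
W2 = [[1.0, -0.4, 0.8]]                        # 1 output x 3 hidden units
x  = [0.9, -0.2]

s = 5.0                                        # positive scale, hidden unit 1
W1t = [row[:] for row in W1]
W2t = [row[:] for row in W2]
W1t[1] = [w * s for w in W1t[1]]               # scale incoming weights by s
W2t[0][1] /= s                                 # compensate outgoing weight by 1/s

print(forward(W1, W2, x))                      # both calls print the same
print(forward(W1t, W2t, x))                    # output up to float rounding
```

Because ReLU commutes with positive scaling, the two factors of s cancel across the layer boundary, yet the transformed weight matrices look unrelated to the originals.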
Key Contributions
- Hardware-software co-design where a logic-locked AI accelerator with a pre-stored secret key acts as a hardware license, preventing unauthorized model access without the correct key.
- Software-side key embedding that triggers performance-degrading outliers in neurons and applies invariance transformations to obscure key values, resisting reverse-engineering.
- Demonstrated resistance to oracle-guided key optimization attacks with less than 0.1% computational overhead for 7,168 key bits.
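The hardware side builds on logic locking. A minimal behavioral sketch of a generic XOR key gate (an assumption for illustration; the paper's lightweight locking module is not specified here): the netlist is built against a secret key so that, with the correct pre-stored key, the key gates are transparent, while any wrong key inverts internal signals and corrupts the computation.

```python
# Hypothetical XOR key-gating sketch, simulated at the functional level.
# The accelerator ships with SECRET_KEY pre-stored; models served by the
# IP owner only compute correctly on hardware holding that key.

SECRET_KEY = 0b1011  # pre-stored on the licensed accelerator

def locked_add(a, b, key):
    # The adder's output wires pass through XOR key gates. The circuit
    # was synthesized against SECRET_KEY, so only that key makes the
    # gates cancel out and restore the true sum.
    return (a + b) ^ SECRET_KEY ^ key

print(locked_add(3, 4, SECRET_KEY))  # -> 7 (correct key: gates cancel)
print(locked_add(3, 4, 0b0000))      # -> 12 (wrong key: corrupted result)
```

In a real design the key gates are scattered across internal wires and the logic is resynthesized, so the key cannot be read off by inspecting the netlist the way this two-XOR sketch suggests.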
🛡️ Threat Analysis
The core contribution is preventing unauthorized use and theft of generative AI model IP: an accelerator with a pre-stored secret key acts as a hardware license, and software-side key embedding combined with invariance transformations resists key-extraction attacks. This makes LLA a direct model-theft defense.
The paper explicitly frames its threat model around supply chain attacks (model theft, model corruption, and information leakage during distribution); the hardware-software locking mechanism is designed to protect the model as it traverses an untrusted supply chain from the IP owner to the end user.