How Vulnerable Are Edge LLMs?
Ao Ding 1, Hongzong Li 2, Zi Liang 3, Zhanpeng Shi 4, Shuxin Zhuang 5,6, Shiqin Tang 6, Rong Feng 5,6, Ping Lu 5
1 China University of Geosciences
2 Hong Kong University of Science and Technology
3 Hong Kong Polytechnic University
Published on arXiv
2603.23822
Model Theft
OWASP ML Top 10 — ML05
Model Theft
OWASP LLM Top 10 — LLM10
Key Finding
CLIQ consistently outperforms original queries across BERTScore, BLEU, and ROUGE metrics on quantized Qwen models (INT8/INT4), enabling efficient behavioral extraction under realistic query budgets despite quantization noise
CLIQ
Novel technique introduced
Large language models (LLMs) are increasingly deployed on edge devices under strict computation and quantization constraints, yet their security implications remain unclear. We study query-based knowledge extraction from quantized edge-deployed LLMs under realistic query budgets and show that, although quantization introduces noise, it does not remove the underlying semantic knowledge, allowing substantial behavioral recovery through carefully designed queries. To systematically analyze this risk, we propose **CLIQ** (**Cl**ustered **I**nstruction **Q**uerying), a structured query construction framework that improves semantic coverage while reducing redundancy. Experiments on quantized Qwen models (INT8/INT4) demonstrate that CLIQ consistently outperforms original queries across BERTScore, BLEU, and ROUGE, enabling more efficient extraction under limited budgets. These results indicate that quantization alone does not provide effective protection against query-based extraction, highlighting a previously underexplored security risk in edge-deployed LLMs.
Key Contributions
- First systematic study of extraction risks in quantized edge-deployed LLMs, showing quantization alone does not prevent behavioral theft
- CLIQ framework using semantic clustering to construct representative queries that maximize coverage and minimize redundancy under limited query budgets
- Empirical demonstration that structured query design is the key factor governing extractability, consistently outperforming naive querying across BERTScore, BLEU, and ROUGE metrics on INT8/INT4 quantized models
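The core idea behind CLIQ's query construction — cluster a large instruction pool in embedding space, then spend the query budget on one representative per cluster — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the k-means clustering, and the toy random embeddings are all assumptions standing in for whatever embedding model and clustering procedure the authors actually use.

```python
import numpy as np

def select_representative_queries(embeddings, budget, iters=20, seed=0):
    """Pick up to `budget` queries whose embeddings best cover the pool.

    Runs a simple k-means with k = budget over the instruction
    embeddings, then returns the index of the real query nearest each
    centroid (a hypothetical stand-in for CLIQ's clustering step).
    """
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    # initialize centroids from randomly chosen pool members
    centroids = embeddings[rng.choice(n, size=budget, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned members
        for k in range(budget):
            members = embeddings[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # choose the real query closest to each centroid, one per cluster
    dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=-1)
    return sorted(set(dists.argmin(axis=0)))

# toy pool: 100 instruction embeddings, budget of 5 queries
pool = np.random.default_rng(1).normal(size=(100, 16))
picked = select_representative_queries(pool, budget=5)
print(picked)
```

Selecting centroid-nearest queries (rather than the centroids themselves) keeps every issued query a real, well-formed instruction while still covering distinct semantic regions, which is how clustering reduces redundancy under a fixed budget.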
🛡️ Threat Analysis
The paper proposes CLIQ, a query-based attack framework that extracts behavioral knowledge from quantized edge-deployed LLMs by constructing structured queries and training a student model to replicate the target model's responses. This constitutes model theft through black-box query access: the attacker directly steals the model's learned functionality and intellectual property without ever touching the weights.
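The extraction loop described above reduces to collecting (query, response) pairs from the black-box target under a budget, then fine-tuning a student on them. A minimal sketch of the collection step, under assumptions: `target_model` is any callable taking a prompt string and returning a response string (in a real attack this would be the edge device's serving API), and the budget check is our own addition to reflect the paper's limited-budget setting.

```python
def extract_behavior(target_model, queries, budget):
    """Query the black-box target and collect (prompt, response) pairs
    to use as a fine-tuning dataset for a student model. Stops once
    `budget` queries have been spent."""
    dataset = []
    for q in queries[:budget]:
        dataset.append({"prompt": q, "response": target_model(q)})
    return dataset

# stand-in target: a trivial callable in place of a quantized LLM
fake_target = lambda p: p.upper()
pairs = extract_behavior(fake_target, ["what is ram?", "define cache"], budget=2)
print(len(pairs))
```

The security implication is that nothing in this loop depends on the target's precision: INT8 or INT4 responses still carry the semantic behavior the student needs, which is why quantization alone offers no protection.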