Dong Li

h-index: 1 · 2 citations · 3 papers (total)

Papers in Database (1)

attack · arXiv · Jan 6, 2026

Adversarial Contrastive Learning for LLM Quantization Attacks

Dinghong Song, Zhiwei Xu, Hai Wan et al. · University of California · Tsinghua University

Gradient-based contrastive learning attack embeds LLM backdoors that stay dormant in full precision but activate on quantization
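The summarized mechanism — weights that behave benignly in full precision but flip behavior after round-to-nearest quantization — can be illustrated with a toy sketch. This is not the paper's gradient-based attack; it only demonstrates (under assumed 4-bit symmetric quantization) how per-weight rounding errors can be made to push a score in the same direction and cross a decision boundary:

```python
import numpy as np

# Toy illustration of quantization-activated behavior (NOT the paper's
# actual method): weights are placed so every round-to-nearest error
# pushes the score the same way, flipping its sign after quantization.

def quantize(w, n_bits=4, w_max=1.0):
    """Uniform symmetric round-to-nearest quantization."""
    scale = w_max / (2 ** (n_bits - 1) - 1)  # step is 1/7 for 4 bits
    return np.round(w / scale) * scale

s = 1.0 / 7.0                       # the 4-bit quantization step
x = np.array([1.0, 1.0, 1.0])
# Each weight sits 0.06 above a grid point, so quantization subtracts
# ~0.06 from every coordinate (total shift -0.18, more than one step).
w_fp = np.array([s + 0.06, s + 0.06, -3 * s + 0.06])

print(np.sign(x @ w_fp))            # full precision: +1 (benign)
print(np.sign(x @ quantize(w_fp)))  # quantized:      -1 (triggered)
```

An actual attack would search for such weight configurations by gradient descent over a full model rather than placing them by hand.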

Model Poisoning · nlp
1 citation · PDF