HAMLOCK: HArdware-Model LOgically Combined attacK

Sanskar Amgain 1, Daniel Lobo 2, Atri Chatterjee 2, Swarup Bhunia 2, Fnu Suya 1

0 citations · 76 references · arXiv

Published on arXiv · 2510.19145

Model Poisoning

OWASP ML Top 10 — ML10

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Key Finding

Achieves a near-perfect attack success rate with a negligible clean-accuracy drop while bypassing all evaluated state-of-the-art model-level backdoor defenses without adaptive optimization; the hardware Trojan's area and power overhead is as low as 0.01%, easily masked by process and environmental noise.

HAMLOCK

Novel technique introduced


The growing use of third-party hardware accelerators (e.g., FPGAs, ASICs) for deep neural networks (DNNs) introduces new security vulnerabilities. Conventional model-level backdoor attacks, which only poison a model's weights to misclassify inputs with a specific trigger, are often detectable because the entire attack logic is embedded within the model (i.e., software), creating a traceable layer-by-layer activation path. This paper introduces the HArdware-Model Logically Combined Attack (HAMLOCK), a far stealthier threat that distributes the attack logic across the hardware-software boundary. The software (model) is only minimally altered, tuning the activations of a few neurons to produce uniquely high activation values when a trigger is present. A malicious hardware Trojan detects those unique activations by monitoring the corresponding neurons' most significant bit or 8-bit exponents and triggers another hardware Trojan to directly manipulate the final output logits for misclassification. This decoupled design is highly stealthy: the model itself contains no complete backdoor activation path as in conventional attacks and hence appears fully benign. Empirically, across benchmarks including MNIST, CIFAR-10, GTSRB, and ImageNet, HAMLOCK achieves a near-perfect attack success rate with a negligible clean-accuracy drop. More importantly, HAMLOCK circumvents state-of-the-art model-level defenses without any adaptive optimization. The hardware Trojan is also undetectable, incurring area and power overheads as low as 0.01%, which are easily masked by process and environmental noise. Our findings expose a critical vulnerability at the hardware-software interface, demanding new cross-layer defenses against this emerging threat.
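The "uniquely high activation" signal is cheap for hardware to check because, in IEEE-754 float32, a value's magnitude is summarized by the 8-bit exponent field the abstract mentions. A minimal sketch of extracting that field (my own illustration, not code from the paper):

```python
import struct

def float32_exponent(x: float) -> int:
    """Return the biased 8-bit exponent field of an IEEE-754 float32 value."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return (bits >> 23) & 0xFF

# A "uniquely high" activation has an exponent well above the range seen on
# clean inputs, so a comparator on these 8 bits (or just the sign/MSB region
# of a fixed-point value) suffices as a trigger detector in hardware.
print(float32_exponent(1.5))     # 127: biased exponent of 2^0
print(float32_exponent(3000.0))  # 138: biased exponent of 2^11
```

In hardware terms, the equivalent check is a few-bit comparator tapped off the accelerator's activation datapath, which is consistent with the reported ~0.01% overhead.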


Key Contributions

  • Novel hardware-software split backdoor attack (HAMLOCK) that distributes attack logic across model weights and a hardware Trojan, making the model itself appear fully benign to all software-level defenses
  • Minimal model perturbation technique that tunes a small number of neurons to emit uniquely high activation values (detectable via MSB or 8-bit exponent) only when a trigger is present
  • Hardware Trojan design that monitors neuron activations and directly manipulates final output logits for misclassification, incurring as little as 0.01% area and power overhead
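The split logic in the contributions above can be simulated end-to-end in software. The sketch below is a hypothetical illustration, not the paper's implementation: the target class, threshold value, and logit-override policy are my assumptions, and the threshold comparison stands in for the MSB/exponent comparator the Trojan would use.

```python
# Hypothetical simulation of HAMLOCK's hardware-software split (illustrative
# names and values, not from the paper). The "model" side only emits a few
# unusually high activations on triggered inputs; the "hardware" side checks
# them and, if tripped, overwrites the final logits.

TARGET_CLASS = 7
TRIGGER_THRESHOLD = 2.0 ** 10  # stand-in for an MSB/exponent comparator

def trojan_forward(logits: list, monitored_activations: list) -> list:
    """Return final logits; the Trojan rewrites them only when every
    monitored neuron fires with a uniquely high activation."""
    triggered = all(a >= TRIGGER_THRESHOLD for a in monitored_activations)
    if not triggered:
        return logits                      # clean path: output untouched
    out = [min(logits)] * len(logits)      # suppress all classes...
    out[TARGET_CLASS] = max(logits) + 1.0  # ...and force the target class
    return out

base_logits = [0.1, 2.3, 0.5, 0.2, 0.1, 0.0, 0.4, 0.3, 0.1, 0.2]
clean = trojan_forward(base_logits, [1.2, 0.8])          # argmax stays 1
poisoned = trojan_forward(base_logits, [3000.0, 4096.0]) # argmax becomes 7
```

Note that the clean path returns the model's logits unchanged, which is why model-level defenses that inspect weights or clean-input activations see nothing anomalous.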

🛡️ Threat Analysis

AI Supply Chain Attacks

The attack requires a malicious third-party hardware accelerator (FPGA/ASIC) containing a hardware Trojan — an explicit attack on ML hardware supply chain infrastructure. The hardware component is not merely motivational; it is half the attack mechanism, and the threat model assumes a malicious hardware vendor supplying DNN inference hardware.

Model Poisoning

HAMLOCK is fundamentally a backdoor/trojan attack: the model is minimally tuned to produce unique activations on a trigger, causing targeted misclassification. The core contribution is the stealthy backdoor mechanism with trigger-activated behavior, directly placing it in the Model Poisoning/Backdoors category.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, training_time, inference_time, targeted, digital
Datasets
MNIST, CIFAR-10, GTSRB, ImageNet
Applications
image classification, DNN inference on hardware accelerators