
Semantic Consensus Decoding: Backdoor Defense for Verilog Code Generation

Guang Yang 1,2, Xing Hu 1, Xiang Chen 3, Xin Xia 1

0 citations · 52 references · arXiv (Cornell University)


Published on arXiv · 2602.04195

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

SCD reduces average backdoor attack success rate from 89% to under 3% across three representative attacks with negligible impact on Verilog generation quality.

Semantic Consensus Decoding (SCD)

Novel technique introduced


Large language models (LLMs) for Verilog code generation are increasingly adopted in hardware design, yet remain vulnerable to backdoor attacks in which adversaries inject malicious triggers during training to induce vulnerable hardware designs. Unlike patchable software vulnerabilities, hardware trojans become irreversible once fabricated, making remediation extremely costly or impossible. Existing active defenses require access to training data, which is impractical for third-party LLM users, while passive defenses struggle against semantically stealthy triggers that blend naturally into design specifications. In this paper, we hypothesize that under the joint requirements of effectiveness and stealthiness, attackers are strongly biased toward embedding triggers in non-functional requirements (e.g., style modifiers, quality descriptors) rather than in the functional specifications that determine hardware behavior. Exploiting this insight, we propose Semantic Consensus Decoding (SCD), an inference-time passive defense with two key components: (1) functional requirement extraction, which identifies the essential requirements in a user specification, and (2) consensus decoding, which adaptively fuses the output distributions conditioned on the full user specification and on the extracted functional requirements. When these distributions diverge significantly, SCD automatically suppresses the suspicious components. Extensive experiments with three representative backdoor attacks demonstrate that SCD reduces the average attack success rate from 89% to under 3% with negligible impact on generation quality.


Key Contributions

  • Hypothesis that backdoor attackers targeting LLMs are strongly biased toward embedding triggers in non-functional requirements (style, quality descriptors) rather than functional specifications, due to stealthiness constraints
  • Semantic Consensus Decoding (SCD): a training-data-free, inference-time passive defense combining functional requirement extraction with adaptive distribution fusion to suppress trigger-activated outputs
  • Empirical demonstration that SCD reduces average backdoor attack success rate from 89% to under 3% across three representative attacks with negligible code generation quality degradation
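The core decoding idea can be sketched in a few lines. The paper does not publish its exact fusion rule here, so the following is a minimal illustration under assumed choices: KL divergence as the disagreement signal, a threshold `tau`, and a weight `alpha` that shifts mass toward the functional-requirements-only distribution when the two views disagree (all three names are hypothetical parameterizations, not the paper's).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete next-token distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def consensus_decode_step(p_full, p_func, tau=0.5, alpha=0.8):
    """One consensus-decoding step (illustrative, assumed parameterization).

    p_full: next-token distribution conditioned on the full user spec
    p_func: next-token distribution conditioned only on the extracted
            functional requirements
    tau:    divergence threshold (assumed hyperparameter)
    alpha:  weight shifted toward p_func when divergence exceeds tau
    """
    p_full = np.asarray(p_full, dtype=float)
    p_func = np.asarray(p_func, dtype=float)
    d = kl_divergence(p_full, p_func)
    if d > tau:
        # The two views disagree: suspect a trigger hiding in the
        # non-functional part of the spec, so lean on p_func.
        fused = (1 - alpha) * p_full + alpha * p_func
    else:
        # Consensus: average the two views.
        fused = 0.5 * (p_full + p_func)
    return fused / fused.sum(), d

# Toy example: a trigger inflates token 0 in the full-spec view only.
p_full = [0.7, 0.2, 0.1]
p_func = [0.1, 0.8, 0.1]
fused, d = consensus_decode_step(p_full, p_func)
```

In the toy example the divergence exceeds the threshold, so the fused distribution tracks the functional-only view and the suspicious token loses its lead; when the two views agree, fusion is a no-op up to averaging.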

🛡️ Threat Analysis

Model Poisoning

The paper directly defends against backdoor attacks where adversaries inject malicious triggers during LLM training to induce vulnerable Verilog hardware designs — a classic backdoor/trojan threat. SCD is specifically designed to detect and suppress these trigger-activated behaviors at inference time.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted, black_box
Applications
hardware design, verilog code generation