Defense · 2025

CircuitGuard: Mitigating LLM Memorization in RTL Code Generation Against IP Leakage

Nowfel Mashnoor, Mohammad Akyash, Hadi Kamali, Kimia Azar

1 citation · 34 references · ICCD


Published on arXiv: 2510.19676

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

CircuitGuard achieves up to 80% reduction in semantic similarity to proprietary RTL patterns while maintaining generation quality, with 78–85% cross-domain transfer without retraining.

CircuitGuard

Novel technique introduced


Large Language Models (LLMs) have achieved remarkable success in generative tasks, including register-transfer level (RTL) hardware synthesis. However, their tendency to memorize training data poses critical risks when proprietary or security-sensitive designs are unintentionally exposed during inference. While prior work has examined memorization in natural language, RTL introduces unique challenges: structurally different implementations (e.g., behavioral vs. gate-level descriptions) can realize the same hardware, leading to intellectual property (IP) leakage (full or partial) even without verbatim overlap. Conversely, even small syntactic variations (e.g., operator precedence or blocking vs. non-blocking assignments) can drastically alter circuit behavior, making correctness preservation especially challenging. In this work, we systematically study memorization in RTL code generation and propose CircuitGuard, a defense strategy that balances leakage reduction with correctness preservation. CircuitGuard (1) introduces a novel RTL-aware similarity metric that captures both structural and functional equivalence beyond surface-level overlap, and (2) develops an activation-level steering method that identifies and attenuates the transformer components most responsible for memorization. Our empirical evaluation demonstrates that CircuitGuard identifies (and isolates) 275 memorization-critical features across layers 18–28 of the Llama 3.1-8B model, achieving up to 80% reduction in semantic similarity to proprietary patterns while maintaining generation quality. CircuitGuard further shows 78–85% cross-domain transfer effectiveness, enabling robust memorization mitigation across circuit categories without retraining.
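
To see why surface-level overlap is a poor leakage signal for RTL (and why an RTL-aware metric is needed), the sketch below computes a naive token-level Jaccard similarity between two functionally equivalent 2-to-1 mux descriptions, one behavioral and one gate-level. This is an illustrative example, not the paper's metric; the modules and tokenizer are simplified placeholders.

```python
# Minimal sketch (not the paper's metric): token-level Jaccard similarity on two
# functionally equivalent Verilog 2-to-1 muxes, one behavioral and one gate-level.
# Illustrates why surface-level lexical overlap understates IP leakage in RTL.
import re

behavioral = """
module mux2 (input a, input b, input sel, output reg y);
  always @(*) begin
    if (sel) y = b; else y = a;
  end
endmodule
"""

gate_level = """
module mux2 (input a, input b, input sel, output y);
  wire nsel, t0, t1;
  not (nsel, sel);
  and (t0, a, nsel);
  and (t1, b, sel);
  or  (y, t0, t1);
endmodule
"""

def tokens(src: str) -> set:
    """Crude Verilog tokenizer: identifiers, keywords, and single-char operators."""
    return set(re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", src))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

# Both modules implement the same hardware, yet lexical overlap is low, so a
# surface-level metric would not flag the (functional) leakage.
print(f"token Jaccard similarity: {jaccard(tokens(behavioral), tokens(gate_level)):.2f}")
```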


Key Contributions

  • RTL-aware similarity metric that captures both structural and functional equivalence of hardware designs, going beyond surface-level lexical overlap
  • Activation-level steering method that identifies and attenuates 275 memorization-critical transformer features (layers 18–28 of Llama 3.1-8B) to suppress IP leakage (see the sketch after this list)
  • Cross-domain transfer effectiveness of 78–85%, enabling memorization mitigation across circuit categories without retraining
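
As a rough illustration of what activation-level steering looks like in practice (a hedged sketch, not CircuitGuard's actual procedure), the snippet below attenuates a placeholder set of hidden dimensions in layers 18–28 of a causal LM using PyTorch forward hooks. The feature indices, scale factor, and prompt are hypothetical, and the model name assumes local access to the Llama 3.1-8B weights.

```python
# Hedged sketch of activation-level steering (illustrative, not the paper's method):
# scale down a hypothetical set of memorization-linked hidden dimensions in
# decoder layers 18-28 of a causal LM during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"          # assumes local access to the weights
MEMORIZATION_DIMS = [17, 203, 998, 2047]   # hypothetical feature indices
LAYERS = range(18, 29)                     # layer span reported in the paper
SCALE = 0.2                                # attenuation factor (illustrative)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def attenuate(module, inputs, output):
    """Forward hook: scale down selected hidden dimensions of a decoder layer."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[..., MEMORIZATION_DIMS] *= SCALE
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handles = [model.model.layers[i].register_forward_hook(attenuate) for i in LAYERS]

prompt = "// Verilog: 4-bit synchronous counter with enable\nmodule"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))

for h in handles:   # remove hooks to restore the unmodified model
    h.remove()
```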

🛡️ Threat Analysis

Model Inversion Attack

The core threat is an LLM reproducing memorized proprietary RTL training data (hardware IP) at inference time — this is training data extraction/reconstruction from a model. CircuitGuard defends against this by attenuating transformer components responsible for memorization, directly targeting the data-reconstruction threat model.
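
A minimal probe of this threat model (illustrative only, not from the paper) could prompt the model with the header of a protected module and measure how much of the proprietary source the completion reproduces verbatim. The `generate` callable and helper below are hypothetical placeholders for any LLM completion interface.

```python
# Hedged sketch of a memorization/extraction probe (not from the paper):
# complete a proprietary module header and measure the longest verbatim span
# shared between the completion and the protected source.
from difflib import SequenceMatcher

def longest_verbatim_span(generated: str, proprietary: str) -> int:
    """Length (in characters) of the longest common substring."""
    m = SequenceMatcher(None, generated, proprietary)
    return m.find_longest_match(0, len(generated), 0, len(proprietary)).size

def probe(generate, proprietary_src: str, header_lines: int = 3) -> int:
    """'generate' is a placeholder for any LLM completion call (prompt -> text)."""
    header = "\n".join(proprietary_src.splitlines()[:header_lines])
    completion = generate(header)
    return longest_verbatim_span(completion, proprietary_src)
```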


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, inference_time, black_box
Applications
rtl hardware code generation, hardware ip protection, hardware design synthesis