
A Small Leak Sinks All: Exploring the Transferable Vulnerability of Source Code Models

Weiye Li, Wenyi Tang

0 citations · 42 references

Published on arXiv · 2511.08127

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Adversarial code examples generated from traditional SCMs achieve up to 64% attack success rate against LLM4Code, surpassing state-of-the-art methods by over 15 percentage points.

HABITAT

Novel technique introduced


Source code models (SCMs) learn embeddings from source code and have achieved significant success across software engineering and security tasks. The recent explosive development of LLMs has extended the SCM family with LLMs for code (LLM4Code), which are revolutionizing development workflows. Investigating SCM vulnerabilities is a cornerstone of the security and trustworthiness of AI-powered software ecosystems; however, a fundamental one, transferable vulnerability, remains critically underexplored. Existing studies neither offer practical ways to produce effective adversarial samples for adversarial defense (they require access to the SCM's downstream classifier), nor pay heed to the LLM4Code now widely used in modern software development platforms and cloud-based integrated development environments. This work therefore systematically studies the intrinsic vulnerability transferability of both traditional SCMs and LLM4Code, and proposes a victim-agnostic approach to generating practical adversarial samples. We design HABITAT, consisting of a tailored perturbation-inserting mechanism and a hierarchical reinforcement learning framework that adaptively selects optimal perturbations without requiring any access to the SCM's downstream classifier. Furthermore, an intrinsic transferability analysis of SCM vulnerabilities reveals potential vulnerability correlations between traditional SCMs and LLM4Code, together with the fundamental factors that govern the success rate of victim-agnostic transfer attacks. These findings underscore critical focal points for developing robust defenses in the future. Experimental evaluation demonstrates that adversarial examples crafted on traditional SCMs achieve success rates of up to 64% against LLM4Code, surpassing the state of the art by over 15 percentage points.
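The "perturbation-inserting mechanism" relies on transformations that change a program's surface form without changing its behavior. The paper does not publish its perturbation set here, so the sketch below shows two generic semantics-preserving edits commonly used in code-model attacks: identifier renaming and dead-code insertion. The function names and the example snippet are illustrative, not taken from HABITAT.

```python
import ast

def rename_identifier(code: str, old: str, new: str) -> str:
    """Rename a variable via the AST so the change is semantics-preserving."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id == old:
            node.id = new
    return ast.unparse(tree)  # Python 3.9+

def insert_dead_code(code: str) -> str:
    """Prepend an unreachable statement; program behaviour is unchanged."""
    return "if False:\n    _unused = 0\n" + code

original = "total = 0\nfor x in items:\n    total = total + x\n"
perturbed = insert_dead_code(rename_identifier(original, "total", "acc"))
```

Because both edits preserve semantics, the perturbed snippet compiles and runs identically, yet its token sequence differs enough to shift a code model's prediction.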


Key Contributions

  • HABITAT: a victim-agnostic adversarial attack using a hierarchical RL framework and tailored perturbation-inserting mechanism requiring no access to downstream classifiers
  • First systematic study of transferable vulnerability between traditional SCMs and LLM4Code, identifying five dominant factors governing cross-architecture transfer success
  • Adversarial examples crafted on traditional SCMs achieve up to 64% attack success rate against LLM4Code, surpassing the state of the art by over 15 percentage points

🛡️ Threat Analysis

Input Manipulation Attack

Generates adversarial code perturbations (adversarial examples) that cause misclassification at inference time across traditional SCMs and LLM4Code, using hierarchical RL without requiring gradient access or downstream classifier access — a black-box evasion attack on code models.


Details

Domains
nlp
Model Types
llm, transformer, rl
Threat Tags
black_box, inference_time, targeted
Applications
source code clone detection, vulnerability detection, software engineering tasks, cloud-based IDEs