Evasive Ransomware Attacks Using Low-level Behavioral Adversarial Examples

Manabu Hirano 1, Ryotaro Kobayashi 2

0 citations · CSR

Published on arXiv · 2508.08656

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Attackers can manipulate ransomware behavioral features (I/O patterns, memory access) via source-code parameter tuning to decrease the detection rate of deep learning-based behavioral ransomware detectors.

Low-level Behavioral Adversarial Examples

Novel technique introduced


Protecting state-of-the-art AI-based cybersecurity defense systems from cyber attacks is crucial. Attackers craft adversarial examples by adding small changes (i.e., perturbations) to attack features in order to evade or fool a deep learning model. This paper introduces the concept of low-level behavioral adversarial examples and a corresponding threat model for evasive ransomware. We formulate, under this threat model, a method for generating the optimal source code of evasive malware. We then examine the method using the leaked source code of Conti ransomware augmented with a micro-behavior control function, our test component for simulating source-code changes: the ransomware's behavior can be altered at boot time by specifying the number of threads, the file encryption ratio, and the delay after each file encryption. We evaluated how much an attacker can control the behavioral features of ransomware via the micro-behavior control function to decrease the detection rate of a ransomware detector.
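The three boot-time parameters of the micro-behavior control function can be sketched as a small configuration object plus a grid of candidate settings. This is a minimal illustration only; the class and parameter names below are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class MicroBehaviorParams:
    """Hypothetical boot-time parameters mirroring the paper's
    micro-behavior control function (names are illustrative)."""
    num_threads: int      # number of encryption worker threads
    encrypt_ratio: float  # fraction of each file that is encrypted, in (0, 1]
    delay_ms: int         # delay after each file encryption, in milliseconds

def parameter_grid(threads, ratios, delays):
    """Enumerate every candidate behavior configuration an attacker
    could compile into the ransomware."""
    return [MicroBehaviorParams(t, r, d)
            for t, r, d in product(threads, ratios, delays)]

grid = parameter_grid([1, 4, 16], [0.2, 0.5, 1.0], [0, 100, 1000])
# 3 x 3 x 3 = 27 candidate configurations
```

Each configuration changes the observable I/O and memory-access patterns without changing the ransomware's end effect, which is what makes the perturbation "physically realizable."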


Key Contributions

  • Introduces the concept of 'low-level behavioral adversarial examples' that constrain adversarial perturbations to physically realizable changes in ransomware source code (thread count, file encryption ratio, encryption delay)
  • Formalizes a threat model for generating optimal evasive malware source code against behavioral-based ML detectors
  • Demonstrates the attack using leaked Conti ransomware source code, showing attackers can meaningfully reduce detection rates by tuning micro-behavior control parameters

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes 'low-level behavioral adversarial examples' — perturbations applied at the ransomware source-code level (thread count, encryption ratio, delay) that manipulate behavioral feature vectors at inference time to cause an ML-based ransomware detector to misclassify malware as benign. This is an evasion attack directly targeting a deep learning model's input features.
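The evasion loop above can be sketched as a black-box search over the source-code parameters for the setting that minimizes the detector's malicious-probability output. The `detector_score` heuristic below is a stand-in assumption; in the paper the detector is a deep learning model over behavioral features, which the attacker would query by running the tuned sample.

```python
from itertools import product

def detector_score(num_threads, encrypt_ratio, delay_ms):
    """Stub for a black-box detector's malicious-probability output.
    Purely illustrative heuristic: fast, full-file, no-delay
    encryption looks most ransomware-like."""
    speed = num_threads / 16            # more threads -> faster I/O burst
    stealth = 1.0 / (1.0 + delay_ms / 100)  # longer delays look more benign
    return min(1.0, (speed + encrypt_ratio + stealth) / 3)

def best_evasive_config(threads, ratios, delays):
    """Exhaustive black-box search for the parameter combination
    that minimizes the detection score."""
    return min(product(threads, ratios, delays),
               key=lambda cfg: detector_score(*cfg))

cfg = best_evasive_config([1, 4, 16], [0.2, 0.5, 1.0], [0, 100, 1000])
# -> (1, 0.2, 1000): one thread, 20% encryption, 1 s delay
```

Exhaustive search works here because the parameter space is tiny; the paper's formulation treats this as an optimization problem over the source-code parameters rather than over raw feature vectors.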


Details

Domains
tabular
Model Types
cnn
Threat Tags
black_box · inference_time · targeted · digital
Datasets
Conti ransomware (leaked source code)
Applications
malware detection · ransomware detection