defense 2025

Demystifying the Role of Rule-based Detection in AI Systems for Windows Malware Detection

Andrea Ponte 1, Luca Demetrio 1, Luca Oneto 1, Ivan Tesfai Ogbu 2, Battista Biggio 1,3, Fabio Roli 1,3


Published on arXiv (arXiv:2508.09652)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

An AI system trained solely on YARA-undetected samples matches or exceeds models trained on all data at 1% FPR and exhibits greater robustness to adversarial EXEmples, at the cost of a fixed false-positive floor from non-adaptive YARA rules.

YARA-filtered AI System training

Novel technique introduced


Malware detection increasingly relies on AI systems that integrate signature-based detection with machine learning. However, these components are typically developed and combined in isolation, missing opportunities to reduce data complexity and to strengthen defenses against adversarial EXEmples, i.e., carefully crafted programs designed to evade detection. Hence, in this work we investigate the influence that signature-based detection exerts on model training when it is included inside the training pipeline. Specifically, we compare models trained on a comprehensive dataset with an AI system whose machine learning component is trained solely on samples not already flagged by signatures. Our results demonstrate improved robustness to both adversarial EXEmples and temporal data drift, although this comes at the cost of a fixed lower bound on false positives, driven by suboptimal rule selection. We conclude by discussing these limitations and outlining how future research could extend AI-based malware detection to include dynamic analysis, thereby further enhancing system resilience.


Key Contributions

  • First empirical analysis of how YARA signature-based pre-filtering in the training pipeline affects ML model robustness and data complexity for Windows malware detection
  • Demonstrates that training only on YARA-undetected samples improves robustness to adversarial EXEmples and temporal data drift while matching overall detection performance at low FPR (1%)
  • Identifies a key trade-off: YARA-filtered AI systems introduce a fixed false-positive floor due to non-adaptive rule selection, highlighting a limitation for future work
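The filtering step underlying these contributions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the byte-signature substring matching stands in for real YARA rule evaluation (a production system would use `yara-python`'s `yara.compile` and `Rules.match`), and all names and signatures below are hypothetical.

```python
# Stand-in "YARA rules": byte signatures that flag known-malicious patterns.
# (Illustrative only; real YARA rules support strings, regexes, and conditions.)
SIGNATURES = [b"\x4d\x5a\x90\x00EVIL", b"dropper_stub"]

def yara_flags(sample: bytes) -> bool:
    """True if any signature matches (stand-in for YARA rule matching)."""
    return any(sig in sample for sig in SIGNATURES)

def filter_training_set(samples, labels):
    """Keep only samples NOT already caught by signatures.

    The ML component is then trained on this residual, lower-complexity
    subset, which is the pre-filtering idea evaluated in the paper.
    """
    kept = [(s, y) for s, y in zip(samples, labels) if not yara_flags(s)]
    return [s for s, _ in kept], [y for _, y in kept]
```

Any off-the-shelf classifier can then be fit on the filtered subset; the point is only that signature-detected samples never reach the ML training stage.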

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes and evaluates a defense (YARA-filtered training pipeline) against adversarial EXEmples — Windows PE executables carefully manipulated to evade ML-based malware detection at inference time. Adversarial evasion of ML classifiers is the central threat model driving the research.
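At inference time the defended system is a cascade: the signature stage gives a hard verdict first, and only unflagged samples reach the ML classifier. A minimal sketch of that decision logic follows; the substring-based `yara_flags` is a stand-in for real YARA matching, and the score function and threshold are illustrative assumptions, not values from the paper.

```python
# Stand-in signature set (illustrative; real systems compile YARA rules).
SIGNATURES = [b"dropper_stub"]

def yara_flags(sample: bytes) -> bool:
    """True if any signature matches (stand-in for YARA rule matching)."""
    return any(sig in sample for sig in SIGNATURES)

def hybrid_predict(sample: bytes, ml_score, threshold: float = 0.5) -> int:
    """Return 1 for malware, 0 for benign.

    The YARA verdict short-circuits the pipeline; the ML model (trained
    only on YARA-undetected samples) handles everything the rules miss.
    """
    if yara_flags(sample):
        return 1
    return int(ml_score(sample) >= threshold)
```

Note how this structure explains the fixed false-positive floor: any benign file matched by a non-adaptive rule is labeled malware before the ML stage can correct it.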


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
inference_time, digital, black_box
Datasets
Windows PE malware dataset (custom)
Applications
windows malware detection, antivirus software