benchmark 2025

Open-weight genome language model safeguards: Assessing robustness via adversarial fine-tuning

James R. M. Black 1, Moritz S. Hanke 1, Aaron Maiwald 2, Tina Hernandez-Boussard 3, Oliver M. Crook 2, Jaspreet Pannu 1

3 citations · 22 references · arXiv

Published on arXiv · 2511.19299

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

Fine-tuning Evo 2 on sequences from 110 human-infecting viruses reduced perplexity on unseen viral sequences and enabled SARS-CoV-2 immune escape variant prediction (AUROC 0.6) despite no SARS-CoV-2 exposure, demonstrating that data exclusion is insufficient for securing open-weight gLMs.

Adversarial fine-tuning

Novel technique introduced


Novel deep learning architectures are increasingly being applied to biological data, including genetic sequences. These models, referred to as genomic language models (gLMs), have demonstrated impressive predictive and generative capabilities, raising concerns that such models may also enable misuse, for instance via the generation of genomes for human-infecting viruses. These concerns have catalyzed calls for risk mitigation measures. The de facto mitigation of choice is filtering of pretraining data (i.e., removing viral genomic sequences from training datasets) in order to limit gLM performance on virus-related tasks. However, it is not currently known how robust this approach is for securing open-source models that can be fine-tuned using sensitive pathogen data. Here, we evaluate a state-of-the-art gLM, Evo 2, and perform fine-tuning using sequences from 110 harmful human-infecting viruses to assess the rescue of misuse-relevant predictive capabilities. The fine-tuned model exhibited reduced perplexity on unseen viral sequences relative to 1) the pretrained model and 2) a version fine-tuned on bacteriophage sequences. The model fine-tuned on human-infecting viruses also identified immune escape variants from SARS-CoV-2 (achieving an AUROC of 0.6), despite having no exposure to SARS-CoV-2 sequences during fine-tuning. This work demonstrates that data exclusion might be circumvented by fine-tuning approaches that can, to some degree, rescue misuse-relevant capabilities of gLMs. We highlight the need for safety frameworks for gLMs and outline further work needed on evaluations and mitigation measures to enable the safe deployment of gLMs.
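The perplexity comparison described in the abstract can be sketched as follows. Perplexity is the exponential of the mean negative log-probability per token; a lower value on held-out viral sequences indicates the model assigns those sequences higher likelihood. The per-nucleotide log-probabilities below are illustrative placeholders, not values from the paper.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-probability per token)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Illustrative (made-up) per-nucleotide log-probabilities for one held-out
# viral sequence under two models. A model that is uniform over {A, C, G, T}
# has perplexity 4; a model that concentrates probability on the correct
# nucleotide has lower perplexity.
pretrained_lp = [math.log(0.25)] * 8   # uniform baseline
finetuned_lp  = [math.log(0.40)] * 8   # sharper distribution

print(perplexity(pretrained_lp))  # ≈ 4.0
print(perplexity(finetuned_lp))   # ≈ 2.5
```

In the paper's setting, the comparison is run three ways: the pretrained Evo 2, the version fine-tuned on human-infecting viruses, and the bacteriophage-fine-tuned control, all scored on the same unseen viral sequences.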


Key Contributions

  • Empirically shows that data exclusion as a safety measure for open-weight gLMs is not robust against adversarial fine-tuning using sensitive pathogen sequences
  • Demonstrates that Evo 2 fine-tuned on 110 human-infecting viruses generalizes to unseen SARS-CoV-2 sequences, achieving AUROC 0.6 for immune escape variant identification with zero SARS-CoV-2 exposure during fine-tuning
  • Argues for more robust safety frameworks for genomic language models beyond data filtering and outlines evaluation and mitigation directions
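The AUROC of 0.6 reported for immune escape variant identification can be read as a pairwise ranking probability: the chance that a randomly chosen true escape variant receives a higher model score than a randomly chosen non-escape variant. A minimal sketch, with made-up scores chosen only to illustrate what an AUROC of 0.6 (a modest but non-trivial signal) looks like:

```python
def auroc(scores_pos, scores_neg):
    """Pairwise AUROC: P(score of a positive > score of a negative),
    counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores (higher = model ranks the variant as more
# likely). These are illustrative, not the paper's data.
escape     = [0.9, 0.7, 0.4, 0.28, 0.2]   # true immune-escape variants
non_escape = [0.8, 0.6, 0.3, 0.25, 0.1]   # non-escape variants

print(auroc(escape, non_escape))  # 0.6
```

An AUROC of 0.5 would mean no ranking signal at all, so 0.6 represents partial rescue of the capability rather than full recovery; the notable point is that any signal survives with zero SARS-CoV-2 exposure during fine-tuning.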

🛡️ Threat Analysis

Transfer Learning Attack

The paper's core contribution is demonstrating that a safety measure applied during pretraining (excluding viral sequences from the training data) does not survive adversarial fine-tuning: the attacker downloads the open-weight model and uses the fine-tuning process to restore capabilities that were deliberately withheld. This is the canonical ML07 (Transfer Learning Attack) threat scenario.


Details

Domains
nlp · generative
Model Types
llm · transformer
Threat Tags
white_box · training_time
Datasets
Evo 2 7B (base model) · sequences from 110 human-infecting viruses · bacteriophage sequences (control) · SARS-CoV-2 sequences (evaluation)
Applications
genomic language models · biosecurity · pathogen capability assessment