
Best Practices for Biorisk Evaluations on Open-Weight Bio-Foundation Models

Boyi Wei 1,2, Zora Che 1,3, Nathaniel Li 1, Udari Madhushani Sehwag 1, Jasper Götting 4, Samira Nedungadi 4, Julian Michael 1, Summer Yue 1, Dan Hendrycks 5, Peter Henderson 2, Zifan Wang 1, Seth Donoughe 4, Mantas Mazeika 5

0 citations · 59 references · arXiv


Published on arXiv: 2510.27629

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

Data filtering during pre-training is insufficient: excluded biohazardous knowledge is recoverable via fine-tuning, and dual-use signals already reside in pretrained representations, where they are accessible through simple linear probing.

BioRiskEval

Novel technique introduced


Open-weight bio-foundation models present a dual-use dilemma. While holding great promise for accelerating scientific research and drug development, they could also enable bad actors to develop more deadly bioweapons. To mitigate the risk posed by these models, current approaches focus on filtering biohazardous data during pre-training. However, the effectiveness of such an approach remains unclear, particularly against determined actors who might fine-tune these models for malicious use. To address this gap, we propose BioRiskEval, a framework to evaluate the robustness of procedures that are intended to reduce the dual-use capabilities of bio-foundation models. BioRiskEval assesses models' virus understanding through three lenses: sequence modeling, mutational effects prediction, and virulence prediction. Our results show that current filtering practices may not be particularly effective: excluded knowledge can be rapidly recovered in some cases via fine-tuning, and the recovery generalizes broadly in sequence modeling. Furthermore, dual-use signals may already reside in the pretrained representations, and can be elicited via simple linear probing. These findings highlight the challenges of data filtering as a standalone procedure, underscoring the need for further research into robust safety and security strategies for open-weight bio-foundation models.


Key Contributions

  • BioRiskEval: an evaluation framework assessing robustness of safety filtering in bio-foundation models across three threat lenses — sequence modeling, mutational effects prediction, and virulence prediction
  • Empirical demonstration that fine-tuning rapidly recovers excluded biohazardous knowledge, undermining data filtering as a standalone safety intervention
  • Demonstration that dual-use signals persist in pretrained representations and are easily elicited via linear probing, without any fine-tuning
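The linear-probing contribution above can be illustrated with a minimal sketch. The paper does not publish its probe setup, so everything below is an assumption for illustration: a logistic-regression probe trained on frozen, pooled per-sequence embeddings to predict a binary dual-use label (e.g., virulent vs. non-virulent), with synthetic embeddings standing in for real model activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe_accuracy(embeddings, labels, seed=0):
    """Fit a linear probe on frozen representations; report held-out accuracy.

    High accuracy means the label is linearly decodable from the pretrained
    representations, i.e., the dual-use signal is already present.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, test_size=0.3, random_state=seed, stratify=labels)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Synthetic stand-in for pooled per-sequence embeddings (hypothetical data)
rng = np.random.default_rng(0)
n, d = 400, 64
labels = rng.integers(0, 2, size=n)
# Inject a weak per-dimension linear signal so the probe has something to find
embeddings = rng.normal(size=(n, d)) + labels[:, None] * 0.5
print(f"probe accuracy: {linear_probe_accuracy(embeddings, labels):.2f}")
```

The point of the probe is its simplicity: if a single linear layer suffices to read out the label, no fine-tuning is needed to elicit the capability.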

🛡️ Threat Analysis

Transfer Learning Attack

The central finding is that fine-tuning bio-foundation models recovers excluded biohazardous knowledge that was filtered out during pre-training, directly exploiting the transfer learning process to circumvent safety measures. This is the canonical ML07 threat: fine-tuning that bypasses a model's safety constraints.
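One common way to quantify this kind of recovery in sequence modeling is per-token perplexity on held-out filtered sequences: if perplexity drops sharply after a small amount of fine-tuning, the excluded knowledge is being recovered. The paper's exact metric is not specified here, so this is a generic sketch computing perplexity from raw model logits with NumPy.

```python
import numpy as np

def perplexity(logits, targets):
    """Per-token perplexity from model logits.

    logits: (seq_len, vocab) unnormalized scores; targets: (seq_len,) token ids.
    A drop in perplexity on filtered (e.g., viral) sequences after fine-tuning
    suggests the model is recovering the excluded knowledge.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets].mean()
    return float(np.exp(nll))

# Sanity check: uniform logits over a 4-letter nucleotide vocabulary
logits = np.zeros((10, 4))
targets = np.zeros(10, dtype=int)
print(perplexity(logits, targets))  # ≈ 4.0, the vocabulary size
```

In an evaluation like BioRiskEval, this metric would be tracked on the same held-out set before and after fine-tuning, so that any gap attributable to the data filter can be measured directly.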


Details

Domains
generative
Model Types
transformer
Threat Tags
grey_box, training_time
Datasets
viral sequence databases
Applications
biological sequence modeling, virus sequence analysis, bioweapon risk assessment