BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine

08/18/2023
by Yizhen Luo, et al.

Foundation models (FMs) have exhibited remarkable performance across a wide range of downstream tasks in many domains. Nevertheless, general-purpose FMs often face challenges with domain-specific problems due to their limited access to proprietary training data in a particular domain. In biomedicine, there are various biological modalities, such as molecules, proteins, and cells, which are encoded by the language of life and exhibit significant modality gaps with human natural language. In this paper, we introduce BioMedGPT, an open multimodal generative pre-trained transformer (GPT) for biomedicine, to bridge the gap between the language of life and human natural language. BioMedGPT is the first of its kind to allow users to easily “communicate” with diverse biological modalities through free text. BioMedGPT aligns different biological modalities with natural language via a large generative language model, namely, BioMedGPT-LM. We publish BioMedGPT-10B, which unifies the feature spaces of molecules, proteins, and natural language via encoding and alignment. Through fine-tuning, BioMedGPT-10B outperforms or is on par with human experts and significantly larger general-purpose foundation models on the biomedical QA task. It also demonstrates promising performance on the molecule QA and protein QA tasks, which could greatly accelerate the discovery of new drugs and therapeutic targets. In addition, BioMedGPT-LM-7B is the first large generative language model in the biomedical domain based on Llama2 and is therefore commercially friendly. Both BioMedGPT-10B and BioMedGPT-LM-7B are open-sourced to the research community. We also publish the datasets meticulously curated for the alignment of multi-modalities, i.e., PubChemQA and UniProtQA. All the models, code, and datasets are available at <https://github.com/PharMolix/OpenBioMed>.
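The abstract describes an encode-and-align design: modality-specific encoders produce molecule and protein features, which are projected into the token-embedding space of the generative LM so that biological inputs and free-text prompts share one input sequence. The sketch below illustrates that idea only; the adapter architecture, feature dimensions, and token counts are illustrative assumptions, not BioMedGPT-10B's actual configuration.

```python
# Minimal sketch of unifying molecule, protein, and text feature spaces
# via learned projections into an LM's embedding space. Random tensors
# stand in for real encoder outputs; all dimensions are hypothetical.
import torch
import torch.nn as nn

LM_DIM = 4096  # hidden size of a 7B-class LM (illustrative)

class ModalityAdapter(nn.Module):
    """Projects a modality encoder's features into the LM's
    token-embedding space, so molecules/proteins and natural-language
    tokens can be concatenated into one sequence."""
    def __init__(self, enc_dim: int, lm_dim: int = LM_DIM):
        super().__init__()
        self.proj = nn.Linear(enc_dim, lm_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modal_tokens, enc_dim)
        return self.proj(feats)

def build_lm_inputs(mol_feats, prot_feats, text_embeds,
                    mol_adapter, prot_adapter):
    """Concatenate aligned molecule and protein tokens with the embedded
    text prompt; the result would be fed to the generative LM as its
    input embeddings."""
    return torch.cat([mol_adapter(mol_feats),
                      prot_adapter(prot_feats),
                      text_embeds], dim=1)

# Toy usage with random features standing in for real encoder outputs.
mol_adapter = ModalityAdapter(enc_dim=300)    # e.g., a graph encoder's width
prot_adapter = ModalityAdapter(enc_dim=1280)  # e.g., a protein LM's width
mol_feats = torch.randn(1, 8, 300)
prot_feats = torch.randn(1, 16, 1280)
text_embeds = torch.randn(1, 32, LM_DIM)      # embedded question tokens
lm_inputs = build_lm_inputs(mol_feats, prot_feats, text_embeds,
                            mol_adapter, prot_adapter)
print(lm_inputs.shape)  # torch.Size([1, 56, 4096])
```

With this layout, a downstream QA task (molecule QA, protein QA) reduces to ordinary conditional text generation: the LM attends over the projected biological tokens exactly as it does over text tokens.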
