Pre-trained Language Model for Biomedical Question Answering

09/18/2019
by   Wonjin Yoon, et al.

The recent success of question answering systems is largely attributed to pre-trained language models. However, as language models are mostly pre-trained on general-domain corpora such as Wikipedia, they often have difficulty understanding biomedical questions. In this paper, we investigate the performance of BioBERT, a pre-trained biomedical language model, in answering biomedical questions of the factoid, list, and yes/no types. Our BioBERT-based system uses almost the same architecture across question types and achieved the best performance in the 7th BioASQ Challenge (Task 7b, Phase B). BioBERT pre-trained on SQuAD or SQuAD 2.0 easily outperformed previous state-of-the-art models, and it obtains the best results when appropriate pre-/post-processing strategies are applied to questions, passages, and answers.
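The abstract highlights that answer post-processing matters for extractive (factoid/list) questions. In BERT-style extractive QA, one standard post-processing step is selecting the best answer span from the model's start and end logits, constrained so the span is well-formed and not overly long. A minimal sketch of that span-selection step (the logit values below are hypothetical toy inputs, not outputs of any actual model):

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=10):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    subject to start <= end and a maximum answer length."""
    best_score, best = -np.inf, (0, 0)
    for s, s_logit in enumerate(start_logits):
        # Only consider end positions at or after the start, within the length cap.
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

# Toy logits for a 6-token passage (hypothetical values).
start = np.array([0.1, 2.0, 0.3, 0.2, 0.1, 0.0])
end   = np.array([0.0, 0.1, 1.5, 0.4, 0.2, 0.1])
print(best_span(start, end))  # → (1, 2)
```

Note that independently taking the argmax of each logit vector can yield an invalid span (end before start); scoring pairs jointly, as above, avoids that.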

Related research

- BioMegatron: Larger Biomedical Domain Language Model (10/12/2020)
  There has been an influx of biomedical domain-specific language models, ...

- Sequence Tagging for Biomedical Extractive Question Answering (04/15/2021)
  Current studies in extractive question answering (EQA) have modeled sing...

- An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering (12/04/2019)
  To produce a domain-agnostic question answering model for the Machine Re...

- Transformer-based Language Models for Factoid Question Answering at BioASQ9b (09/15/2021)
  In this work, we describe our experiments and participating systems in t...

- Mixture of Experts for Biomedical Question Answering (04/15/2022)
  Biomedical Question Answering (BQA) has attracted increasing attention i...

- Leveraging pre-trained language models for conversational information seeking from text (03/31/2022)
  Recent advances in Natural Language Processing, and in particular on the...

- Data Augmentation for Biomedical Factoid Question Answering (04/10/2022)
  We study the effect of seven data augmentation (da) methods in factoid q...
