Pre-trained Language Model for Biomedical Question Answering

09/18/2019
by   Wonjin Yoon, et al.
Korea University

The recent success of question answering systems is largely attributed to pre-trained language models. However, as language models are mostly pre-trained on general domain corpora such as Wikipedia, they often have difficulty in understanding biomedical questions. In this paper, we investigate the performance of BioBERT, a pre-trained biomedical language model, in answering biomedical questions including factoid, list, and yes/no type questions. BioBERT uses almost the same structure across various question types and achieved the best performance in the 7th BioASQ Challenge (Task 7b, Phase B). BioBERT pre-trained on SQuAD or SQuAD 2.0 easily outperformed previous state-of-the-art models. BioBERT obtains the best performance when it uses the appropriate pre-/post-processing strategies for questions, passages, and answers.


1 Introduction

Language models pre-trained on large-scale text corpora achieve state-of-the-art performance in various natural language processing (NLP) tasks when fine-tuned on a given task [13, 15, 4]. Language models have been shown to be highly effective in question answering (QA), and many current state-of-the-art QA models rely on pre-trained language models [20]. However, as language models are mostly pre-trained on general domain corpora, they do not generalize well to biomedical corpora [29, 8, 2, 1]. Hence, similar to using Word2Vec for the biomedical domain [14], a language model pre-trained on biomedical corpora is needed for building effective biomedical QA models.

Recently, Lee et al. [8] proposed BioBERT, a pre-trained language model trained on PubMed articles. In three representative biomedical NLP (bioNLP) tasks, namely biomedical named entity recognition, relation extraction, and question answering, BioBERT outperforms most of the previous state-of-the-art models. In previous works, models were built for a single, specific bioNLP task [28, 9, 24, 18]. However, the structure of BioBERT allows a single model to be trained on different datasets and used for various tasks with only slight modifications in the last layer.

In this paper, we investigate the effectiveness of BioBERT in biomedical question answering and report our results from the 7th BioASQ Challenge [21, 7, 10, 11]. Biomedical question answering has its own unique challenges. First, the datasets are often very small (e.g., a few thousand samples in BioASQ), as creating biomedical question answering datasets is very expensive. Second, there are various types of questions, including factoid, list, and yes/no questions, which increases the complexity of the task.

We leverage BioBERT to address these issues. To mitigate the small size of datasets, we first fine-tune BioBERT on other large-scale extractive question answering datasets, and then fine-tune it on BioASQ datasets. More specifically, we train BioBERT on SQuAD [17] and SQuAD 2.0 [16] for transfer learning. Also, we modify the last layer of BioBERT so that it can be trained/tested on three different types of BioASQ questions. This significantly reduces the cost of using biomedical question answering systems as the structure of BioBERT does not need to be modified based on the type of question.

The contributions of our paper are threefold: 1) We show that BioBERT pre-trained on general domain question answering corpora such as SQuAD largely improves the performance of biomedical question answering models. Wiese et al. [25] previously showed that pre-training on SQuAD helps improve performance; we test the performance of BioBERT pre-trained on both SQuAD and SQuAD 2.0. 2) With only simple modifications, BioBERT can be used for various biomedical question types including factoid, list, and yes/no questions. BioBERT achieves the overall best performance on all five test batches of BioASQ 7b Phase B (http://participants-area.bioasq.org/results/7b/phaseB/), and achieves state-of-the-art performance on BioASQ 6b Phase B. 3) We further analyze the role of pre- and post-processing in our system and show that different strategies often lead to different results.

The rest of our paper is organized as follows. First, we introduce our system based on BioBERT, describing its task-specific layers and the various pre- and post-processing strategies. We then present the results of BioBERT on BioASQ 7b (Phase B), which were obtained using two different transfer learning strategies, and further evaluate BioBERT on BioASQ 6b, which we used as a validation set.

2 Methods

In this section, we briefly discuss BioBERT [8] (source code available at https://github.com/dmis-lab/biobert) and our modifications (source code and pre-processed datasets available at https://github.com/dmis-lab/bioasq-biobert) for the BioASQ Challenge (Figure 1).

Figure 1: Overview of our system.

2.1 BioBERT

Word embeddings are crucial for various text mining systems since they represent semantic and syntactic features of words [22, 14]. While traditional models use context-independent word embeddings, recently proposed models use contextualized word representations [13, 15, 4]. Among them, BERT [4], which is built upon multi-layer bidirectional Transformers [23], achieved new state-of-the-art results on various NLP tasks including question answering. BioBERT [8] is the first domain-specific BERT based model pre-trained on PubMed abstracts and full texts. BioBERT outperforms BERT and other state-of-the-art models in bioNLP tasks such as biomedical named entity recognition, relation extraction, and question answering [6, 19].

An input representation of BioBERT for a given token is composed of the corresponding token, segment, and position embeddings. BioBERT utilizes WordPiece embeddings [26], which use sub-word units to address the out-of-vocabulary (OOV) problem. Broken sub-word units are denoted by ## (e.g., organoid = organ + ##oid). Position embeddings are learned during training, and segment embeddings are used to mark the locations of question and passage tokens in the input sequence. Following the design of BERT, a special [CLS] token embedding is added to the beginning of every sequence and is used to process yes/no type questions.
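To make this input representation concrete, the following is a minimal sketch (our own illustration, not the authors' code; the token strings, `max_len` value, and padding handling are illustrative assumptions) of how a question-passage pair is packed into WordPiece tokens, segment ids, and position ids:

```python
# Minimal sketch of BERT/BioBERT-style input features for a question-passage pair:
# WordPiece tokens, segment ids marking question vs. passage, and position ids.

def build_input(question_tokens, passage_tokens, max_len=384):
    # [CLS] question [SEP] passage [SEP], as in BERT-style QA fine-tuning
    tokens = ["[CLS]"] + question_tokens + ["[SEP]"] + passage_tokens + ["[SEP]"]
    # Segment embeddings: 0 for the question part (incl. [CLS] and first [SEP]),
    # 1 for the passage part.
    segment_ids = [0] * (len(question_tokens) + 2) + [1] * (len(passage_tokens) + 1)
    # Position embeddings are learned per index 0..len-1.
    position_ids = list(range(len(tokens)))
    # Pad to a fixed length so sequences can be batched; padded positions are
    # excluded from attention via an attention mask (omitted here).
    pad = max_len - len(tokens)
    tokens += ["[PAD]"] * pad
    segment_ids += [0] * pad
    position_ids += [0] * pad
    return tokens, segment_ids, position_ids

# Example with WordPiece sub-words marked by "##" (e.g., organoid -> organ ##oid):
q = ["what", "causes", "puff", "##y", "hand", "syndrome", "?"]
p = ["puff", "##y", "hand", "syndrome", "is", "a", "complication", "of",
     "intravenous", "drug", "abuse", "."]
tokens, seg, pos = build_input(q, p, max_len=32)
```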

Figure 2: Example of a single sequence (question-passage pair) processed by BioBERT.

2.2 Task-specific layer

The BioBERT model for QA is illustrated in Figure 2. Following the approach of BioBERT [8], a question and its corresponding passage are concatenated to form a single sequence which is marked by different segment embeddings. The task-specific layer for factoid type questions and the layer for list type questions both utilize the output of the passage, whereas the layer for yes/no type questions uses the output of the first [CLS] token.

2.2.1 Factoid and List Questions

In (Bio)BERT, the only additional trainable parameters needed for factoid and list type questions are those of the softmax layer for a linear transformation of the hidden vectors from BioBERT. Following the notation used in the BERT study, we denote the trainable start vector as $S \in \mathbb{R}^{H}$ and the trainable end vector as $E \in \mathbb{R}^{H}$, where $H$ denotes the hidden size of BioBERT. The probabilities of the $i$-th token being the start of the answer span and the $j$-th token being the end of the answer span are calculated by the following equations:

$$P_i^{start} = \frac{e^{S \cdot T_i}}{\sum_{k} e^{S \cdot T_k}}, \qquad P_j^{end} = \frac{e^{E \cdot T_j}}{\sum_{k} e^{E \cdot T_k}}$$

where $T_i$ denotes the $i$-th token representation from BioBERT and $\cdot$ denotes the dot product between two vectors.
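As a concrete illustration of the factoid/list layer, the sketch below (our own toy example using random tensors and numpy; not the authors' implementation) computes the start and end probabilities as softmaxes over the dot products between the trainable vectors and the token representations:

```python
# Start/end probabilities over tokens, as described above.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

H, seq_len = 768, 32              # hidden size of BioBERT, sequence length
T = np.random.randn(seq_len, H)   # token representations from BioBERT (toy values)
S = np.random.randn(H)            # trainable start vector
E = np.random.randn(H)            # trainable end vector

p_start = softmax(T @ S)          # P(i-th token is the answer start)
p_end = softmax(T @ E)            # P(j-th token is the answer end)

# At prediction time, standard BERT QA decoding picks the span (i, j), i <= j,
# that maximizes p_start[i] * p_end[j] within a maximum answer length.
```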

2.2.2 Yes/no Questions

We use the first [CLS] token for the classification of yes/no questions. Here, we denote the representation of the [CLS] token from BioBERT as $C \in \mathbb{R}^{H}$. The parameter learned during training is a sigmoid layer consisting of a weight vector $w \in \mathbb{R}^{H}$, which is used for binary classification. The probability of the answer being "yes" is calculated using the following equation:

$$P_{yes} = \sigma(C \cdot w)$$

2.2.3 Loss

For the factoid/list question layer, we minimize the loss $L$ during training, which is defined below. $L$ is the arithmetic mean of $L_{start}$ and $L_{end}$, which correspond to the negative log-likelihood of the correct start and end positions, respectively. The ground truth start and end positions of the $i$-th example are denoted as $y_i^{start}$ and $y_i^{end}$. The losses are defined as follows:

$$L_{start} = -\frac{1}{N}\sum_{i=1}^{N} \log P^{start}_{y_i^{start}}, \qquad L_{end} = -\frac{1}{N}\sum_{i=1}^{N} \log P^{end}_{y_i^{end}}, \qquad L = \frac{L_{start} + L_{end}}{2}$$

where $i$ iterates over a mini-batch of size $N$.
For yes/no questions, the binary cross entropy between the probability $P_{yes}$ and the corresponding ground truth label was used as the training loss.
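A minimal sketch of these losses (toy numpy code assuming the probability matrices defined above have already been computed; not the actual training code):

```python
# Span loss (mean NLL of gold start/end positions) and yes/no binary cross entropy.
import numpy as np

def span_loss(p_start, p_end, gold_start, gold_end):
    # p_start, p_end: (batch, seq_len) probability matrices from the softmax layer
    # gold_start, gold_end: (batch,) integer gold positions
    n = p_start.shape[0]
    l_start = -np.log(p_start[np.arange(n), gold_start]).mean()
    l_end = -np.log(p_end[np.arange(n), gold_end]).mean()
    return (l_start + l_end) / 2.0

def yesno_loss(p_yes, y):
    # p_yes: (batch,) sigmoid outputs, y: (batch,) ground-truth 0/1 labels
    return -(y * np.log(p_yes) + (1 - y) * np.log(1 - p_yes)).mean()
```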

2.3 Pre-processing

To cast the BioASQ 7b Phase B task as extractive question answering, the challenge datasets containing factoid and list type questions were converted into the format of the SQuAD datasets [17, 16]. For yes/no type questions, we used 0/1 labels for each question-passage pair.

The dataset in the SQuAD format consists of passages and their respective question-answer sets. A passage is an article which contains answers or clues for answers and is denoted as the context in the dataset. The length of a passage varies from a sentence to a paragraph. An exact answer may or may not exist in the passage, depending on the task. According to the rules of the BioASQ Challenge, all the factoid and list type questions should be answerable with the given passages [21]. An exact answer and its starting position are provided in the answers field. We used various sources including snippets and PubMed abstracts, as passages. Multiple passages attached to a single question were divided to form question-passage pairs, which increased the number of question-passage pairs. The predicted answers of the question-passage pairs which share the same question are later combined in the post-processing layer.
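The sketch below illustrates this conversion for factoid and list type questions. The "context", "question", "answers", and "answer_start" fields follow the SQuAD JSON format; the function name, argument names, and the simplified BioASQ fields are our own assumptions, not the authors' pre-processing code:

```python
# Turn one BioASQ factoid/list question with several snippets into
# SQuAD-style question-passage pairs (one pair per snippet per matched answer).

def bioasq_to_squad(question_id, question, snippets, exact_answers):
    pairs = []
    for snippet in snippets:                 # each snippet becomes its own passage
        for answer in exact_answers:
            start = snippet.find(answer)     # positional offset of an exact match
            if start == -1:
                continue                     # this answer does not appear in this snippet
            pairs.append({
                "context": snippet,
                "qas": [{
                    "id": f"{question_id}_{len(pairs)}",
                    "question": question,
                    "answers": [{"text": answer, "answer_start": start}],
                }],
            })
    return pairs
```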

Yes/no type questions are in the same format as the questions in the SQuAD dataset. However, binary answers are given to yes/no type questions, rather than answers selected based on their location in passages. Instead of providing an exact answer and its starting position in the answers field, we marked yes/no type questions using the strings "yes" or "no" and the Boolean values "false" and "true" in the is_impossible field. Since the distribution of yes/no answers in the training set is usually skewed, we undersampled the training data to balance the number of "yes" and "no" answers.

We used the following strategies for developing the datasets: the Snippet as-is Strategy, the Full Abstract Strategy, and the Appended Snippet Strategy.
•  Snippet as-is Strategy: Using snippets in their original form is the basic method for filling passages. The starting positions of exact answers indicate the positional offsets of exact matching words. If a single snippet has more than one exact matching answer word, we form multiple question-passage pairs for the snippet.
•  Full Abstract Strategy: In the Full Abstract Strategy, we use an entire abstract, including the title of an article, as a passage. Full abstracts are retrieved from PubMed using the provided PMIDs. The snippets field of the original dataset is used to find the location of the correct answer. First, we look for the given snippet (e.g., a sentence in a typical case) in the retrieved abstract. Then, we search for the offset of the first exact matching words in the snippet, and add it to the offset of the snippet in the paragraph (see the sketch after this list). In this way, we can find a plausible location of the answer within the paragraph.
•  Appended Snippet Strategy: The Appended Snippet Strategy is a compromise between using snippets as-is and full abstracts. We first search for the given snippet in an abstract and concatenate $m$ sentences before and after the given snippet, forming $2m + k$ sentences into a passage ($k$ denotes the number of sentences in a snippet, which is usually 1).
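As referenced above, here is a minimal sketch of the offset arithmetic used by the Full Abstract Strategy (the function and variable names are ours, and it assumes the snippet appears verbatim in the retrieved abstract):

```python
# Approximate the answer's start position inside the full abstract as:
# offset of the snippet within the abstract + offset of the first exact match
# of the answer within that snippet.

def locate_answer_in_abstract(abstract, snippet, answer):
    snippet_offset = abstract.find(snippet)   # where the snippet starts in the abstract
    if snippet_offset == -1:
        return None                           # snippet not found verbatim
    answer_offset = snippet.find(answer)      # first exact match inside the snippet
    if answer_offset == -1:
        return None                           # answer not present in the snippet
    return snippet_offset + answer_offset     # plausible answer_start in the abstract
```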

2.4 Post-processing

Since our pre-processing step divides the multiple passages attached to a single question into multiple question-passage pairs, a single question can have multiple predicted answers. The probabilities of predicted answers for question-passage pairs sharing the same question were merged to form a single list of predicted answers and their probabilities for the question. The answer candidate with the highest probability is considered the final answer for a given factoid type question. For list type questions, probability thresholding was the default method for providing answers: answer candidates with a probability higher than the threshold were included in the answer list. However, a considerable number (28.6% of BioASQ 6b list type questions) of list type questions specify the number of required answers. For example, from the training question "Please list 6 symptoms of Scarlet fever," we can extract the number 6. We extracted the number provided in the question and used it to limit the length of the answer list for that question. For questions that specify the number of answers, exactly that many answers were returned.

For factoid and list type questions, we also filtered incomplete answers. Answers with unpaired parentheses were removed from the list of possible answers, and pairs of round brackets and commas at the beginning and end of answers were removed.
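Putting these post-processing steps together, the sketch below is our own simplification of the pipeline for list type questions: the merging rule (keeping the best probability per answer string) and the regular expression are illustrative assumptions, while the default threshold of 0.42 is the value reported in Section 3.2.

```python
# Merge per-pair predictions sharing a question, apply the answer-count or
# probability-threshold rule, and filter malformed answers.
import re

def postprocess_list_answers(predictions, question, threshold=0.42):
    # predictions: list of (answer_text, probability) over all passages of one question
    merged = {}
    for text, prob in predictions:
        merged[text] = max(merged.get(text, 0.0), prob)    # one simple merging choice
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

    ranked = [(a, p) for a, p in ranked
              if a.count("(") == a.count(")")]              # drop answers with unpaired parentheses
    ranked = [(a.strip("(), "), p) for a, p in ranked]      # strip surrounding brackets/commas

    m = re.search(r"list (\d+)", question.lower())          # e.g., "Please list 6 symptoms ..."
    if m:                                                   # question states how many answers
        return [a for a, _ in ranked[: int(m.group(1))]]
    return [a for a, p in ranked if p >= threshold]         # otherwise use the threshold
```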

3 Experimental Setup

3.1 Dataset

For factoid and list type questions, exact answers are included in the given snippets, which is consistent with the extractive QA setting of the SQuAD [17] dataset. Only binary answers are provided for yes/no questions. For each question, regardless of the question type, multiple snippets or documents are provided as corresponding passages.

The statistics of the BioASQ datasets are listed in Table 1. A list type question can have one or more answers, and question-context pairs are made for every answer of a list type question. In our pre-processing step, 3,722 question-context pairs were made from the 779 factoid questions in the BioASQ 7b training set. For yes/no questions, we undersampled the training data to balance the number of "yes" and "no" answers.

About 28.2% of factoid type questions and 5.6% of list type questions in the BioASQ 7b training set do not have an answer in their corresponding snippets. We excluded unanswerable questions, following the approach of Wiese et al.[24].

Question Type | BioASQ Version | # of Questions in original datasets | # of Pre-processed question-passage pairs
Factoid | 6b | 618 | 3,121
Factoid | 7b | 779 | 3,722
List | 6b | 485 | 6,896
List | 7b | 556 | 7,716
Yes/No | 6b | 612 | 5,921
Yes/No | 7b | 745 | 6,676
Table 1: Statistics of the BioASQ training sets.

3.2 Training

Our system is composed of BioBERT, task-specific layers, and a post-processing layer. The parameters of BioBERT and a task-specific layer are trainable. Our training procedure starts with pre-training the system on the SQuAD dataset. The trainable parameters for factoid and list type questions were pre-trained on the SQuAD 1.1 dataset, and the parameters for yes/no type questions were pre-trained on the SQuAD 2.0 dataset. The pre-trained system is then fine-tuned on each task.

We tuned the hyperparameters on the BioASQ 4/5/6b training and test sets. We used a probability threshold of 0.42 as one of the hyperparameters for list type questions. The probability threshold was decided using the tuning procedure.

4 Results & Discussion

In this section, we first report our results in the BioASQ 7b (Phase B) Challenge, which are shown in Table 2. Please note that the results and ranks were obtained from the leaderboard of BioASQ 7b [3]. Then we evaluate our system and other competing systems on the validation set (BioASQ 6b); the results are presented in Table 3. Finally, we investigate the performance gains due to the sub-structures of the system (Table 5 and Table 6). Mean reciprocal rank (MRR) and mean average F-measure were used as the official evaluation metrics for factoid and list type questions from BioASQ, respectively. We report strict accuracy (SAcc), lenient accuracy (LAcc), and MRR for factoid questions, and mean average precision, mean average recall, and mean average F1 score for list questions (for more details, please visit http://participants-area.bioasq.org/Tasks/b/eval_meas_2018/). Since the label distribution was skewed, the macro average F1 score was used as the evaluation metric for yes/no questions.
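For reference, the following is a simplified sketch of mean reciprocal rank (our own helper, not the official BioASQ evaluation script, which may differ in details such as answer normalization and the number of answers considered):

```python
# Mean reciprocal rank: the reciprocal of the rank of the first correct answer,
# averaged over all questions.

def mean_reciprocal_rank(ranked_predictions, gold_answers):
    # ranked_predictions: list (one per question) of ranked answer lists
    # gold_answers: list (one per question) of sets of acceptable answer strings
    total = 0.0
    for preds, gold in zip(ranked_predictions, gold_answers):
        gold_lower = {g.lower() for g in gold}
        rr = 0.0
        for rank, answer in enumerate(preds, start=1):
            if answer.lower() in gold_lower:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_predictions)
```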

4.1 Results on BioASQ 7b

Our results on Task 7b (Phase B) of the BioASQ Challenge are reported in Table 2. Each participant can submit up to five systems per batch. We submitted one to five systems per batch, each using a different combination of pre- and post-processing strategies. We report the rankings and scores of our best performing system and those of other competing systems for each task in Table 2. Competing systems are the best and second best systems, other than ours, from distinct participants. Manually corrected gold-standard answers were not yet available at the time of writing; therefore, we report the scores from the online leaderboard (the official results of the competition will be provided at http://bioasq.org).

Batch 1 (17 systems)
  Yes/no (Macro F1): (1) Ours 67.12; (2) auth-qa-1 53.97; (3) BioASQ_Baseline 47.27
  Factoid (MRR): (1) Ours 46.37; (2) BJUTNLPGroup 34.83; (3) auth-qa-1 27.78
  List (F1): (3) Ours 30.51; (1) Lab Zhu, Fudan Univer 32.76; (4) auth-qa-1 25.94
Batch 2 (21 systems)
  Yes/no (Macro F1): (1) Ours 83.31; (2) auth-qa-1 62.96; (4) BioASQ_Baseline 42.58
  Factoid (MRR): (1) Ours 56.67; (3) QA1 40.33; (4) transfer-learning 32.67
  List (F1): (1) Ours 47.32; (3) LabZhu,FDU 25.79; (5) auth-qa-1 23.21
Batch 3 (24 systems)
  Yes/no (Macro F1): (5) Ours 46.23; (1) unipi-quokka-QA-2 74.73; (3) auth-qa-2 51.65
  Factoid (MRR): (6) Ours 47.24; (1) QA1/UNCC_QA_1 51.15; (3) google-gold-input 50.23
  List (F1): (1) Ours 32.98; (2) auth-qa-1 25.13; (4) BioASQ_Baseline 22.75
Batch 4 (36 systems)
  Yes/no (Macro F1): (2) Ours 79.28; (1) unipi-quokka-QA-1 82.08; (8) bioasq_experiments 58.01
  Factoid (MRR): (1) Ours 69.12; (4) FACTOIDS/UNCC… 61.03; (9) google-gold-input 54.95
  List (F1): (1) Ours 46.04; (2) google-gold-input-nq 43.64; (9) LabZhu,FDU 32.14
Batch 5 (40 systems)
  Yes/no (Macro F1): (1) Ours 82.50; (2) unipi-quokka-QA-5 79.39; (6) google-gold-input-ab 69.41
  Factoid (MRR): (1) Ours 36.38; (3) BJUTNLPGroup 33.81; (4) UNCC_QA_1 33.05
  List (F1): (1) Ours 46.19; (6) google-gold-input-nq 28.89; (7) UNCC_* 28.62

Table 2: Batch results of the BioASQ 7b Challenge. We report the rank of the systems in parentheses.

4.2 Validating on the BioASQ 6b dataset

We compared the performance of existing systems and our system on the BioASQ 6b dataset from the previous year (2018); the results are shown in Table 3. We micro averaged the scores from five experiments and report them in Table 3. Similarly, the leaderboard scores of the best performing system for each batch were micro averaged and reported as the Best System scores [5, 27, 12]. Our system obtained much higher scores on the BioASQ 6b dataset than the top systems on the leaderboard of the BioASQ 6b Challenge.

System | Factoid (MRR) | List (F1) | Yes/no (Macro F1)
Best System | 27.84% | 27.21% | 62.05%
Ours | 48.41% | 43.16% | 75.87%
Table 3: Performance comparison between existing systems and our system on the BioASQ 6b dataset (from the previous year). Note that our system obtained a 20% to 60% performance improvement over the best systems.

4.2.1 Pre-training

In Table 4, we compare the performance of the pre-trained models. BioBERT fine-tuned on the BioASQ 6b dataset outperformed BERT-Base fine-tuned on BioASQ on both factoid and list type questions. BioBERT first pre-trained on SQuAD and then fine-tuned on BioASQ 6b obtained the best performance among the three settings, demonstrating the effectiveness of pre-training BioBERT on SQuAD, a comprehensive and large-scale question answering corpus.

Pre-trained models | Factoid SAcc | Factoid LAcc | Factoid MRR | List Prec | List Recall | List F1
BERT-Base + BioASQ fine-tune | 24.84% | 36.03% | 28.76% | 42.41% | 35.88% | 35.37%
BioBERT + BioASQ fine-tune | 34.16% | 47.83% | 39.64% | 44.62% | 39.49% | 38.45%
BioBERT + SQuAD + BioASQ fine-tune | 42.86% | 57.14% | 48.41% | 51.58% | 43.24% | 43.16%

Table 4: Performance comparison between pre-trained models.

4.2.2 Pre-/Post-processing

The performance of our system is largely affected by how the data is pre-processed (Table 5). However, the effectiveness of the pre-processing strategy varies depending on the type of question. For example, the Appended Snippet strategy and Full Abstract strategy obtained good performance on factoid questions, while the Snippet As-is strategy achieved the highest performance on list and yes/no type questions. Table 6 shows the effect of post-processing on the performance of a system evaluated on list type questions. In our study, both extracting the number of answers from questions and filtering predicted answers were effective.

Strategy | Factoid SAcc | Factoid LAcc | Factoid MRR | List Prec | List Recall | List F1 | Yes/no Macro F1
Snippet | 40.99 | 55.90 | 47.38 | 51.58 | 43.24 | 43.16 | 75.10
Full Abstract | 42.86 | 57.14 | 48.41 | 42.66 | 32.58 | 33.52 | 66.76
Appended Snippet | 39.75 | 58.39 | 48.00 | 44.04 | 41.26 | 39.36 | -
Table 5: Performance comparison between pre-processing methods. Scores on the BioASQ 6b dataset.
Strategy | Precision | Recall | F1
Baseline (Snippet) | 51.58 | 43.24 | 43.16
Baseline without filter | 50.79 | 43.24 | 42.64
Baseline without answer # extraction | 50.01 | 44.32 | 42.58
Table 6: Ablation study on the post-processing methods. Scores for list type questions in the BioASQ 6b dataset.

4.2.3 Ensemble

Starting from test batch 4 of BioASQ 7b, we submitted model ensemble results as one of our systems. The performance gain of the model ensemble on our evaluation set was relatively small, ranging from 0.2% to 2% depending on the task. The model ensemble improved the performance on factoid questions the most (a 2% gain), but applying it to list questions did not yield higher performance than the single model. Although the model ensemble obtained high scores in the BioASQ 7b Challenge, it achieved the highest score only on factoid type questions in batch 5.

4.2.4 Qualitative Analysis

In Table 7, we show three predictions generated by our system on the BioASQ 6b factoid dataset. Due to space limitations, we show only the small parts of each passage that contain the answers (predicted answers may also appear in other parts of the passage). We show the top five predictions generated by our system, a format which can also be used for list type questions. In the first example, our system successfully finds the answer along with other plausible answers. The second example shows that most of the predicted answers are correct and differ only slightly from one another. In the last example, we observe that the ground truth answer does not exist in the passage; moreover, the predicted answers are indeed correct despite the incorrect annotation.

No. Type Description
1 Question What causes “puffy hand syndrome?”
Passage Puffy hand syndrome is a complication of intravenous drug abuse,
which has no current available treatment.
Ground Truth “intravenous drug abuse”
Predicted Answer “intravenous drug abuse”,
“drug addiction”,
“Intravenous drug addiction”,
“staphylococcal skin infection”,
“major depression”
2 Question In which syndrome is the RPS19 gene most frequently mutated?
Passage A transgenic mouse model demonstrates a dominant negative effect
of a point mutation in the RPS19 gene associated
with Diamond-Blackfan anemia.
Ground Truth “Diamond-Blackfan Anemia”,
“DBA”
Predicted Answer “Diamond-Blackfan anemia”,
“Diamond-Blackfan anemia (DBA)”,
“DBA”,
“Diamond Blackfan anemia”,
“Diamond-Blackfan anemia. Diamond-Blackfan anemia”
3 Question What protein is the most common cause of hereditary renal amyloidosis?
Passage We suspected amyloidosis with fibrinogen A alpha chain deposits,
which is the most frequent cause of hereditary amyloidosis in Europe,
with a glomerular preferential affectation.
Ground Truth “Fibrinogen A Alpha protein”
Predicted Answer “fibrinogen”,
“fibrinogen alpha-chain. Variants of circulating fibrinogen”,
“fibrinogen A alpha chain (FGA)”,
“Fibrinogen A Alpha Chain Protein. Introduction: Fibrinogen”,
“apolipoprotein AI”
Table 7: Predictions by our BioBERT based QA system on the BioASQ 6b factoid dataset

The prediction for a list type question from BioASQ 6b is presented in Table 8. We found that our system is more likely to produce incorrect predictions on list questions than on factoid questions. Our system internally outputs a list of predictions, and this list is likely to include predictions with erroneous spans. Even though an incorrect prediction with an erroneous span ("JBP") has a lower probability than the true predictions ("JBP1" and "JBP2"), its absolute probability can still be considerable. For factoid questions, only the top answer is selected, so such incorrect predictions can be ignored. For list questions, by contrast, a prediction with an erroneous span can accumulate probability when predictions are merged in the post-processing step. Since our model uses a fixed threshold value, the erroneous prediction, although imperfect, can obtain a probability higher than the threshold and thus appear in the answer list.

No. Type Description
1 Question Which enzymes are responsible for base J creation in Trypanosoma brucei?
Passage JBP1 and JBP2 are two distinct thymidine hydroxylases involved in
J biosynthesis in genomic DNA of African trypanosomes.
Here we discuss the regulation of hmU and base J formation in the
trypanosome genome by JGT and base J-binding protein.
Ground Truth “JBP1”,
“JBP2”,
“JGT”
Predicted Answer “JBP1”,
“JBP”,
“thymidine hydroxylase”,
“JGT”,
“hmU”,
“JBP2”
Table 8: Prediction by our BioBERT based QA system on the BioASQ 6b list dataset

5 Conclusion

In this paper, we proposed a BioBERT-based QA system for the BioASQ biomedical question answering challenge. As biomedical question answering datasets are very small, we leveraged a language model pre-trained on the biomedical domain, which effectively exploits knowledge from large biomedical corpora. Also, while existing systems for the BioASQ challenge require different structures for different question types, our system uses almost the same structure for various question types. By exploring various pre-/post-processing strategies, our BioBERT-based system obtained the best performance in the 7th BioASQ Challenge, achieving state-of-the-art results on factoid, list, and yes/no type questions. In future work, we plan to systematically analyze the incorrect predictions of our system and develop biomedical QA systems that can eventually outperform humans.

Acknowledgements

We appreciate Susan Kim for editing the manuscript.

Funding

This work was funded by the National Research Foundation of Korea (NRF-2017R1A2A1A17069645, NRF-2016M3A9A7916996) and the National IT Industry Promotion Agency grant funded by the Ministry of Science and ICT and Ministry of Health and Welfare (NO. C1202-18-1001, Development Project of The Precision Medicine Hospital Information System (P-HIS)).

References

  • [1] E. Alsentzer, J. R. Murphy, W. Boag, W. Weng, D. Jin, T. Naumann, and M. McDermott (2019) Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323. Cited by: §1.
  • [2] I. Beltagy, A. Cohan, and K. Lo (2019) SciBERT: pretrained contextualized embeddings for scientific text. arXiv preprint arXiv:1903.10676. Cited by: §1.
  • [3] BioASQ (May 2019) BioASQ participants area. http://participants-area.bioasq.org/. Cited by: §4.
  • [4] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §2.1.
  • [5] D. Dimitriadis and G. Tsoumakas (2019) Word embeddings and external resources for answer processing in biomedical factoid question answering. Journal of biomedical informatics 92, pp. 103118. Cited by: §4.2.
  • [6] D. Kim, J. Lee, C. H. So, H. Jeon, M. Jeong, Y. Choi, W. Yoon, M. Sung, and J. Kang (2019) A neural named entity recognition and multi-type normalization tool for biomedical text mining. IEEE Access 7, pp. 73729–73740. Cited by: §2.1.
  • [7] A. Krithara, A. Nentidis, G. Paliouras, and I. Kakadiaris (2016-08) Results of the 4th edition of BioASQ challenge. In Proceedings of the Fourth BioASQ Workshop, Berlin, Germany, pp. 1–7. Cited by: §1.
  • [8] J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C. H. So, and J. Kang (2019-09) BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Cited by: §1, §2.1, §2.2, §2.
  • [9] S. Lim and J. Kang (2018) Chemical–gene relation extraction using recursive neural network. Database 2018. Cited by: §1.
  • [10] A. Nentidis, K. Bougiatiotis, A. Krithara, G. Paliouras, and I. Kakadiaris (2017) Results of the fifth edition of the bioasq challenge. In BioNLP 2017, pp. 48–57. Cited by: §1.
  • [11] A. Nentidis, A. Krithara, K. Bougiatiotis, G. Paliouras, and I. Kakadiaris (2018-11) Results of the sixth edition of the BioASQ challenge. In Proceedings of the 6th BioASQ Workshop on large-scale biomedical semantic indexing and question answering, Brussels, Belgium, pp. 1–10. Cited by: §1.
  • [12] S. Peng, Y. Zhang, R. You, Z. Xie, B. Wang, and S. Zhu (2015) The fudan participation in the 2015 bioasq challenge: large-scale biomedical semantic indexing and question answering. In CEUR Workshop Proceedings, Vol. 1391. Cited by: §4.2.
  • [13] M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2227–2237. Cited by: §1, §2.1.
  • [14] S. Pyysalo, F. Ginter, H. Moen, T. Salakoski, and S. Ananiadou (2013) Distributional semantics resources for biomedical text processing. Proceedings of LBM, pp. 39–44. Cited by: §1, §2.1.
  • [15] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding with unsupervised learning. Technical report, OpenAI. Cited by: §1, §2.1.
  • [16] P. Rajpurkar, R. Jia, and P. Liang (2018) Know what you don’t know: unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Cited by: §1, §2.3.
  • [17] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Cited by: §1, §2.3, §3.1.
  • [18] A. Rosso-Mateus, F. A. González, and M. Montes-y-Gómez (2018) MindLab neural network approach at bioasq 6b. In Proceedings of the 6th BioASQ Workshop A challenge on large-scale biomedical semantic indexing and question answering, pp. 40–46. Cited by: §1.
  • [19] D. Sousa, A. Lamurias, and F. M. Couto (2019) Using neural networks for relation extraction from biomedical literature. arXiv preprint arXiv:1905.11391. Cited by: §2.1.
  • [20] A. Talmor and J. Berant (2019) MultiQA: an empirical investigation of generalization and transfer in reading comprehension. arXiv preprint arXiv:1905.13453. Cited by: §1.
  • [21] G. Tsatsaronis, G. Balikas, P. Malakasiotis, I. Partalas, M. Zschunke, M. R. Alvers, D. Weissenborn, A. Krithara, S. Petridis, D. Polychronopoulos, et al. (2015) An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC bioinformatics 16 (1), pp. 138. Cited by: §1, §2.3.
  • [22] J. Turian, L. Ratinov, and Y. Bengio (2010) Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 384–394. Cited by: §2.1.
  • [23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §2.1.
  • [24] G. Wiese, D. Weissenborn, and M. Neves (2017) Neural domain adaptation for biomedical question answering. arXiv preprint arXiv:1706.03610. Cited by: §1, §3.1.
  • [25] G. Wiese, D. Weissenborn, and M. Neves (2017) Neural question answering at bioasq 5b. arXiv preprint arXiv:1706.08568. Cited by: §1.
  • [26] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. (2016) Google's neural machine translation system: bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Cited by: §2.1.
  • [27] Z. Yang, Y. Zhou, and E. Nyberg (2016) Learning to answer biomedical questions: oaqa at bioasq 4b. In Proceedings of the Fourth BioASQ workshop, pp. 23–37. Cited by: §4.2.
  • [28] W. Yoon, C. H. So, J. Lee, and J. Kang (2019) CollaboNet: collaboration of deep neural networks for biomedical named entity recognition. BMC bioinformatics 20 (10), pp. 249. Cited by: §1.
  • [29] H. Zhu, I. C. Paschalidis, and A. Tahmasebi (2018) Clinical concept extraction with contextual word embedding. arXiv preprint arXiv:1810.10566. Cited by: §1.