Progress Notes Classification and Keyword Extraction using Attention-based Deep Learning Models with BERT

10/13/2019 ∙ by Matthew Tang, et al. ∙ University of Illinois at Urbana-Champaign ∙ Indiana University IUPUI

Despite recent advances in applying deep learning algorithms to various kinds of medical data, classifying clinical text and extracting information from narrative clinical notes remain challenging tasks. The challenges of representing, training, and interpreting document classification models are amplified when dealing with small data sets from the clinical domain. The objective of this research is to investigate attention-based deep learning models for classifying de-identified clinical progress notes extracted from a real-world EHR system. The attention mechanism can be used to interpret the models and to identify the critical words that drive the correct or incorrect classification of the clinical progress notes. The attention-based models in this research are thus capable of presenting human-interpretable text classification. The results show that fine-tuned BERT with an attention layer can achieve a classification accuracy of 97.6%, higher than the baseline fine-tuned BERT classification model. Furthermore, we demonstrate that the attention-based models can identify relevant keywords that strongly relate to the corresponding clinical categories.







I Introduction

Deep learning algorithms have been applied to different tasks of text mining and natural language processing, such as identifying parts of speech [17] [13], entity extraction [3] [24], sentiment analysis [34], text classification [9], and other aspects of text [2]. In recent years, applications of deep learning and text mining algorithms to medical data have gained a lot of attention. Research has been done on making use of EHR clinical notes for clinical decision support. Typically, the ‘free-text’ clinical notes include discharge summaries, nursing reports, and progress notes, which contain patients’ medical history, family history, treatment history, and so on. Managing and extracting key information from clinical notes with learning algorithms remains challenging, in part because the amount of publicly available clinical text data is often limited.

In this research, we develop attention-based deep learning models for classifying a set of clinical progress notes belonging to 12 different clinical categories. These progress notes are extracted from a large institutional health care center. The models are tested on their ability to assign each progress note to the corresponding category based on its content. Most deep learning models require a large amount of training data; we therefore employ transfer learning by making use of word or token embeddings from models pre-trained on extensive text collections. To investigate how attention-based deep learning models perform for progress note classification, we train and evaluate the attention-based approach with several deep learning models, including the most recent language model BERT [5] and a bidirectional long short-term memory (BiLSTM) model [10]. The results show that the BERT model with an additional attention layer can achieve a high classification accuracy of 97.6%, which is higher than the base fine-tuned BERT classification model.

Typical deep neural networks perform as “black boxes”: it is hard to extract the details of how they represent and process information. Attention-based deep learning models have a built-in mechanism that can be used to identify the keywords that drive the network to assign a progress note to a clinical category. We investigate both token embeddings and word embeddings for attention weight calculation to extract the keywords that explain the classification reasoning. To visualize these keywords, we select correctly classified sentences and highlight the words that have high attention weights. Then, we compute the most frequent words for each category to examine whether the attention mechanism validly identifies the important words. These analyses yield insights into the models’ interpretation of the clinical notes, allowing us to automatically extract the keywords of sentences that are most relevant to the corresponding category.

The rest of the paper is organized as follows: Section II presents related work; the system design and models are detailed in Section III; Section IV describes the data set; experimental results are given in Section V; Section VI discusses the results; and Section VII concludes the work and lists future work.

II Background and Related Work

A wide variety of machine learning models have been applied to text document classification, such as k-Nearest-Neighbor, support vector machines (SVMs) [12], convolutional neural networks (CNNs) [4], and recurrent neural networks (RNNs) [18]. In the clinical domain, document classification algorithms have been used to predict cancer stage information in clinical records [33], to classify radiology reports by ICD-9 code [7], and to classify whether or not a patient has psychological stress [28]. The most recent research shows that categorizing ‘free text’ clinical notes is often related to other tasks of analyzing the content of Electronic Health Records (EHRs) for decision support. These tasks include information extraction and information representation generation. Information extraction often refers to biomedical concept and event extraction, such as extracting gene expressions [38], symptoms [11], diseases (including abbreviations) [16], drug-to-drug interactions [36] [30], and so on. Other text analyses and NLP applications in the clinical field are relevant to clinical outcome prediction [23].

In recent years, the distributed representation of words or concepts, called embedding, has gained interest in the research areas of text mining, natural language processing, and health informatics [20] [21] [25]. Embeddings have been studied for biomedical text classification and clustering [25] [39] and biomedical entity extraction [32], where a word is the basic unit of a text document and the word embeddings are learned through neural networks, including CNNs or LSTMs. The most recent text embedding model is BERT [5], which consists of a multi-layer bidirectional Transformer encoder. BERT processes text as tokens, since it is trained on unsupervised tasks to predict masked tokens. To the best of the authors’ knowledge, BERT has not been investigated for clinical text classification and keyword extraction.

Both LSTM and bidirectional LSTM (BiLSTM) have been widely used for biomedical or chemical entity recognition and extraction from textual data [19] [27] [31]. However, LSTM and BiLSTM have rarely been used for clinical text classification and keyword extraction, although the most recent research used BERT and BiLSTM for medical text inference [15].

One of the most recent studies used the attention mechanism with LSTM to predict daily sepsis, myocardial infarction (MI), and vancomycin antibiotic administration by analyzing patients’ ICU data in the MIMIC-III data set [14]. That research demonstrates that the attention mechanism can extract the influential input variables that are related to the predictions. Different from the previous study, where the attention mechanism is used for classifying data, we aim to extract the keywords in the text that drive the classification using the attention mechanism. This approach enables human-interpretable clinical text classification.

III System Design and Models

In this research, we develop and evaluate the neural attention mechanism with two different embeddings and text classification strategies. In each model, the attention layer is placed before the classification layer to extract the keywords that most influence the classification. Figure 1 shows our system design. The input to the system is words or tokens. The first layer converts the words into embeddings. The second layer is the information processing layer, which can be a designed neural network, such as a BiLSTM, that processes the embeddings. The embedding layer can be BERT, Word2Vec, or other embeddings that better fit the application domain. The attention layer sits before the classification layer and is used to identify the importance of the words for text classification. The following subsections present the details of the text embedding, the text classification models, and the attention mechanism for keyword extraction.

Fig. 1: Attention Based Model System Design

III-A BERT

The BERT model architecture is based on a multi-layer Transformer encoder, originally implemented by Vaswani et al. [26]. Devlin et al. [5] introduced the BERT Transformer based on bidirectional self-attention. The bidirectional mechanism removes the restriction that self-attention can only incorporate context from one side, left or right. Different from other embedding generation architectures, such as Word2Vec [20], the input to the BERT model is not a set of vectors that represent words. Instead, the input combines token, segment, and position embeddings. The token embeddings are WordPiece embeddings [29] drawn from a 30k-token vocabulary.
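To illustrate how WordPiece splits an out-of-vocabulary word into known subword units, the following is a minimal sketch of the greedy longest-match rule; the tiny vocabulary and the example words are illustrative assumptions, not BERT's real 30k-entry vocabulary.

```python
def wordpiece_tokenize(word, vocab):
    """Greedily split `word` into the longest subwords found in `vocab`;
    non-initial pieces carry the '##' continuation prefix."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:               # continuation pieces are prefixed
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:               # no piece matched: unknown token
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens

vocab = {"para", "##thyroid", "knee", "[UNK]"}
print(wordpiece_tokenize("parathyroid", vocab))  # ['para', '##thyroid']
print(wordpiece_tokenize("knee", vocab))         # ['knee']
```

This subword behavior matters later in Section III-E, where several tokens of one word may carry different attention weights and must be merged back into a whole word.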

The base BERT model is pre-trained using two unsupervised tasks: (1) Masked Language Model (LM), a task to predict randomly masked tokens in the input, whose objective is to train a bidirectional encoder; and (2) Next Sentence Prediction (NSP), a task to predict the sentence that follows the input sentence, whose objective is to capture sentence relationships so that the pre-trained BERT model can be better adapted to other NLP applications, such as Question Answering (QA) and Natural Language Inference (NLI), where sentence relationships are crucial. In this research, we make use of the base BERT model available in TensorFlow Hub [6]. It has 12 Transformer blocks, 12 self-attention heads, and a hidden size of 768.

The BERT base model can be fine-tuned for text classification by simply adding a softmax classification layer on top of the BERT model to predict the class of a given text sequence, as in Equation 1. The input to the softmax layer is the last hidden layer output $H$ of the first token, which represents the original text sequence:

$$P = \mathrm{softmax}(HW^{T}) \tag{1}$$

where $W$ are the parameters of the classification layer. They are fine-tuned together with all the parameters from BERT to maximize the log-probability of the correct label.
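The classification head of Equation 1 can be sketched in a few lines of numpy; the random vectors stand in for a real BERT hidden state and trained weights, and the shapes (768-d hidden vector, 12 classes for the 12 clinical categories) are the only facts carried over from the text.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
hidden, num_classes = 768, 12       # BERT-base hidden size, 12 categories
H = rng.standard_normal(hidden)     # stand-in for the first token's output
W = rng.standard_normal((num_classes, hidden))  # classification layer

P = softmax(W @ H)                  # class probabilities, as in Equation 1
print(P.shape, round(float(P.sum()), 6))  # (12,) 1.0
```

During fine-tuning, the cross-entropy loss on `P` is backpropagated through `W` and all BERT parameters jointly, which is what "fine-tuned with all the parameters from BERT" refers to.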

III-B BERT with Attention Layer

Although the fine-tuned base BERT model can be used for text classification, it is not easy to extract the keywords that drive the decision process of the model. Hence, we propose adding an attention layer before the classification layer to capture the attention of the neural network on each token. The base BERT model offers two output options for connecting the model to a specific language task: sentence-level output and token-level output. In this research, the attention layer is built on top of the token-level output. The embedding representations of the tokens are concatenated into vectors to represent a document. The number of neurons in the attention layer is defined by the maximum number of tokens in the text collection. The output of the attention layer then connects to the classification layer through the softmax activation function, which captures the relationship between the tokens and the output.

In this case, the attention weight of each token is given by the softmax of the output of the attention layer, which is used to identify the importance of the tokens. It is worth noting that the attention weights are not directly applied to the classification layer.


III-C BiLSTM

The Long Short-Term Memory (LSTM) network [8] is a type of recurrent neural network (RNN) capable of connecting previous data inputs to the current operation. Unlike a traditional RNN, an LSTM cell contains four gates interacting with each other in different ways. Each LSTM cell considers three inputs: the current input, the preceding cell state, and the output of the preceding state. The LSTM network relies on the state of its cells, which is updated through four gates: the forget gate, which removes irrelevant data received from the preceding hidden state; the input gate, which determines which values are to be updated; the input modulation gate, where a vector of new values, known as candidate values, is created to be added to the current cell state; and the output gate, which determines what information to output. LSTM solves the vanishing gradient problem of the traditional RNN. To extend its capability to connect inputs from two directions, the bidirectional LSTM (BiLSTM) was introduced [10]. A BiLSTM comprises two independent LSTM networks that generate an output for a given input $x_t$. One network traverses the information from the past to the future, known as the forward pass $\overrightarrow{h_t}$, and the other traverses the information from the future to the past, known as the backward pass $\overleftarrow{h_t}$. An element-wise sum is used to combine the outputs of the forward pass and the backward pass, as in Equation 2:

$$h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t} \tag{2}$$


BiLSTM has been used for text classification [37]. Typically, the inputs to the BiLSTM are word embeddings. The number of neurons in each layer is the maximum length of the input document measured in words. The bidirectional nature of the BiLSTM incorporates the context from both sides of an input word sequence.
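The element-wise combination in Equation 2 can be sketched as follows; the hidden-state matrices here are random stand-ins rather than outputs of a real LSTM, and the sequence length and hidden size are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 5, 8                          # sequence length, hidden size
h_fwd = rng.standard_normal((T, d))  # forward-pass hidden states
h_bwd = rng.standard_normal((T, d))  # backward-pass hidden states

h = h_fwd + h_bwd                    # Equation 2: element-wise sum per step
print(h.shape)                       # (5, 8)
```

Element-wise summation keeps the combined output the same size as each directional output; concatenation, which doubles the dimension, is a common alternative, but the sum matches Equation 2 as stated.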

III-D BiLSTM with Attention Layer

In this research, we investigate an attention layer on top of the BiLSTM layer to capture the important words that drive the document classification decisions. In the attention layer, we introduce attention weights $\alpha$ (Equation 3) to measure the relationship between the current target word and all words in the document. The attention weights are calculated from $H = [h_1, \dots, h_T]$, the matrix of output vectors of the BiLSTM, and $w^{T}$, the transpose of a trained parameter vector:

$$\alpha = \mathrm{softmax}(w^{T}H) \tag{3}$$

The attention weights reflect the importance of the words for the classification output. The output of the attention layer and the output of the BiLSTM are then used to calculate the context vector $c$, defined in Equation 4, which is fed to the classification layer to predict the category of the input document:

$$c = H\alpha^{T} \tag{4}$$

III-E Attention Weights and Keyword Extraction

Using the attention weights obtained from the attention layer, we extract the keywords from each input text document. Given the sequence of attention weights $\alpha_1, \dots, \alpha_T$ obtained from an input document, we first identify the word or token with the highest attention weight, $\alpha_{\max}$. Then, we calculate the difference of each of the remaining attention weights from $\alpha_{\max}$, $d_i = \alpha_{\max} - \alpha_i$. The value of the $p$-th percentile of these differences is used as a threshold (Equation 5) to find the important words or tokens of the input document:

$$\theta = P_{p}\left(\{d_i\}\right), \quad \text{token } i \text{ is important if } d_i \le \theta \tag{5}$$

The tokens are then combined into words. We notice that with the BERT model, in some situations, not all tokens of a word have the same level of attention weight. In that situation, if the attention weight of any one of the tokens passes the threshold, the whole word is extracted as a keyword. For all implemented models in this research, $p$ is set to 10.
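The percentile-threshold rule of Equation 5 can be sketched as below; the token list and attention weights are made-up toy values, and `np.percentile` with $p = 10$ plays the role of the threshold described in the text.

```python
import numpy as np

def extract_keywords(tokens, weights, p=10):
    """Keep tokens whose gap to the maximum attention weight falls
    within the p-th percentile of all gaps (Equation 5)."""
    weights = np.asarray(weights, dtype=float)
    gaps = weights.max() - weights          # distance of each token to alpha_max
    threshold = np.percentile(gaps, p)      # p-th percentile of the gaps
    return [t for t, g in zip(tokens, gaps) if g <= threshold]

tokens = ["patient", "presents", "with", "knee", "pain"]
weights = [0.10, 0.05, 0.02, 0.55, 0.28]
print(extract_keywords(tokens, weights))    # ['knee']
```

A small $p$ keeps only tokens close to the maximum weight; raising $p$ would admit more of the document as keywords.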


IV Data Set Description

The data set used in this study was extracted from a large academic medical center’s EHR system. In total, 3981 clinical progress notes were extracted, belonging to 12 different categories. Table I shows the number of notes in each category. The progress notes are ‘free text’ documents with many different sections, such as ‘date’, ‘patient name’, ‘gender’, ‘age’, ‘medication’, ‘allergies’, ‘history of present illness’, and so on. In this research, we use the ‘history of present illness’ section, which contains the most ‘free text’ information, to demonstrate our system. The length of the ‘history of present illness’ section, measured in words, varies; however, the majority of the documents contain fewer than 300 words in that section. Figure 2 shows the length distribution of the documents with fewer than 300 words. Since the base BERT model can only process documents with fewer than 512 tokens, we choose the documents with fewer than 250 words for this research.

Category Document No.
Breast Care 1965
Urology 98
Bariatrics 33
Dermatology 75
Endo-Diab 263
Geriatrics 45
GI-Gen 55
Nephrology 48
Orthopedics 253
Pain Management 42
Pulmonary 86
Sleep Med 36
TABLE I: Clinical Progress Notes in the Categories

Fig. 2: Length distribution of Progress Notes of our dataset

V Experimental Results

In this research, we implement three different models with the attention mechanism. One is BERT with an attention layer (FT-BERT+Att); the other two are BiLSTM models with an attention layer using different embeddings: one uses a simple one-hot encoder embedding (OE+Att+BiLSTM), and the other uses pre-trained BERT-based token embeddings (PT-BERT+Att+BiLSTM). The reason for considering these two embeddings is to investigate how much impact the embedding has on the performance of text classification and keyword extraction. The one-hot encoder embedding captures minimal semantic relationships between words, and the token embeddings represent smaller units than words.

V-a Classification Accuracy

Table II shows the classification accuracy of the implemented models. We also include the basic fine-tuned BERT without an attention layer as a comparison, to demonstrate state-of-the-art text classification using BERT. The results show that the fine-tuned BERT models work better than the BiLSTM-based models, and the fine-tuned BERT model with the attention layer works best. We also note that the high overall accuracy is mainly due to the high accuracy of all three models on the category Breast Care. None of the models work well on the categories with a smaller number of instances, such as Bariatrics.

Models Training Test
Basic Fine-Tuned BERT Text Classification 99.8% 95.5%
Fine-Tuned BERT+Attention Layer (FT-BERT+Att) 99.9% 97.6%
Pre-trained BERT+Attention Layer+BiLSTM 95.6% 93.8%
One-hot Encoder+Attention Layer+BiLSTM 90.2% 94.2%
TABLE II: Classification Accuracy of the Deep Learning Models

V-B Keywords Extraction based on Attention Weights

V-B1 Visualization of the Keywords with High Attention Weights

Figures 3 to 5 show visualizations of the keywords whose attention weights exceed the specified threshold, in sentences, with respect to the classification of the document by the FT-BERT+Att model, the PT-BERT+Att+BiLSTM model, and the OE+Att+BiLSTM model, respectively. Sentences in documents that are correctly classified are colored green, while sentences in misclassified documents are colored red. The saturation of the colors corresponds to the attention weights calculated by the models.

Figure 3 presents example sentences with the high-attention words identified by the FT-BERT+Att model. The first sentence is correctly classified to the category Orthopedics, the branch of medicine dealing with the correction of deformities of bones or muscles. The most important keywords identified by the attention weights are ‘Twyla’, ‘female’, ‘injection’, and ‘knee’. The word ‘knee’ is closely related to this category compared to the other three words. ‘Twyla’ is the name of the patient; however, its attention weight is relatively high in the document. We checked that among the 253 documents in this category, 13 mention this patient name in the ‘history of present illness’ section; this frequency might explain its high attention weight. The second sentence is correctly classified to the category Breast Care, a clinic that offers breast health care, including screening, diagnosis, treatment options, symptom management, and so on. The words with high attention weights are ‘invasive’, ‘ductal’, and ‘carcinoma’; together, these words name a specific type of breast cancer. The third sentence is correctly classified to the category Endo-Diab, a branch of medicine that deals with hormones and the glands that produce them. Only one word, ‘parathyroid’, has a high attention weight; it is a type of hormone usually seen by Endo-Diab physicians. The last sentence, colored in red, is misclassified as Breast Care; the correct category is Pulmonary. Based on the content and the identified keywords, we can tell that the attention word ‘breast’ misled the classification.

Fig. 3: Keywords with High Attentions Weights using FT-BERT+Att Model

Figure 4 presents example sentences with the high-attention words identified by the PT-BERT+Att+BiLSTM model. The first sentence is correctly classified to the category Pain Management, the branch of medicine that applies science to the reduction of pain. The most important keyword identified in this example is ‘pain’, which is closely related to this category. The second sentence is correctly classified to the category Nephrology, a branch of medical science that deals with diseases of the kidneys. The word with a high attention weight is ‘hemoglobin’, a type of blood test related to chronic kidney disease (CKD). The third sentence is correctly classified to the category Breast Care; the identified high-attention words ‘woman’ and ‘breast cancer’ are closely related to this category. The last example is misclassified as GI-Gen: the attention keywords ‘changes’, ‘traumatic’, ‘foley’, and ‘patient’ are not strongly related to the true category, Dermatology.

Fig. 4: Keywords with High Attentions Weights using PT-BERT+Att+BiLSTM Model

Figure 5 presents example sentences with the high-attention words identified by the OE+Att+BiLSTM model. The first sentence is correctly classified to the category Dermatology, the branch of medicine that conducts clinical and basic investigations of skin biology and researches the diagnosis and treatment of skin disease. The most important keywords identified by the attention weights are ‘breakouts’, ‘mouth’, and ‘doxycycline’. ‘Doxycycline’ is a medicine used to treat many different bacterial infections, such as acne. ‘Breakouts’ and ‘mouth’ correlate with ‘doxycycline’ in this case, so they also have high attention weights. The second sentence is correctly classified to the category Sleep Med, the medical branch devoted to the diagnosis and therapy of sleep disturbances and disorders. The words with high attention weights are ‘patient’, ‘bipap’, ‘hypercapnic’, and ‘respiratory’. BiPAP is a sleep apnea treatment, and ‘hypercapnic respiratory failure’ is related to BiPAP in this case. The third sentence is correctly classified to the category Breast Care; the high-attention words ‘biopsy’, ‘invasive’, and ‘carcinoma’ are all highly related to breast cancer. The last sentence, colored in red, is misclassified as Endo-Diab; the correct category is Pulmonary. Although the content mentions ‘lung cancer’ and is thus related to Pulmonary, ‘lung’ is not captured as an attention word in this case; hence, the document is misclassified.

Fig. 5: Keywords with High Attentions Weights using OE+Att+BiLSTM Model
Methods Categories
Breast Care Urology Bariatrics Dermatology Endo-Diab Geriatrics
FT-BERT+Att carcinoma patient levels old preg today
breast urinary gas history last pain
negative today diet allergies patient staff
left ambulatory states presents diabetes day
showed hematuria weight breast follow presents
patient seen well none cancer daily
biopsy procedure food oral been sleep
carcinomy states walking skin hyper headache
invasive year pain past thyroid nursing
mass report vomiting right met old
PT-BERT breast states food skin type pain
+Att+BiLSTM left prior diet history breast thigh
ductal urinary vitamins states cancer neuropathy
mastectomy last problems cancer visit leg
chemotherapy cystoscopy bowel lesion medication hurt
completed atb protein breast high concern
underwent reports regarding derm metformin area
biopsy nocturia levels upper surgery tigan
node urine walking thigh mcg sinusutitis
OE+Att+BiLSTM breast old numbness old old wants
pr retention tingling diabetes diabetes everplus
carcinoma year bilateral years year ability
mastectomy male mostly presents hypothyroidism 300mg
negative taken recurrent abdomen follow walk
biopsy urgency hand months type feet
grade urinary symptoms keratosis symptoms rehab
positive performing years underwent patient feels
mass cancer years clinic back know
retention resident find female patient progress
TABLE III: Top 10 Frequent Words of Categories: Breast Care, Urology, Bariatrics, Dermatology, Endo-Diab, Geriatrics
Methods Categories
GI-Gen Nephrology Orthopedics Pain Mgmt Pulmonary Sleep Med
FT-BERT+Att colonoscopy urine pain pain copd sleep
diarrhea uropathy patient follow history chemotx
pain originally left rated last apnea
dysphagia history state initial follow epworth
recreational pleasant knee bilateral returns polyp
colon year well today old sleepiness
vomiting colon denies treatment visit better
ago cancer right factors today patient
negative symptoms fracture spasms denies time
past mild returns walking cough night
PT-BERT pain creatinine pain severity female cpap
+Att+BiLSTM colonoscopy reviewed female pain sleep quality
denied nonsteroidal knee fluctuates cough use
cancer hemoglobin numbness weather cancer osa
vomiting urinary fracture treatment use download
showed sodium presenting mod winded uses
nausea cancer last bilaterally shortness used
diarrhea urosepsis symptoms factors obstructive sputum
fever ureter presents extremity dyspnea pressure
OE+Att+BiLSTM abdominal old old pain old takes
never september denies severe x2 contiue
discomfort ultrasound female moderate cancer sleepy
diabetes a1c today mod coil due
colonoscopy kidney reports using experience
chills previous presenting cabg narcotics
fine initial follow chemotherapy lots
visit right diabetes back
routine year time
bilateral quite chest
potassium improved returns
TABLE IV: Top 10 Frequent Words of Categories: GI-Gen, Nephrology, Orthopedics, Pain Mgmt, Pulmonary, Sleep Med

V-B2 Frequent Keywords of Each Category

Based on the example analysis, we can tell that attention-weight-based keyword extraction may also extract some words that are not directly related to the category, and each model can include different such words. We therefore investigate the top frequent keywords of each category for the three models. Tables III and IV show the top 10 frequent words of the different categories after removing stop words. For the category Breast Care, which has the largest number of documents, 6 to 7 of the frequent words identified by each model are directly related to the category, and some keywords, such as ‘breast’, ‘carcinoma’, and ‘biopsy’, are identified by all three models.

The results show that for the other categories, only 1 to 3 words are identified by all three models, and those words are normally highly related to the corresponding category, such as ‘urinary’ for Urology and ‘skin’ for Dermatology. There are more overlapping keywords between the FT-BERT+Att model and the PT-BERT+Att+BiLSTM model, likely because both are based on token embeddings. The OE+Att+BiLSTM model captures far fewer related keywords than the other two models for most of the categories. For some categories, such as Bariatrics, the OE+Att+BiLSTM model does not capture any words directly related to the category, and its classification accuracy for that category is as low as 33%. Moreover, after applying the attention threshold and removing stop words, the OE+Att+BiLSTM model cannot identify ten frequent words for some categories, such as Pain Management. We hypothesize that this is because the one-hot encoder embedding captures no semantic relationships between different words; hence, the decision is often based on the repetition of the same words within a category. For example, ‘old’ occurs many times in different categories, so it is identified as a keyword with a high attention weight across categories.

VI Discussion

Both the classification and keyword extraction results demonstrate that attention-based deep learning models are capable of clinical text classification. Even without the attention layer, the fine-tuned BERT model achieves high classification accuracy; with the attention layer, the fine-tuned BERT performs better than the other models. The objective of the attention layer is to extract the keywords or phrases that interpret the network’s decision process for text classification. The selected sentences in Figures 3 to 5 visualize the important words through the calculated attention weights. We often find that when the important words are identified correctly, the classification results are also correct, which shows that the attention layer is effective for interpreting text classification.

The attention-based models also demonstrate that different embedding and classification mechanisms can lead to different results: the captured keywords differ, and some models work better than others. Based on the identified frequent keywords of the categories, we conclude that the embedding layer is crucial for text classification and keyword extraction. We explore token-based embeddings and a simple one-hot embedding in this paper. We expect to explore other embeddings in the future, especially embeddings pre-trained on biomedical data sets, such as Clinical BERT [1] and BioWord2Vec [35].

VII Conclusion and Future Work

In this paper, we examine three attention-based deep learning models for clinical progress note classification and keyword extraction. Two of the models are based on the most recent language embedding model, BERT; the third is based on a simple one-hot encoder embedding. All three models achieve good performance on progress note classification, and through their attention layers we are able to interpret the models’ text classification process: words with high attention weights are the important words associated with the text categories. This research presents interpretable models for text classification and demonstrates the power of the attention-based approach for model interpretability and evaluation.

Future work includes evaluating the models with different embeddings and building attention-based models that incorporate syntactic relationships between words for keyphrase extraction and interpretation.


This research was supported by IU Health and the Department of CIT at IUPUI, with funding from the National Science Foundation and the United States Department of Defense. The authors would also like to thank Dr. Feng Li and Sheila Walter for their support.


  • [1] E. Alsentzer, J. R. Murphy, W. Boag, W. Weng, D. Jin, T. Naumann, and M. McDermott (2019) Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323. Cited by: §VI.
  • [2] A. Chatterjee, U. Gupta, M. K. Chinnakotla, R. Srikanth, M. Galley, and P. Agrawal (2019) Understanding emotions in text using deep learning and big data. Computers in Human Behavior 93, pp. 309–317. Cited by: §I.
  • [3] R. Collobert and J. Weston (2008) A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. Cited by: §I.
  • [4] A. Conneau, H. Schwenk, L. Barrault, and Y. Lecun (2016) Very deep convolutional networks for text classification. arXiv preprint arXiv:1606.01781. Cited by: §II.
  • [5] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §I, §II, §III-A.
  • [6] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Pre-trained bert using tensorflow hub. External Links: Link Cited by: §III-A.
  • [7] V. N. Garla and C. Brandt (2012) Knowledge-based biomedical word sense disambiguation: an evaluation and application to clinical document classification. Journal of the American Medical Informatics Association 20 (5), pp. 882–886. Cited by: §II.
  • [8] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §III-C.
  • [9] J. Howard and S. Ruder (2018) Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Cited by: §I.
  • [10] Z. Huang, W. Xu, and K. Yu (2015) Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Cited by: §I, §III-C.
  • [11] M. Iyer, C. Zou, and X. Luo (2018) Incorporating syntactic dependencies into semantic word vector model for medical text processing. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 659–664. Cited by: §II.
  • [12] T. Joachims (1999) Transductive inference for text classification using support vector machines. In Icml, Vol. 99, pp. 200–209. Cited by: §II.
  • [13] M. F. Kabir, K. Abdullah-Al-Mamun, and M. N. Huda (2016) Deep learning based parts of speech tagger for bengali. In 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), pp. 26–29. Cited by: §I.
  • [14] D. A. Kaji, J. R. Zech, J. S. Kim, S. K. Cho, N. S. Dangayach, A. B. Costa, and E. K. Oermann (2019) An attention based deep learning model of clinical events in the intensive care unit. PloS one 14 (2), pp. e0211057. Cited by: §II.
  • [15] L. Lee, Y. Lu, P. Chen, P. Lee, and K. Shyu (2019) NCUEE at mediqa 2019: medical text inference using ensemble bert-bilstm-attention model. In Proceedings of the 18th BioNLP Workshop and Shared Task, pp. 528–532. Cited by: §II.
  • [16] F. Li, M. Zhang, G. Fu, and D. Ji (2017) A neural joint model for entity and relation extraction from biomedical text. BMC bioinformatics 18 (1), pp. 198. Cited by: §II.
  • [17] J. Li, R. Li, and E. Hovy (2014) Recursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2061–2069. Cited by: §I.
  • [18] P. Liu, X. Qiu, and X. Huang (2016) Recurrent neural network for text classification with multi-task learning. arXiv preprint arXiv:1605.05101. Cited by: §II.
  • [19] L. Luo, Z. Yang, P. Yang, Y. Zhang, L. Wang, H. Lin, and J. Wang (2017) An attention-based bilstm-crf approach to document-level chemical named entity recognition. Bioinformatics 34 (8), pp. 1381–1388. Cited by: §II.
  • [20] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Neural Information Processing Systems, pp. 3111–3119. Cited by: §II, §III-A.
  • [21] S. Moen and T. S. S. Ananiadou (2013) Distributional semantics resources for biomedical text processing. In Proceedings of the 5th International Symposium on Languages in Biology and Medicine, Tokyo, Japan, pp. 39–43. Cited by: §II.
  • [22] M. Rogati and Y. Yang (2002) High-performing feature selection for text classification. In Proceedings of the eleventh international conference on Information and knowledge management, pp. 659–661. Cited by: §II.
  • [23] A. Rumshisky, M. Ghassemi, T. Naumann, P. Szolovits, V. Castro, T. McCoy, and R. Perlis (2016) Predicting early psychiatric readmission with natural language processing of narrative discharge summaries. Translational psychiatry 6 (10), pp. e921. Cited by: §II.
  • [24] C. N. d. Santos and V. Guimaraes (2015) Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008. Cited by: §I.
  • [25] S. Tulkens, S. Suster, and W. Daelemans (2016) Using distributed representations to disambiguate biomedical and clinical concepts. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing, Cited by: §II.
  • [26] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §III-A.
  • [27] X. Wang, Y. Zhang, X. Ren, Y. Zhang, M. Zitnik, J. Shang, C. Langlotz, and J. Han (2018) Cross-type biomedical named entity recognition with deep multi-task learning. Bioinformatics 35 (10), pp. 1745–1752. Cited by: §II.
  • [28] G. I. Winata, O. P. Kampman, and P. Fung (2018) Attention-based lstm for psychological stress detection from spoken language using distant supervision. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6204–6208. Cited by: §II.
  • [29] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. (2016) Google’s neural machine translation system: bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Cited by: §III-A.
  • [30] B. Xu, X. Shi, Z. Zhao, and W. Zheng (2018) Leveraging biomedical resources in bi-lstm for drug-drug interaction extraction. IEEE Access 6, pp. 33432–33439. Cited by: §II.
  • [31] G. Xu, C. Wang, and X. He (2018) Improving clinical named entity recognition with global neural attention. In Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data, pp. 264–279. Cited by: §II.
  • [32] S. Yadav, A. Ekbal, S. Saha, and P. Bhattacharyya (2017) Entity extraction in biomedical corpora: an approach to evaluate word embedding features with pso based feature selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pp. 1159–1170. Cited by: §II.
  • [33] W. Yim, S. W. Kwan, G. Johnson, and M. Yetisgen (2017) Classification of hepatocellular carcinoma stages from free-text clinical and radiology reports. In AMIA Annual Symposium Proceedings, Vol. 2017, pp. 1858. Cited by: §II.
  • [34] L. Zhang, S. Wang, and B. Liu (2018) Deep learning for sentiment analysis: a survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (4), pp. e1253. Cited by: §I.
  • [35] Y. Zhang, Q. Chen, Z. Yang, H. Lin, and Z. Lu (2019) BioWordVec, improving biomedical word embeddings with subword information and mesh. Scientific data 6 (1), pp. 52. Cited by: §VI.
  • [36] Z. Zhao, Z. Yang, L. Luo, H. Lin, and J. Wang (2016) Drug drug interaction extraction from biomedical literature using syntax convolutional neural network. Bioinformatics 32 (22), pp. 3444–3453. Cited by: §II.
  • [37] P. Zhou, Z. Qi, S. Zheng, J. Xu, H. Bao, and B. Xu (2016) Text classification improved by integrating bidirectional lstm with two-dimensional max pooling. arXiv preprint arXiv:1611.06639. Cited by: §III-C.
  • [38] Q. Zhu, X. Li, A. Conesa, and C. Pereira (2017) GRAM-cnn: a deep learning approach with local context for named entity recognition in biomedical text. Bioinformatics 34 (9), pp. 1547–1554. Cited by: §II.
  • [39] Y. Zhu, E. Yan, and F. Wang (2017) Semantic relatedness and similarity of biomedical terms: examining the effects of recency, size, and section of biomedical publications on the performance of word2vec. BMC Medical Informatics and Decision Making 17, pp. 95–103. Cited by: §II.