Enhancing Clinical Information Extraction with Transferred Contextual Embeddings

09/15/2021
by Zimin Wan, et al.

The Bidirectional Encoder Representations from Transformers (BERT) model has achieved state-of-the-art performance on many natural language processing (NLP) tasks. However, little research has examined its effectiveness when the target domain differs from the pre-training corpora, as in biomedical or clinical NLP applications. In this paper, we applied BERT to a widely studied hospital information extraction (IE) task and analyzed its performance in a transfer learning setting. Our model achieved a new state-of-the-art result by a clear margin over a range of existing IE models: on this nursing handover data set, it obtained a macro-average F1 score of 0.438, whereas the previous best deep learning models reached 0.416. In conclusion, we showed that BERT-based pre-trained models can be transferred to health-related documents under mild conditions and with a proper fine-tuning process.
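To make the transfer-learning setup concrete, below is a minimal sketch of fine-tuning a pre-trained BERT checkpoint for clinical IE framed as token classification, evaluated with macro-average F1 (the unweighted mean of per-class F1 scores, as reported above). It assumes the Hugging Face transformers API and the generic "bert-base-cased" checkpoint; the tag set, toy sentence, and hyperparameters are hypothetical placeholders, not the paper's actual nursing-handover corpus or configuration.

```python
import numpy as np
from sklearn.metrics import f1_score
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["O", "B-MED", "I-MED"]  # hypothetical IE tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(LABELS))

# Toy stand-in for a clinical corpus: one token-labelled sentence.
words = ["Gave", "patient", "paracetamol", "overnight"]
tags = [0, 0, 1, 0]  # indices into LABELS

enc = tokenizer(words, is_split_into_words=True, truncation=True)
# Align word-level tags to subword tokens; special tokens get -100
# so both the loss and the metric ignore them.
enc["labels"] = [-100 if i is None else tags[i] for i in enc.word_ids()]
train_dataset = [enc]  # Trainer accepts a list of feature dicts

def compute_metrics(eval_pred):
    """Macro F1 = unweighted mean of per-class F1 scores."""
    logits, gold = eval_pred
    preds = np.argmax(logits, axis=-1)
    mask = gold != -100  # drop padding/special-token positions
    return {"macro_f1": f1_score(gold[mask], preds[mask], average="macro")}

args = TrainingArguments(output_dir="clinical-ie-bert", num_train_epochs=3,
                         learning_rate=3e-5, per_device_train_batch_size=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
                  eval_dataset=train_dataset, compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())
```

In practice the transfer happens in two places: the pre-trained encoder weights are reused as-is, and only a thin token-classification head plus the encoder are updated during fine-tuning on the in-domain clinical data.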
