An Empirical Study of Multi-Task Learning on BERT for Biomedical Text Mining

05/06/2020
by Yifan Peng, et al.

Multi-task learning (MTL) has achieved remarkable success in natural language processing applications. In this work, we study a multi-task learning model with multiple decoders on a variety of biomedical and clinical natural language processing tasks, such as text similarity, relation extraction, named entity recognition, and text inference. Our empirical results demonstrate that the MTL fine-tuned models outperform state-of-the-art transformer models (e.g., BERT and its variants) by 2.0% and 1.3% in the biomedical and clinical domains, respectively. Pairwise MTL further reveals which tasks improve or degrade performance on others, which is particularly helpful when researchers must choose a suitable model for a new problem. The code and models are publicly available at https://github.com/ncbi-nlp/bluebert
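As a minimal sketch of the architecture the abstract describes (one shared BERT encoder fine-tuned jointly, with a lightweight task-specific decoder per task), here is an illustrative PyTorch/Hugging Face transformers version. This is an assumption-laden illustration, not the authors' released code; the task names and label counts are made up for the example.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBert(nn.Module):
    """Shared BERT encoder with one task-specific decoder (head) per task."""

    def __init__(self, encoder_name, task_num_labels):
        super().__init__()
        # Encoder weights are shared and updated by every task's loss.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One linear decoder per task, e.g. {"relation_extraction": 5, "text_inference": 3}.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Sentence-level tasks (similarity, inference, relation extraction) pool
        # the [CLS] token; a token-level task like NER would instead feed
        # out.last_hidden_state through its head position by position.
        cls = out.last_hidden_state[:, 0]
        return self.heads[task](cls)

# Hypothetical task set; joint fine-tuning typically cycles over mini-batches
# from each task's dataset, backpropagating that task's loss through the
# shared encoder.
model = MultiTaskBert("bert-base-uncased",
                      {"relation_extraction": 5, "text_inference": 3})
```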

Related research

- Bioformer: an efficient transformer language model for biomedical text mining (02/03/2023). Pretrained language models such as Bidirectional Encoder Representations...
- LASIGE and UNICAGE solution to the NASA LitCoin NLP Competition (08/10/2023). Biomedical Natural Language Processing (NLP) tends to become cumbersome...
- ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing (02/20/2019). Despite recent advances in natural language processing, many statistical...
- A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks (11/14/2018). Much effort has been devoted to evaluate whether multi-task learning ca...
- Towards reliable named entity recognition in the biomedical domain (01/15/2020). Motivation: Automatic biomedical named entity recognition (BioNER) is...
- Extreme Multi-Domain, Multi-Task Learning With Unified Text-to-Text Transfer Transformers (09/21/2022). Text-to-text transformers have shown remarkable success in the task of m...
- HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing (02/14/2020). Computation-intensive pretrained models have been taking the lead of man...
