Improving Biomedical Pretrained Language Models with Knowledge

by Zheng Yuan, et al.

Pretrained language models have shown success in many natural language processing tasks, and many works explore incorporating knowledge into them. In the biomedical domain, experts have spent decades building large-scale knowledge bases. For example, the Unified Medical Language System (UMLS) contains millions of entities with their synonyms and defines hundreds of relations among entities. Leveraging this knowledge can benefit a variety of downstream tasks such as named entity recognition and relation extraction. To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge base. Specifically, we extract entities from PubMed abstracts and link them to UMLS. We then train a knowledge-aware language model that first applies a text-only encoding layer to learn entity representations, and then a text-entity fusion encoding layer to aggregate them. In addition, we add two training objectives: entity detection and entity linking. Experiments on named entity recognition and relation extraction tasks from the BLURB benchmark demonstrate the effectiveness of our approach. Further analysis on a collected probing dataset shows that our model better captures medical knowledge.
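The two-stage encoding described above can be sketched as follows. This is a toy illustration, not the paper's implementation: the dimensions, the random-projection "encoder", and the soft entity aggregation are all simplifying assumptions standing in for BERT-sized layers and the real UMLS entity table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; KeBioLM uses BERT-sized layers).
hidden = 8       # token hidden size
n_entities = 5   # a tiny stand-in for the UMLS entity embedding table
entity_emb = rng.normal(size=(n_entities, hidden))

def text_only_encoder(token_ids):
    """Stage 1: text-only encoding (a random projection stands in for transformer layers)."""
    W = rng.normal(size=(hidden, hidden))
    x = rng.normal(size=(len(token_ids), hidden))  # pretend token embeddings
    return np.tanh(x @ W)

def link_entities(h):
    """Entity-linking head: softmax over dot-product similarity to entity vectors."""
    scores = h @ entity_emb.T                       # (seq_len, n_entities)
    e = np.exp(scores - scores.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def fusion_encoder(h, probs):
    """Stage 2: text-entity fusion — add the expected entity vector to each token state."""
    ent = probs @ entity_emb                        # soft-aggregated entity representation
    return h + ent

tokens = [3, 1, 4, 1]
h = text_only_encoder(tokens)
fused = fusion_encoder(h, link_entities(h))
print(fused.shape)  # (4, 8)
```

In the actual model, the linking probabilities are supervised by the entity-linking objective (against the UMLS annotations extracted from PubMed), and a separate entity-detection objective marks which tokens belong to mentions.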



Code Repositories


Improving Biomedical Pretrained Language Models with Knowledge [BioNLP 2021]