Biomedical Language Models are Robust to Sub-optimal Tokenization

06/30/2023
by Bernal Jimenez Gutierrez, et al.

Unlike general English, many concepts in biomedical terminology have been coined in recent history by biomedical professionals with the goal of being precise and concise. This is often achieved by concatenating meaningful biomedical morphemes to create new semantic units. Nevertheless, most modern biomedical language models (LMs) are pre-trained using standard domain-specific tokenizers derived from large-scale biomedical corpus statistics, without explicitly leveraging the agglutinating nature of biomedical language. In this work, we first find that standard open-domain and biomedical tokenizers are largely unable to segment biomedical terms into meaningful components. We therefore hypothesize that a tokenizer which segments biomedical terminology more accurately would enable biomedical LMs to perform better on downstream biomedical NLP tasks, especially those that involve biomedical terms directly, such as named entity recognition (NER) and entity linking. Surprisingly, we find that pre-training a biomedical LM with a more accurate biomedical tokenizer does not improve the entity representation quality of the language model, as measured by several intrinsic and extrinsic metrics, including masked language modeling (MLM) prediction accuracy as well as NER and entity linking performance. These quantitative findings, together with a case study that probes entity representation quality more directly, suggest that the biomedical pre-training process is quite robust to instances of sub-optimal tokenization.
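To make the segmentation question concrete, here is a minimal sketch, assuming the HuggingFace transformers library and public access to the two checkpoints named below (neither of which is specified in the abstract), that prints how standard open-domain and biomedical WordPiece tokenizers split agglutinative biomedical terms so the pieces can be compared against the morphemes a domain expert would expect.

    # Sketch: inspect how subword tokenizers segment agglutinative biomedical terms.
    # Assumes `pip install transformers` and Hub access to the checkpoints below.
    from transformers import AutoTokenizer

    # Example terms with the morphemes an expert segmentation would recover.
    terms = {
        "nephropathy": ["nephro", "pathy"],          # kidney + disease
        "hepatotoxicity": ["hepato", "toxicity"],    # liver + poisoning
        "cardiomyopathy": ["cardio", "myo", "pathy"],
    }

    for name in ["bert-base-uncased",                  # open-domain vocabulary
                 "dmis-lab/biobert-base-cased-v1.1"]:  # biomedical vocabulary
        tok = AutoTokenizer.from_pretrained(name)
        print(f"\n== {name} ==")
        for term, morphemes in terms.items():
            pieces = tok.tokenize(term)  # WordPiece segmentation of the term
            print(f"{term:>18}: {pieces}  (expected morphemes: {morphemes})")

Whenever the printed pieces fail to line up with the expected morpheme boundaries, that is an instance of the segmentation failure the abstract refers to as sub-optimal tokenization.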


