Token Classification for Disambiguating Medical Abbreviations

10/05/2022
by Mucahit Cevik, et al.

Abbreviations are an unavoidable yet important part of medical text. Using abbreviations, especially in clinical patient notes, can save time and space, protect sensitive information, and help avoid repetition. However, many abbreviations have multiple senses, and the lack of a standardized mapping system makes disambiguating them a difficult and time-consuming task. The main objective of this study is to examine the feasibility of token classification methods for medical abbreviation disambiguation. Specifically, we explore the capability of token classification methods to handle multiple unique abbreviations in a single text. We use two public datasets to compare the performance of several transformer models pre-trained on different scientific and medical corpora. Our proposed token classification approach outperforms the more commonly used text classification models on the abbreviation disambiguation task. In particular, SciBERT shows strong performance for both token and text classification across the two datasets considered. Furthermore, we find that the abbreviation disambiguation performance of the text classification models becomes comparable to that of token classification only when postprocessing is applied to their predictions, which involves filtering the possible labels for an abbreviation based on the training data.
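The approach can be illustrated with a short sketch (not the authors' implementation): frame disambiguation as token classification with a transformer such as SciBERT, and apply the postprocessing described above by restricting each abbreviation's candidate senses to those observed for it in the training data. The sense inventory, label names, and example sentence below are assumptions for illustration, and the classification head is untrained here; a real run would fine-tune it on sense-annotated notes first.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical sense inventory: "O" for non-abbreviation tokens, plus one
# label per (abbreviation, sense) pair. In practice these come from the data.
SENSES = ["O",
          "pt:patient", "pt:physical_therapy",
          "ra:right_atrium", "ra:rheumatoid_arthritis"]
# Senses observed for each abbreviation in the training data (assumed here),
# used for the label-filtering postprocessing mentioned in the abstract.
TRAIN_SENSES = {"pt": ["pt:patient", "pt:physical_therapy"],
                "ra": ["ra:right_atrium", "ra:rheumatoid_arthritis"]}

MODEL = "allenai/scibert_scivocab_uncased"  # SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Note: the token-classification head is randomly initialised here; its
# outputs are meaningless until fine-tuned on sense-annotated text.
model = AutoModelForTokenClassification.from_pretrained(
    MODEL, num_labels=len(SENSES),
    id2label=dict(enumerate(SENSES)),
    label2id={s: i for i, s in enumerate(SENSES)})

def disambiguate(words):
    """Return (position, word, predicted sense) for every known abbreviation,
    so multiple abbreviations in one text are resolved in a single pass."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]          # (num_subtokens, num_labels)
    preds = []
    for i, word in enumerate(words):
        if word.lower() not in TRAIN_SENSES:     # only abbreviations get a sense
            continue
        # Postprocessing: keep only senses seen for this abbreviation in training.
        allowed = [SENSES.index(s) for s in TRAIN_SENSES[word.lower()]]
        first_subtoken = enc.word_ids(0).index(i)  # word-level prediction
        best = allowed[int(logits[first_subtoken, allowed].argmax())]
        preds.append((i, word, SENSES[best]))
    return preds

print(disambiguate("The pt was referred to pt after an RA flare .".split()))

Because every word receives a prediction in one forward pass, a note containing several distinct abbreviations (and repeated ones) is disambiguated jointly, which is the advantage over running a separate text classifier per abbreviation.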

