Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion

04/11/2018 ∙ by Yue Liu, et al. ∙ Rensselaer Polytechnic Institute, Peking University, Icahn School of Medicine at Mount Sinai

In the medical domain, identifying and expanding abbreviations in clinical texts is vital for both human and machine understanding. It is a challenging task because many abbreviations are ambiguous, especially in intensive care medicine texts, in which phrase abbreviations are frequently used. Beyond the lack of a universal dictionary of clinical abbreviations and of universal rules for abbreviation writing, such texts are difficult to acquire, expensive to annotate, and sometimes confusing even to domain experts. This paper proposes a novel and effective approach -- exploiting task-oriented resources to learn word embeddings for expanding abbreviations in clinical notes. We achieve 82.27% accuracy, close to expert human performance.




1 Introduction

Abbreviations and acronyms appear frequently in the medical domain. According to a popular online knowledge base (www.allacronyms.com), 197,787 of its 3,096,346 stored abbreviations are medical, ranking first among all ten domains it covers. An abbreviation can have over 100 possible expansions (e.g., www.allacronyms.com/_medical/HD) even within the medical domain. Medical record documentation, authored mainly by physicians, other health professionals, and domain experts, is usually written under time pressure and high workload, so notation is frequently compressed with shorthand jargon and acronyms. This is even more evident in intensive care medicine, where information must be expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but the result can be code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all the abbreviations without specific context and/or knowledge. A doctor reading it, however, would know that although “STAT” is widely used as an abbreviation of “statistic”, “statistics”, and “statistical” in most domains, in hospital emergency settings it often means “immediately”. Within medical research, abbreviation expansion by a natural language processing system that automatically analyzes clinical notes may enable knowledge discovery (e.g., relations between diseases) and has the potential to improve communication and quality of care.

Figure 1: Sample Input and Output of the task and intuition of distributional similarity

In this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. To accurately capture the semantics of an abbreviation in its context, we adopt word embeddings, which can be seen as distributional semantic representations and have proven effective [Mikolov et al.2013] for computing the semantic similarity between words based on context, without any labeled data. The intuition of distributional semantics [Harris1954] is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts, so their semantics should be similar. If we know that “respiratory failure” is a possible candidate expansion of “RF” and that its semantics is similar to that of “RF” in intensive care medicine texts, we can determine that it is the correct expansion. Because intensive care medicine texts are a limited resource in which full expansions rarely appear, we exploit abundant and easily accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms the baselines and is close to human performance.

2 Related Work

The task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms [Liu et al.2004, Gaudan et al.2005, Yu et al.2006, Ucgun et al.2006, Stevenson et al.2009]. However, the performance of these supervised methods depends mainly on a large amount of labeled data, which is extremely difficult to obtain for our task, since intensive care medicine texts are a very rare resource in the clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. [Tengstrand et al.2014] proposed a distributional semantics-based approach for abbreviation expansion in Swedish, but they focused only on expanding single words and cannot handle multi-word phrases. In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multi-word expressions.

3 Approach

3.1 Overview

The overview of our approach is shown in Figure 2. Within ICU notes (e.g., the text example in the top-left box of Figure 2), we first identify all abbreviations using regular expressions and then look up all their possible expansions in a domain-specific knowledge base (http://www.allacronyms.com) as candidates. We train word embeddings on the clinical notes together with task-oriented resources, such as the Wikipedia articles of the candidates and medical scientific papers, and compute the semantic similarity between an abbreviation and each of its candidate expansions based on their embeddings (vector representations of words).
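As a rough illustration of the identification step, the sketch below uses a simple regular-expression heuristic for abbreviation-like tokens (short, mostly-uppercase spans). The specific pattern and the uppercase-count filter are our assumptions for illustration, not the exact rules used in the paper:

```python
import re

# Heuristic pattern: a capital letter followed by 1-9 letters, periods,
# or slashes. This is an illustrative assumption, not the paper's pattern.
ABBR_PATTERN = re.compile(r"\b[A-Z][A-Za-z./]{1,9}\b")

def find_abbreviations(text):
    """Return abbreviation-like tokens in a clinical note, keeping only
    matches that contain at least two uppercase letters."""
    return [m.group(0) for m in ABBR_PATTERN.finditer(text)
            if sum(ch.isupper() for ch in m.group(0)) >= 2]

note = "STAT TTE c/w RVS. AKI - no CTA."
print(find_abbreviations(note))  # ['STAT', 'TTE', 'RVS', 'AKI', 'CTA']
```

Each extracted token would then be looked up in the knowledge base to populate its candidate expansion list.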

Figure 2: Approach overview.

3.2 Training embeddings with task-oriented resources

Given an abbreviation as input, we expect the correct expansion to be the candidate most semantically similar to it, which requires that the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, as well as research papers and books written by intensive care medicine fellows. We train word embeddings on these resources together with our clinical notes corpus, since the expansions of the abbreviations in the clinical notes are likely to appear in these resources and to share contexts similar to the abbreviations' contexts.
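The paper trains word2vec on the enriched corpus; as a dependency-free stand-in, the toy sketch below builds count-based co-occurrence vectors over an invented mixed corpus (the sentences are fabricated) to show why enrichment helps: the expansion's tokens occur in nearly the same contexts as the abbreviation, so their vectors align.

```python
from collections import Counter, defaultdict
import math

# Invented mini-corpus: two "clinical note" lines using the abbreviation
# and two "task-oriented" lines using the full expansion.
corpus = [
    "pt with rf on mechanical ventilation",
    "rf requiring intubation overnight",
    "pt with respiratory failure on mechanical ventilation",
    "respiratory failure requiring intubation overnight",
]

WINDOW = 2  # symmetric context window, chosen arbitrarily for the toy
vectors = defaultdict(Counter)
for line in corpus:
    toks = line.split()
    for i, w in enumerate(toks):
        for j in range(max(0, i - WINDOW), min(len(toks), i + WINDOW + 1)):
            if j != i:
                vectors[w][toks[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = (math.sqrt(sum(c * c for c in u.values()))
           * math.sqrt(sum(c * c for c in v.values())))
    return num / den

# "rf" and "failure" share contexts, so they score higher than an
# unrelated word pair.
print(cosine(vectors["rf"], vectors["failure"]))
print(cosine(vectors["rf"], vectors["pt"]))
```

Word2vec learns dense vectors rather than raw counts, but the underlying distributional intuition is the same.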

3.3 Handling Multi-Word Phrases

In most cases, an abbreviation’s expansion is a multi-word phrase. Therefore, we need to obtain the phrase’s embedding so that we can compute its semantic similarity to the abbreviation.

It has been shown that a phrase's embedding can be effectively obtained by summing the embeddings of the words it contains [Mikolov et al.2013, Socher et al.2013]. To compute a phrase's embedding, we formally define a candidate c as the list of words it contains; for example, one of MICU's candidate expansions is medical intensive care unit = [medical, intensive, care, unit]. Then c's embedding can be computed as follows:

E(c) = Σ_{w ∈ c} E(w)

where w is a token in the candidate c and E(·) denotes the embedding of a word/phrase, which is a vector of real-valued entries.
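A minimal sketch of this summation, using toy 3-dimensional vectors (the values below are invented, not learned embeddings):

```python
import numpy as np

# Toy embeddings for the words of one MICU candidate expansion.
emb = {
    "medical":   np.array([0.2, 0.1, 0.0]),
    "intensive": np.array([0.1, 0.3, 0.1]),
    "care":      np.array([0.0, 0.2, 0.2]),
    "unit":      np.array([0.1, 0.0, 0.3]),
}

def phrase_embedding(words, emb):
    """E(c) = sum of E(w) over the tokens w in candidate c."""
    return np.sum([emb[w] for w in words], axis=0)

candidate = ["medical", "intensive", "care", "unit"]
print(phrase_embedding(candidate, emb))
```

The resulting phrase vector lives in the same space as single-word embeddings, so it can be compared directly to the abbreviation's vector.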

3.4 Expansion Candidate Ranking

Even though embeddings are very helpful for computing the semantic similarity between an abbreviation and a candidate expansion, in some cases context-independent information is also useful for identifying the correct expansion. For example, CHF in clinical notes usually refers to “congestive heart failure”. Using embedding-based semantic similarity, we find two possible candidates, “congestive heart failure” (similarity = 0.595) and “chronic heart failure” (similarity = 0.621). These two candidates have close semantic similarity scores, but their popularity in the medical domain is quite different: the former has a rating score of 50 while the latter has a rating score of only 7. (All rating information in this paper is from http://www.allacronyms.com, where users are free to rate the expansions of an abbreviation; in general, a popular expansion has a high rating score.) The rating score, which can be seen as a kind of domain-specific knowledge, can therefore also contribute to candidate ranking.

We combine semantic similarity with rating information. Formally, given an abbreviation a's candidate list C, we rank each candidate c ∈ C by the score

score(c) = λ · r(c) + (1 − λ) · cos(E(a), E(c))

where r(c) denotes the (normalized) rating of c as an expansion of a, which reflects the candidate's popularity, and E(·) denotes the embedding of a word or phrase. The parameter λ adjusts the relative weights of popularity and similarity; in the experiments, λ is empirically tuned to 0.2 on a separate development set.
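The ranking step can be sketched as follows. The embeddings and the rating normalization (dividing by the sum of ratings in the candidate list) are our illustrative assumptions; only the λ = 0.2 weight and the CHF ratings come from the paper:

```python
import numpy as np

LAMBDA = 0.2  # weight tuned on a development set in the paper

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_candidates(abbr_vec, candidates):
    """candidates: list of (name, embedding, rating) triples.
    Returns candidate names, best-scoring first."""
    total_rating = sum(r for _, _, r in candidates)
    scored = [
        (LAMBDA * r / total_rating + (1 - LAMBDA) * cosine(abbr_vec, v), name)
        for name, v, r in candidates
    ]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy vectors for the CHF example; the ratings (50 vs. 7) are from the paper.
chf = np.array([0.9, 0.2, 0.1])
cands = [
    ("congestive heart failure", np.array([0.8, 0.3, 0.2]), 50),
    ("chronic heart failure",    np.array([0.7, 0.5, 0.1]), 7),
]
print(rank_candidates(chf, cands))
```

Here the popularity term breaks the near-tie in semantic similarity in favor of the far more common expansion.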

4 Experiment Results

4.1 Data and Evaluation Metrics

The clinical notes used in our experiments were provided by domain experts and consist of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated with Mount Sinai. Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using the domain-specific knowledge base (www.allacronyms.com/_medical). Besides the raw ICU data, the enriched corpus contains 42,506 Wikipedia articles, each corresponding to one candidate, 6 research papers, and 2 critical care medicine textbooks.

We use word2vec [Mikolov et al.2013] to train the word embeddings. The dimension of embeddings is empirically set to 100.

Since the goal of our task is to find the correct expansion for an abbreviation, we use accuracy as the evaluation metric. For ground truth, 100 physician logs were manually expanded and normalized by one of the authors (Dr. Mathews), a well-trained domain expert; we use these 100 logs as the test set to evaluate our approach.

4.2 Baseline Models

For our task, it is difficult to re-implement the supervised methods of the previous work mentioned above, since we do not have sufficient labeled training data; a direct comparison is also impossible because all previous work used data sets that are not publicly available. Instead, we compare our approach against the following baselines.

  • Rating: This baseline chooses the highest-rated candidate expansion in the domain-specific knowledge base.

  • Raw input embeddings: We train word embeddings only on the 1,160 raw ICU texts and choose the most semantically related candidate as the answer.

  • General embeddings: Unlike the Raw input embeddings baseline, this baseline computes semantic similarity using embeddings trained on a large biomedical data collection that includes PubMed, PMC, and a Wikipedia dump of biomedical articles [Pyysalo et al.2013].

4.3 Results

Table 1 shows the performance of abbreviation expansion. Our approach significantly outperforms the baseline methods and achieves 82.27% accuracy.

Approaches Accuracy
Rating 21.32%
Raw input embeddings 26.45%
General embeddings 28.06%
Our Approach 82.27%
Table 1: Overall performance

Figure 3 shows how our approach improves the performance of a rating-based approach. By using embeddings, we can learn that the meaning of “OD” used in our test cases should be “overdose” rather than “out-of-date” and this semantic information largely benefits the abbreviation expansion model.

  • ‘OD’- rating-based: [‘out-of-date’, ‘other diseases’, ‘on duty’, ‘once daily’, ‘optometry degree’, ‘organ donation’, ‘overdose’, ‘optic disc’ … etc.]

  • ‘OD’- our approach: [‘overdose’, ‘osteochondritis dissecans’, ‘optic disc’ … etc.]

Figure 3: Ranking lists of expansions of “OD” by the rating-based method and by our approach

Compared with our approach, embeddings trained only on the ICU texts do not significantly improve performance over the rating baseline. The reason is that the training corpus is so small that many candidate expansions never appear in it, which results in poor performance. Notably, general embeddings trained on large biomedical data are also ineffective for this task, because many abbreviations used in critical care medicine appear in the biomedical corpus with different senses.

  • Output of general Embeddings on abbreviation ‘OD’: [‘O.D.’, ‘optical density’, ‘OD450’, ‘O.D’, ‘OD570’, ‘absorbance’, ‘OD490’, ‘600nm’ … etc.]

Figure 4: The output of general embeddings trained on large biomedical texts

For example, “OD” in intensive care medicine texts refers to “overdose” while in the PubMed corpus it usually refers to “optical density”, as shown in Figure 4. Therefore, the embeddings trained from the PubMed corpus do not benefit the expansion of abbreviations in the ICU texts.

Moreover, we estimated human performance for this task, shown in Table 2. The estimates were made by one of the authors (Dr. Mathews), a board-certified pulmonologist and critical care medicine specialist, based on her experience, and assume that the participants cannot use any external resources. Our approach achieves performance close to that of domain experts, which makes it a promising way to tackle this challenge.

Groups Accuracy
General readers 40%
Nurses 40%
Mid-level provider (nurse practitioner or physician associate) 70%
General practicing physician 80%
Domain experts with additional training in Emergency Medicine or Critical Care Medicine 90%
Table 2: Estimated human performance for abbreviation expansion

4.4 Error Analysis

The distribution of errors is shown in Table 3. There are three main causes of incorrect expansions. First, some abbreviations do not exist in the knowledge base, so we cannot populate a candidate list for them. Second, in many cases the correct expansion is in the candidate list but is not ranked first, because its semantic similarity score is too low when the training data contains too few samples; such errors account for about 54% of the incorrect expansions in our test set. One possible solution is to add more effective data to the embedding training, i.e., to discover more task-oriented resources. Third, in a few cases we fail to identify abbreviations with complicated representations. For example, the patient notes contain the sentence “No n/v/f/c.”, whose correct expansion is “No nausea/vomiting/fever/chills.” Such abbreviations are by far the most difficult to expand in our task because they do not exist in any knowledge base and usually occur only once in the training data.

Type of error Percentage
Out of Vocabulary 27%
Lack of training samples 54%
Unidentified representation 19%
Table 3: Error distribution

5 Conclusions and Future Work

This paper proposes a simple but novel approach for the automatic expansion of abbreviations. It achieves very good performance without any manually labeled data. Experiments demonstrate that using task-oriented resources to train word embeddings is much more effective than using general or arbitrary corpora.

In the future, we plan to collectively expand semantically related abbreviations that co-occur in a sentence. In addition, we plan to integrate our work into a natural language processing system that processes clinical notes for knowledge discovery, which will largely benefit medical research.


Acknowledgments

This work is supported by RPI's Tetherless World Constellation, IARPA FUSE Numbers D11PC20154 and J71493, and DARPA DEFT No. FA8750-13-2-0041. Dr. Mathews' effort is supported by Award #1K12HL109005-01 from the National Heart, Lung, and Blood Institute (NHLBI). The content is solely the responsibility of the authors and does not necessarily represent the official views of NHLBI, the National Institutes of Health, IARPA, or DARPA.


References

  • [Gaudan et al.2005] Sylvain Gaudan, Harald Kirsch, and Dietrich Rebholz-Schuhmann. 2005. Resolving abbreviations to their senses in Medline. Bioinformatics, 21(18):3658–3664.
  • [Harris1954] Zellig S Harris. 1954. Distributional structure. Word.
  • [Liu et al.2004] Hongfang Liu, Virginia Teller, and Carol Friedman. 2004. A multi-aspect comparison study of supervised word sense disambiguation. Journal of the American Medical Informatics Association, 11(4):320–331.
  • [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • [Pyysalo et al.2013] Sampo Pyysalo, Filip Ginter, Hans Moen, Tapio Salakoski, and Sophia Ananiadou. 2013. Distributional semantics resources for biomedical text processing. Proceedings of Languages in Biology and Medicine.
  • [Socher et al.2013] Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934.
  • [Stevenson et al.2009] Mark Stevenson, Yikun Guo, Abdulaziz Al Amri, and Robert Gaizauskas. 2009. Disambiguation of biomedical abbreviations. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, pages 71–79. Association for Computational Linguistics.
  • [Tengstrand et al.2014] Lisa Tengstrand, Beáta Megyesi, Aron Henriksson, Martin Duneld, and Maria Kvist. 2014. Eacl-expansion of abbreviations in clinical text. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)@ EACL, pages 94–103.
  • [Ucgun et al.2006] Irfan Ucgun, Muzaffer Metintas, Hale Moral, Fusun Alatas, Huseyin Yildirim, and Sinan Erginel. 2006. Predictors of hospital outcome and intubation in copd patients admitted to the respiratory icu for acute hypercapnic respiratory failure. Respiratory medicine, 100(1):66–74.
  • [Yu et al.2006] Hong Yu, Won Kim, Vasileios Hatzivassiloglou, and John Wilbur. 2006. A large scale, corpus-based approach for automatically disambiguating biomedical abbreviations. ACM Transactions on Information Systems (TOIS), 24(3):380–404.