Snomed2Vec: Random Walk and Poincaré Embeddings of a Clinical Knowledge Base for Healthcare Analytics

07/19/2019, by Khushbu Agarwal et al. (University of Massachusetts Amherst, PNNL, Stanford University)

Representation learning methods that transform encoded data (e.g., diagnosis and drug codes) into continuous vector spaces (i.e., vector embeddings) are critical for the application of deep learning in healthcare. Initial work in this area explored the use of variants of the word2vec algorithm to learn embeddings for medical concepts from electronic health records or medical claims datasets. We propose learning embeddings for medical concepts by using graph-based representation learning methods on SNOMED-CT, a widely popular knowledge graph in the healthcare domain with numerous operational and research applications. This work presents an empirical analysis of several embedding methods, including an evaluation of their performance on multiple tasks of biomedical relevance (node classification, link prediction, and patient state prediction). Our results show that concept embeddings derived from the SNOMED-CT knowledge graph significantly outperform state-of-the-art embeddings, showing 5-6x improvement in "concept similarity" and 6-20% improvement in patient diagnosis.


1. Introduction

Health informatics applications in information retrieval, question answering, diagnostic support, and predictive analytics increasingly seek to leverage deep learning methods. Most deep learning workflows require efficient approaches to learn continuous vector representations from discrete inputs (e.g., diagnostic or drug codes).

1.1. Related work

Early work on representation learning of medical concepts focused on extending widely popular skip-gram based models to medical text corpora (Miñarro-Giménez et al., 2015) and clinical text (De Vine et al., 2014).

More recently, Choi et al. (Choi et al., 2016) proposed learning concept representations from datasets of longitudinal electronic health records (EHR) using temporal and co-occurrence information from patients' doctor visits. The resulting embeddings, referred to as Med2Vec, are available for nearly 27k ICD-9 codes. The CUI2Vec algorithm (Beam et al., 2018) learns embeddings from a combination of medical text, EHR datasets, and clinical notes. Concepts in each of these data sources are mapped to a single thesaurus from the Unified Medical Language System (UMLS), and a concept co-occurrence matrix constructed from the multiple data sources is used to learn the embeddings. Despite these recent advances, several factors limit the use and adoption of such embeddings. For instance, most EHR or claims datasets are of limited distribution and accessibility. Without open access to the original data, it is difficult to reproduce, or to understand, how the derived embeddings were shaped by the coverage of medical concepts in the datasets as well as by algorithmic decisions. The main contributions of this work are:

  1. We learn vector representations for medical concepts by applying graph-embedding methods to SNOMED-CT (Donnelly, 2006): node2vec (exploiting random walk-based connectivity), metapath2vec (emphasizing multi-relational properties), and Poincaré embeddings (learning taxonomic hierarchies in hyperbolic space). SNOMED-CT/UMLS is a publicly available open resource that provides coverage for over 300,000 clinical concepts and facilitates interoperability and reproducibility.

  2. Our experiments suggest that SNOMED-derived embeddings outperform Med2Vec and CUI2Vec on multiple machine-learning tasks that are critical for healthcare applications.

  3. Our code and pre-trained embeddings for over 300,000 medical concepts are available as an open-source resource for downstream applications and for reproducibility of the current results (available at https://gitlab.com/agarwal.khushbu/Snomed2Vec).

Figure 1. Schema of the subset of the SNOMED-CT knowledge graph extracted from UMLS. 'n' denotes the number of concepts of each type and |E| denotes the number of unique relationship types between concept pairs.

The paper is organized as follows. Section 2 provides a quick overview of SNOMED. Section 3 describes the graph-based embedding learning approaches and healthcare analytics tasks used in our empirical studies. Sections 4 and 5 describe the experimental setup and provide conclusions from the empirical evaluation of various embedding learning methodologies.

2. Background

2.1. SNOMED: overview and applications

SNOMED Clinical Terms (CT) is the clinical terminology most widely used by healthcare researchers and practitioners in the world, providing the codes, terms, synonyms, and definitions used for documentation and reporting within health systems (Donnelly, 2006). It is integrated into UMLS, which maps terms across sixty controlled vocabularies, and hence is a critical resource for data scientists seeking to fuse heterogeneous types of observational data from the same source (e.g., drugs and disorders) or to combine disparate data sources of the same type (e.g., patient data from two discrete EHR installations) into a common ontological framework. Figure 1 shows a simplified view of the SNOMED-CT concept model, demonstrating the clinical concepts of interest and the accompanying relationships. Figure 2 illustrates a node and some of its associated relations in the SNOMED knowledge graph.

Figure 2. Illustration of semantic types and relationships in the SNOMED knowledge graph (source: https://confluence.ihtsdotools.org).

3. Representation learning methods and evaluation tasks

3.1. Representation learning

Given a graph G = (V, E), representation learning algorithms map each node v ∈ V to a real-valued vector in a low-dimensional space ℝ^d, such that f : V → ℝ^d, where d ≪ |V|.

Embeddings in Euclidean space. Representation learning algorithms for networks can be broadly separated into two groups based on their reliance on matrix factorization versus random walks. Random walk-based methods, such as DeepWalk (Perozzi et al., 2014) and Node2vec (Grover & Leskovec, 2016), try to learn representations that roughly minimize a cross-entropy loss function of the form L = -Σ_{u,v ∈ V} log p(v | u), where p(v | u) is the probability of visiting a node v on a random walk of length T starting from node u. Algorithms similar to Node2vec are further extended by Metapath2vec (Dong et al., 2017) to incorporate multi-relational properties by constraining the random walks.
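The random-walk front end shared by these methods can be sketched as follows: generate fixed-length walks from every node, then extract the (center, context) pairs a skip-gram model would train on. The toy graph and parameter values below are illustrative, not taken from the paper (node2vec would additionally bias the transitions with its return/in-out parameters p and q).

```python
import random

def random_walks(adj, walks_per_node=10, walk_length=5, seed=0):
    """Generate uniform random walks over an adjacency-list graph."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break  # dead end: stop the walk early
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

def skipgram_pairs(walks, window=2):
    """Extract (center, context) pairs within a sliding window over each walk."""
    pairs = []
    for walk in walks:
        for i, u in enumerate(walk):
            for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                if i != j:
                    pairs.append((u, walk[j]))
    return pairs

# Toy graph standing in for a tiny fragment of SNOMED-X (illustrative names)
adj = {"diabetes": ["insulin", "hyperglycemia"],
       "insulin": ["diabetes"],
       "hyperglycemia": ["diabetes"]}
walks = random_walks(adj)
pairs = skipgram_pairs(walks)
```

The pairs would then be fed to a skip-gram model exactly as word co-occurrences are in word2vec.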

Embeddings in hyperbolic space. Many real-world datasets exhibit hierarchical structure. Recently, hyperbolic spaces have been advocated as alternatives to the standard Euclidean spaces in order to better represent this hierarchical structure (Nickel & Kiela, 2017; Dhingra et al., 2018). The Poincaré ball model is a popular way to model a hyperbolic space within the more familiar Euclidean space. The distance between two points u and v within the Poincaré ball is given as

d(u, v) = arcosh(1 + 2 ||u - v||² / ((1 - ||u||²)(1 - ||v||²)))    (1)

Given N(u) as the set of negative samples for an entity u, the loss function maximizing the distance between unrelated samples is

L = -Σ_{(u,v)} log( exp(-d(u, v)) / Σ_{v' ∈ N(u)} exp(-d(u, v')) )    (2)
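Equation (1) can be sketched directly; the helper below is a hypothetical illustration, not the authors' code.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincaré ball:
    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2))).
    `eps` guards against division by zero for points on the boundary."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps)))
```

Note how distances blow up near the boundary of the ball: the point (0.9, 0) is Euclidean distance 0.9 from the origin but hyperbolic distance close to 3, which is what lets the model place deep levels of a hierarchy near the boundary.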
Figure 3. Visualization of the SNOMED-X graph embeddings (d = 500) learned by Node2vec (left), Metapath2vec (middle), and Poincaré (right). The shapes of the visualizations reflect each method's distinct objective and embedding characteristics (Node2vec: neighbourhood correlations; Metapath2vec: distinct node types; Poincaré: hierarchical relations).

3.2. Evaluation tasks

We use a suite of knowledge graph-based healthcare application tasks to evaluate the learned concept representations.

Graph Quality Evaluation Tasks

  1. Multi-label classification: We test the accuracy of learned embeddings in capturing a node's type. Each node is assigned a semantic type from a finite set of labels, and a classifier model is trained on a partial set of node embeddings to predict the labels of the remaining nodes in the test set.

  2. Link prediction: We test the accuracy of learned embeddings in identifying the presence or absence of an edge in the knowledge graph. A classifier model is trained on the cosine similarity of the embeddings to predict the existence of a relation between concept pairs.
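The link prediction setup above can be sketched as follows. The paper trains a classifier on the cosine similarities; in this hypothetical sketch a fixed threshold stands in for the trained classifier, and the embeddings are made up.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_links(emb, candidate_pairs, threshold=0.5):
    """Predict edge presence for each candidate pair from embedding
    similarity; a trained classifier would replace the raw threshold."""
    return [cosine(emb[a], emb[b]) >= threshold for a, b in candidate_pairs]

# Illustrative embeddings: "a" and "b" nearly parallel, "c" orthogonal to "a"
emb = {"a": np.array([1.0, 0.0]),
       "b": np.array([0.9, 0.1]),
       "c": np.array([0.0, 1.0])}
preds = predict_links(emb, [("a", "b"), ("a", "c")])
```

In the actual evaluation, positive pairs come from graph edges and negative pairs are sampled from non-edges, as described in Section 5.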

Domain Tasks

  1. Concept similarity: We follow the benchmark generation strategy and cosine similarity measure outlined by (Beam et al., 2018; Choi et al., 2016), based on statistical power, to evaluate "relatedness" between similar concepts. For a given relationship (x, y), a null distribution of cosine similarities is computed using bootstrapped samples (x', y'), where x' and y' belong to the same category as x and y, respectively. Then, the statistical power of the bootstrap distribution to reject the null hypothesis (i.e., no relationship) is reported.

    Figure 4. Architecture of the deep learning model to predict diagnostic codes from past EHR information
  2. Patient state prediction: Given the sets of diagnosis codes for n visits, x_1, x_2, …, x_n, we predict the set of diagnosis codes for the (n+1)-th visit, x_{n+1}. Following (Choi et al., 2017), we train a long short-term memory (LSTM) model using a cross-entropy loss function to capture patient state transitions over several visits (Fig. 4). Given h_t as the vector representation of the patient state at timestep t and W representing the weights of the single LSTM layer, equations 3 and 4 describe our prediction task:

    h_t = LSTM(x_t, h_{t-1}; W)    (3)
    x̂_{n+1} = σ(W_o h_n)    (4)

    We characterize patient state prediction as three different objective functions: 1) predict all diagnosis codes for the next visit, 2) predict the most frequent diagnoses, and 3) predict rare diagnoses (occurring in at least 100 visits).
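The statistical power computation in the concept similarity task can be sketched as follows. This is a simplified formulation, not necessarily the authors' exact bootstrap procedure: the power is taken as the fraction of known-related pairs whose similarity exceeds the (1 - α) quantile of the bootstrapped null distribution.

```python
import numpy as np

def bootstrap_power(related_sims, null_sims, alpha=0.05):
    """Fraction of known-related concept pairs whose cosine similarity
    exceeds the (1 - alpha) quantile of the null distribution built from
    bootstrapped random same-category pairs."""
    cutoff = np.quantile(null_sims, 1.0 - alpha)
    return float(np.mean(np.asarray(related_sims) > cutoff))
```

A power near 1 means the embedding separates genuinely related pairs from category-matched random pairs; a power near α means it does no better than chance.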

4. Experiments

4.1. Dataset description

We start from the UMLS semantic network (McCray, 1989) and select the subset of clinical concepts relevant to patient-level modeling, limiting it to the following semantic groups: Anatomy (ANAT), Chemicals & Drugs (CHEM), Disorders (DISO), and Procedures (PROC).222The list of selected concepts is available from https://anonymous.4open.science/r/0651fc32-8eff-4454-b537-5c00bb12ea19/. We used the MRSTY.RRF, MRCONSO.RRF, and MRREL.RRF tables from the UMLS for this purpose. MRSTY.RRF was used to determine the semantic type of the extracted concepts, MRCONSO.RRF maps concept identifiers to their names and source vocabularies, and MRREL.RRF provides the relations between the selected concepts. We subsequently refer to the extracted graph as SNOMED-X. Figure 1 shows the number of concepts in each concept group and the number of unique relation types between them.
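Extracting such a subgraph from the RRF tables can be sketched as follows. The field positions follow the UMLS RRF layout (pipe-delimited; MRSTY: CUI|TUI|STN|STY|…, MRREL: CUI1|AUI1|STYPE1|REL|CUI2|…|SAB|…); the TUI-to-group mapping passed in and the filtering choices are illustrative, not the authors' exact pipeline.

```python
import csv

# Semantic groups retained for SNOMED-X
KEEP_GROUPS = {"ANAT", "CHEM", "DISO", "PROC"}

def load_semantic_types(mrsty_path, tui_to_group):
    """Read MRSTY.RRF and keep concepts whose semantic-type TUI maps to a
    retained semantic group. Returns {CUI: group}."""
    cui_group = {}
    with open(mrsty_path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="|"):
            group = tui_to_group.get(row[1])  # row[1] is the TUI
            if group in KEEP_GROUPS:
                cui_group[row[0]] = group      # row[0] is the CUI
    return cui_group

def load_relations(mrrel_path, keep_cuis):
    """Read MRREL.RRF and keep edges between retained concepts, restricted
    to the SNOMED-CT source vocabulary via the SAB field (index 10)."""
    edges = []
    with open(mrrel_path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="|"):
            cui1, rel, cui2, sab = row[0], row[3], row[4], row[10]
            if sab == "SNOMEDCT_US" and cui1 in keep_cuis and cui2 in keep_cuis:
                edges.append((cui1, rel, cui2))
    return edges
```

The resulting edge list is the input to all three embedding methods.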

Task  Node2vec  Metapath2vec  Poincaré  CUI2vec  Med2vec
Node Classification 0.817 0.3287 0.8579 0.5685 0.0409
Link Prediction 0.986 0.3988 0.7135 0.7222 0.8665
Concept Similarity (D1) 0.79 0.3 0.7 0.16 NA
Concept Similarity (D3) 0.90 0.46 0.31 0.15 NA
Concept Similarity (D5) 0.81 -0.32 -0.06 -0.01 NA
Patient State Prediction (All Diagnosis) 0.3938 0.3359 0.4197 0.3948 0.3881
Patient State Prediction (Frequent 20) 0.8465 0.9749 0.85 0.8035 0.7980
Patient State Prediction (Rare 20) 0.018 0.001 0.019 0.019 0.011
Table 1. Performance evaluation of embeddings on each task. The best-performing method is highlighted for each task (using the best-performing embedding size for each method). Evaluation results show that knowledge graph-based embeddings outperform the state of the art (Med2vec and CUI2vec) on all tasks.

4.2. Experimental setup for embedding learning

We generated embeddings using three methods (Fig. 3): Node2vec, Metapath2vec, and Poincaré, for embedding dimensions d ∈ {20, 50, 100, 200, 500}.

Node2vec For Node2vec (Grover & Leskovec, 2016), we varied the number of walks per node and the random walk length over a range of values; other internal parameters, such as batch size and epochs, were set to their default values. The best-performing configuration was used for evaluation.

Metapath2vec Metapath2vec embeddings require specifying and sampling instances of desired path patterns, or "meta-paths", in the graph (Dong et al., 2017). We used patterns that connect nodes in the disease category to drugs and set the random walk parameters to W = 20 and L = 5.
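A type-constrained walk of this kind can be sketched as follows; the toy heterogeneous graph, node types, and meta-path are illustrative stand-ins for the DISO/CHEM patterns described above.

```python
import random

def metapath_walk(adj, node_type, start, metapath, rng):
    """One random walk whose i-th node must carry the i-th type in
    `metapath`, e.g. ("DISO", "CHEM", "DISO"), as in metapath2vec."""
    walk = [start]
    for wanted in metapath[1:]:
        candidates = [n for n in adj[walk[-1]] if node_type[n] == wanted]
        if not candidates:
            break  # dead end: no neighbour of the required type
        walk.append(rng.choice(candidates))
    return walk

# Toy heterogeneous graph: disorders connected to a drug (illustrative names)
adj = {"flu": ["oseltamivir"],
       "oseltamivir": ["flu"],
       "rash": ["oseltamivir"]}
node_type = {"flu": "DISO", "rash": "DISO", "oseltamivir": "CHEM"}
walk = metapath_walk(adj, node_type, "flu", ("DISO", "CHEM", "DISO"),
                     random.Random(0))
```

The collected walks are then fed to a skip-gram model, exactly as the unconstrained walks are in node2vec.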

Poincaré We computed Poincaré embeddings using the implementation of (Nickel & Kiela, 2017) included in the gensim framework (Řehůřek & Sojka, 2010). The embeddings were trained with L2 regularization for 50 epochs, using a learning rate of 0.1 and a burn-in parameter of 10. A non-zero burn-in value initializes the vectors from a uniform distribution and trains with a much reduced learning rate for a few epochs; this yields a better initial angular layout of the vectors and makes the final vectors more robust to poor random initialization.

CUI2Vec CUI2vec embeddings (Beam et al., 2018) were made available by the corresponding authors for the purpose of this evaluation. We used the 500-dimensional CUI2vec embeddings, which cover nearly 108K concepts in the UMLS thesaurus, and used the UMLS mapping from CUIs to SNOMED concept identifiers (SCUIs) for evaluation purposes.

Med2vec The Med2vec (Choi et al., 2016) embeddings are 200-dimensional embeddings available for 27,523 ICD-9 concepts. We use the ICD-to-SNOMED-CT concept map made available by UMLS to map ICD-9 codes to SNOMED concept identifiers where needed for evaluation.

Figure 5. Left to right: (a) bootstrap distribution of cosine similarity for each method on D1, where the density is calculated using the density function from the R programming language and the bandwidth is the standard deviation of the kernel; a narrower curve means more statistical power between the concept pairs. (b) Impact of embedding dimension on concept similarity, and (c) impact of embedding dimension on patient state prediction.

5. Results

Table 1 shows the results for each embedding learning method along with CUI2vec and Med2vec on the evaluation tasks.

Multi-label classification We use the semantic types provided in the UMLS graph (MRSTY.RRF datafile) for each node as its class label; SNOMED-X has a large number of unique labels, making this a particularly challenging task. We calculate accuracy as the fraction of nodes for which the predicted label matches the ground truth. We observe that Poincaré yields the best accuracy (0.8579), outperforming CUI2vec by roughly 50% and Med2vec by more than an order of magnitude. As Poincaré primarily samples a node's type hierarchy during learning, it is highly suited for healthcare applications that need embeddings cognizant of node type (Choi et al., 2017).

Link prediction We use a simple linear SVM classifier model to test the link prediction ability of the learned embeddings. For training, we sampled about 2% of the 4.7 million relations available in SNOMED-X to restrict the training time to a few hours. We further constructed a negative sample (i.e., links that are not present in the graph) of the same size, similar to (Zhang & Chen, 2018). Accuracy is calculated as the fraction of correctly classified links. We observe that Node2vec captures the most correlation between distant concepts, outperforming Med2vec by 15% and CUI2vec by 36% on the link prediction task.

Concept similarity We create five datasets from SNOMED-X to capture the distinct bootstrap distributions between concept groups. D1 and D2 represent hierarchical ('isa') relations for the DISO and CHEM concept groups, respectively, while D3 and D4 use all non-hierarchical relations for the DISO and CHEM groups, representing relatedness between concepts in the same semantic group. D5 contains all relations between DISO and CHEM, representing relatedness across semantic groups. Table 1 shows the statistical power for each method. Med2vec is excluded, as it did not have enough concept-pair samples for any dataset. Poincaré and Node2vec perform well for D1 and D2 (Figure 5(a)). However, when the hierarchical relationships are excluded from the benchmarking (D3, D4, D5), the Node2vec method provides larger statistical power, showing a 5-6x improvement over CUI2vec.

Patient state prediction The patient model is trained on the openly available MIMIC-III (Johnson AEW & RG, 2016) EHR dataset, which contains ICU visit records for more than 40,000 patients over 11 years. Each patient visit is represented as a collection of ICD-9 diagnosis codes, which are mapped to 284 coarser-grained categories using the Clinical Classifications Software (CCS) ontology (Donnelly, 2006). Table 1 shows the results for all three prediction tasks. We observe that Poincaré-based embeddings outperform the other methods for prediction of all diagnoses, showing a 6% improvement over CUI2vec (the next best performing method). CUI2vec performs as well as Poincaré on the prediction of rare diagnoses, while Metapath2vec performs best in predicting the most frequent diagnoses, improving 20% over CUI2vec and Med2vec. Our results are in line with recent work by Choi et al. (Choi et al., 2017), which showed that incorporating clinical concept hierarchies improves patient models significantly. Our Poincaré embeddings capture these hierarchical relations and hence are best suited for downstream clinical tasks.
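The input construction described above, mapping each visit's ICD-9 codes to a multi-hot vector over CCS categories, can be sketched as follows. The ICD-9-to-CCS mapping shown is an illustrative stand-in; the real mapping comes from the CCS tool.

```python
import numpy as np

def visits_to_multihot(visits, icd_to_ccs, n_categories=284):
    """Encode a patient's visit history as a (num_visits, n_categories)
    multi-hot matrix over CCS categories, one row per visit; each row
    serves as the input for one LSTM timestep."""
    X = np.zeros((len(visits), n_categories), dtype=np.float32)
    for t, codes in enumerate(visits):
        for code in codes:
            ccs = icd_to_ccs.get(code)
            if ccs is not None:  # codes without a CCS mapping are dropped
                X[t, ccs] = 1.0
    return X

# Illustrative mapping and a two-visit history (made-up category indices)
icd_to_ccs = {"250.00": 49, "401.9": 98}
X = visits_to_multihot([["250.00"], ["250.00", "401.9"]], icd_to_ccs)
```

Replacing the multi-hot rows with sums or concatenations of pre-trained concept embeddings is how the different embedding methods are plugged into the same LSTM model for comparison.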

Impact of embedding dimensions: Figures 5(b) and 5(c) show the accuracy of the embeddings at different dimensions for concept similarity and the patient model, respectively. Metapath2vec is largely insensitive to the embedding dimension, while Poincaré and Node2vec perform best at d = 100 for both tasks. The results for the other evaluation tasks followed similar trends but have been omitted for brevity.

6. Conclusions

We propose and develop a workflow for learning medical concept representations from a healthcare knowledge graph that is widely used for building clinical decision support tools in medical practice and in research applications. We demonstrate that knowledge graph-driven embeddings outperform the state of the art in several healthcare applications. Our analysis suggests that Poincaré-based embeddings learned from hierarchical relations are highly effective for patient state prediction models and for capturing node type (classification), while Node2vec embeddings are best suited for capturing "relatedness", or concept similarity.

References

  • Beam et al. (2018) A L Beam, B Kompa, I Fried, N P Palmer, X Shi, T Cai, and I S Kohane. Clinical concept embeddings learned from massive sources of medical data. arXiv preprint arXiv:1804.01486, 2018.
  • Choi et al. (2016) Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine Coffey, Michael Thompson, James Bost, Javier Tejedor-Sojo, and Jimeng Sun. Multi-layer representation learning for medical concepts. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1495–1504. ACM, 2016.
  • Choi et al. (2017) Edward Choi, Mohammad Taha Bahadori, Le Song, Walter F Stewart, and Jimeng Sun. GRAM: graph-based attention model for healthcare representation learning. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 787–795. ACM, 2017.
  • De Vine et al. (2014) Lance De Vine, Guido Zuccon, Bevan Koopman, Laurianne Sitbon, and Peter Bruza. Medical semantic similarity with a neural language model. In Proceedings of the 23rd ACM international conference on conference on information and knowledge management, pp. 1819–1822. ACM, 2014.
  • Dhingra et al. (2018) B Dhingra, C J Shallue, M Norouzi, A M Dai, and G E Dahl. Embedding text in hyperbolic spaces. arXiv preprint arXiv:1806.04313, 2018.
  • Dong et al. (2017) Yuxiao Dong, Nitesh V Chawla, and Ananthram Swami. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 135–144. ACM, 2017.
  • Donnelly (2006) K Donnelly. SNOMED-CT: The advanced terminology and coding system for eHealth. Studies in health technology and informatics, 121:279, 2006.
  • Grover & Leskovec (2016) Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855–864. ACM, 2016.
  • Johnson AEW & RG (2016) Johnson AEW, Pollard TJ, Shen L, Lehman L, Feng M, Ghassemi M, Moody B, Szolovits P, Celi LA, and Mark RG. MIMIC-III, a freely accessible critical care database. Scientific Data, 2016. DOI: 10.1038/sdata.2016.35. Available at: http://www.nature.com/articles/sdata201635.
  • McCray (1989) Alexa T McCray. The UMLS semantic network. In Proceedings. Symposium on Computer Applications in Medical Care, pp. 503–507. American Medical Informatics Association, 1989.
  • Miñarro-Giménez et al. (2015) J A Miñarro-Giménez, O Marín-Alonso, and M Samwald. Applying deep learning techniques on medical corpora from the World Wide Web: a prototypical system and evaluation. CoRR, abs/1502.03682, 2015. URL http://arxiv.org/abs/1502.03682.
  • Nickel & Kiela (2017) Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In Advances in neural information processing systems, pp. 6338–6347, 2017.
  • Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710. ACM, 2014.
  • Řehůřek & Sojka (2010) R Řehůřek and P Sojka. Software Framework for Topic Modelling with Large Corpora. In LREC Workshop on New Challenges for NLP Frameworks, 2010.
  • Zhang & Chen (2018) Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pp. 5165–5175, 2018.