CODER: Knowledge infused cross-lingual medical term embedding for term normalization

by Zheng Yuan et al.

We propose CODER (mediCal knOwledge embeDded tErm Representation), a novel medical term embedding method. CODER is designed for medical term normalization: it yields close vector representations for terms that denote the same or similar concepts, with multi-language support. CODER is trained on top of BERT (Devlin et al., 2018), with the innovation that token vector aggregation is trained using relations from the UMLS Metathesaurus (Bodenreider, 2004), a comprehensive multi-lingual medical knowledge graph. Training with relations injects medical knowledge into the term embeddings, aiming at better normalization performance and potentially better machine learning features. We evaluated CODER on term normalization, semantic similarity, and relation classification benchmarks, where it outperformed various state-of-the-art biomedical word embeddings, concept embeddings, and contextual embeddings.
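To give a flavor of what "training with relations" means, here is a minimal, self-contained sketch of relation-based embedding training in the TransE style (head + relation ≈ tail). The toy terms, the synonymy relation, and the update rule are illustrative assumptions for exposition, not the paper's actual architecture, which trains a BERT encoder rather than free embedding vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy vocabulary of medical terms (illustrative, not from the paper).
terms = ["myocardial infarction", "heart attack", "aspirin"]
emb = {t: rng.normal(size=dim) for t in terms}

# A zero relation vector: synonymy should map a term onto itself,
# so synonym embeddings are pulled together directly.
rel_synonym = np.zeros(dim)

def distance(h, r, t):
    """TransE-style score: ||h + r - t||; small for true triples."""
    return np.linalg.norm(h + r - t)

# A few gradient steps on one (head, synonym, tail) triple.
lr = 0.1
head, tail = "myocardial infarction", "heart attack"
initial = distance(emb[head], rel_synonym, emb[tail])
for _ in range(50):
    diff = emb[head] + rel_synonym - emb[tail]
    emb[head] -= lr * diff   # pull head toward tail
    emb[tail] += lr * diff   # pull tail toward head
final = distance(emb[head], rel_synonym, emb[tail])
```

After training, the synonym pair sits close together in embedding space while the unrelated term ("aspirin") stays apart, which is the property term normalization exploits when matching mentions to concepts by nearest-neighbor search.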

Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations

Term clustering is important in biomedical knowledge graph construction....

Biomedical term normalization of EHRs with UMLS

This paper presents a novel prototype for biomedical term normalization ...

Understanding Stability of Medical Concept Embeddings: Analysis and Prediction

In biomedical area, medical concepts linked to external knowledge bases ...

Medical Concept Normalization in User Generated Texts by Learning Target Concept Embeddings

Medical concept normalization helps in discovering standard concepts in ...

A multi-perspective combined recall and rank framework for Chinese procedure terminology normalization

Medical terminology normalization aims to map the clinical mention to te...