Incorporating Dictionaries into Deep Neural Networks for the Chinese Clinical Named Entity Recognition

Clinical Named Entity Recognition (CNER) aims to identify and classify clinical terms such as diseases, symptoms, treatments, exams, and body parts in electronic health records, which is a fundamental and crucial task for clinical and translational research. In recent years, deep neural networks have achieved significant success in named entity recognition and many other Natural Language Processing (NLP) tasks. Most of these algorithms are trained end to end and can automatically learn features from large-scale labeled datasets. However, these data-driven methods typically lack the capability of processing rare or unseen entities. Previous statistical methods and feature engineering practice have demonstrated that human knowledge can provide valuable information for handling rare and unseen cases. In this paper, we address the problem by incorporating dictionaries into deep neural networks for the Chinese CNER task. Two different architectures that extend the Bi-directional Long Short-Term Memory (Bi-LSTM) neural network and five different feature representation schemes are proposed to handle the task. Computational results on the CCKS-2017 Task 2 benchmark dataset show that the proposed method achieves highly competitive performance compared with state-of-the-art deep learning methods.

1 Introduction

Clinical Named Entity Recognition (CNER) is a critical task for extracting patient information from Electronic Health Records (EHRs) to support clinical and translational research. The main aim of CNER is to identify and classify clinical terms in EHRs, such as diseases, symptoms, treatments, exams, and body parts. Much work has focused on extracting named entities from clinical texts, largely because biomedical systems that rely on structured data cannot directly access the healthcare information locked in free text until it has been processed by CNER. However, building a CNER system is not easy because of the richness of EHRs, and CNER in Chinese texts is more difficult than in Romance languages due to the lack of word boundaries in Chinese and the complexity of Chinese composition forms [1].

Recently, along with the development of deep learning methods, some neural network models [2, 3] have also been successfully applied to this task. Despite the great success achieved by deep learning methods, some issues have still not been well resolved. One significant drawback is that such methods rarely take the integration of human knowledge into account. Deep neural networks usually adopt an end-to-end approach and try to learn features directly from large-scale labeled data. However, there is also a large number of entities that rarely or never occur in the training set. Several reasons lie behind this, including the use of non-standard abbreviations or acronyms and multiple variations of the same entities. Thus, data-driven deep learning methods usually cannot handle such cases well. In contrast, dictionaries contain both commonly used entities and rare entities. If we can incorporate a dictionary into a deep neural network, rare and unseen clinical named entities can be processed better.

In this paper, we propose a novel method for Chinese CNER. We extend the Bi-directional Long Short-Term Memory and Conditional Random Field (Bi-LSTM-CRF) model [4] to treat the CNER task as a character-level sequence labeling problem. To integrate dictionaries, we design five different schemes to construct feature vectors for each Chinese character based on dictionaries and contexts. In addition, two different architectures are introduced to integrate the feature vectors with character embeddings to perform the task. Finally, our proposed approach is extensively evaluated on the CCKS-2017 Task 2 benchmark dataset (CCKS: China Conference on Knowledge Graph and Semantic Computing, 2017; website: http://www.ccks2017.com/).

The main contributions of this paper can be summarized as follows:

  • To the best of our knowledge, this is the first attempt to incorporate dictionaries into deep neural networks for CNER tasks. We design two architectures and five feature representation schemes to integrate information extracted from dictionaries into deep neural networks.

  • We assess the performance of the proposed approaches on the CCKS-2017 Task 2 benchmark dataset. The computational results indicate that our proposed approaches perform remarkably well compared to state-of-the-art methods.

The rest of this paper is organized as follows. In the next section, we briefly review related work on clinical named entity recognition. Then we introduce the basic Bi-LSTM-CRF model in Section 3. In Section 4, we present our proposed approaches. Section 5 is dedicated to experimental studies. Finally, conclusions are provided in Section 6.

2 Related work

Due to its practical significance, Clinical Named Entity Recognition (CNER) has attracted considerable research effort, and a great number of approaches have been proposed in the literature. Generally, the existing approaches fall into four categories: rule-based approaches, dictionary-based approaches, statistical machine learning approaches, and, most recently, deep learning approaches, which have received growing attention in the CNER community.

Rule-based approaches rely on heuristics and handcrafted rules to identify entities. They were the dominant approaches in early CNER systems [5, 6] and are still used in some recent work [7, 8]. However, it is difficult to enumerate rules that model the structure of all clinical named entities, and this kind of handcrafted approach always incurs a relatively high engineering cost.

Dictionary-based approaches rely on existing clinical vocabularies to identify entities [9, 10, 11]. They have been widely used because of their simplicity and their performance. A dictionary-based CNER system can extract all the matched entities defined in a dictionary from a given clinical text. However, it cannot handle entities that are not included in the dictionary, which usually leads to low recall.

Statistical machine learning approaches consider CNER as a sequence labeling problem in which the goal is to find the best label sequence for a given input sentence [12, 13]. Typical methods are Hidden Markov Models (HMMs) [14, 11], Maximum Entropy Markov Models (MEMMs) [15, 16], Conditional Random Fields (CRFs) [17, 18, 19], and Support Vector Machines (SVMs) [20, 21]. However, these statistical methods rely on pre-defined features, which makes their development costly. Moreover, feature engineering, i.e., finding the best set of features to discern entities of a specific type from others, is more of an art than a science and incurs extensive trial-and-error experiments.

Recently, deep learning approaches [22], especially methods based on a Bidirectional Recurrent Neural Network with a CRF output layer (Bi-RNN-CRF) [4], have achieved state-of-the-art performance in CNER tasks and outperform traditional statistical models [2, 3, 23]. RNNs with gated recurrent cells, such as Long Short-Term Memory (LSTM) [24] and Gated Recurrent Units (GRU) [25], are capable of capturing long-range dependencies and retrieving rich global information. The sequential CRF on top of the recurrent layers ensures that the optimal sequence of tags over the entire sentence is obtained.

3 Bi-LSTM-CRF model

The Chinese clinical named entity recognition task is usually regarded as a sequence labeling task. Due to the ambiguity in the boundaries of Chinese words, following our previous work [26], we label the sequence at the character level to avoid introducing noise caused by segmentation errors. Thus, given a clinical sentence $x = (c_1, c_2, \ldots, c_n)$, our goal is to label each character $c_i$ in the sentence with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. An example of the tag sequence for “腹平坦,未见腹壁静脉曲张。” (The abdomen is flat and no varicose veins can be seen on the abdominal wall) can be found in Table 1.

Character sequence: 腹 平 坦 , 未 见 腹 壁 静 脉 曲 张 。
Tag sequence:       S-b O O O O O B-b E-b B-s I-s I-s E-s O
PIET features:      b None None None None None b b s s s s None
PDET features:      S-b None None None None None B-b E-b B-s I-s I-s E-s None
Entity types:       body (腹), body (腹壁), symptom (静脉曲张)
  • The B-tag indicates the beginning of an entity. The I-tag indicates the inside of an entity. The E-tag indicates the end of an entity. The O-tag indicates the character is outside an entity. The S-tag indicates the character is merely a single-character entity. As for entity types, the b-tag indicates the entity is a body part, and the s-tag indicates the entity is a symptom.

Table 1: An illustrative example of the tag sequence and features.
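To make the tagging scheme concrete, the following is a minimal Python sketch (the helper and its span format are illustrative, not part of the original system) that converts character-level entity span annotations into the BIEOS tags shown in Table 1:

```python
def to_bieos(sentence, entities):
    """Convert entity span annotations into character-level BIEOS tags.

    `entities` is a list of (start, end, type) tuples with `end` exclusive,
    e.g. type 'b' for body part and 's' for symptom (hypothetical format).
    """
    tags = ["O"] * len(sentence)
    for start, end, etype in entities:
        if end - start == 1:                      # single-character entity
            tags[start] = f"S-{etype}"
        else:
            tags[start] = f"B-{etype}"            # beginning of the entity
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{etype}"            # inside of the entity
            tags[end - 1] = f"E-{etype}"          # end of the entity
    return tags


sentence = "腹平坦,未见腹壁静脉曲张。"
entities = [(0, 1, "b"), (6, 8, "b"), (8, 12, "s")]
print(list(zip(sentence, to_bieos(sentence, entities))))
# 腹/S-b 平/O 坦/O ,/O 未/O 见/O 腹/B-b 壁/E-b 静/B-s 脉/I-s 曲/I-s 张/E-s 。/O
```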

In this section, we give a brief description of the general Bi-LSTM-CRF architecture for Chinese CNER. The Bi-LSTM-CRF model was originally proposed by Huang et al. [4]; its main architecture is illustrated in Fig. 1. Unlike Huang et al. [4], we employ character embeddings rather than word embeddings to deal with the ambiguity in the boundaries of Chinese words.

Figure 1: Main architecture of the Bi-LSTM-CRF model.

3.1 Embedding layer

Given a clinical sentence $x = (c_1, c_2, \ldots, c_n)$, which is a sequence of characters, the first step is to map discrete language symbols to distributed embedding vectors. Formally, we look up the embedding vector $e_i$ from an embedding matrix $E$ for each character $c_i$, where $c_i$ is the $i$-th character in $x$, and the dimension $d_e$ of $e_i$ is a hyper-parameter indicating the size of the character embedding.

3.2 Bi-LSTM layer

The Long Short-Term Memory (LSTM) network [24] is a variant of the Recurrent Neural Network (RNN), which incorporates a gated memory-cell to capture long-range dependencies within the data and is able to avoid gradient vanishing/exploding problems caused by standard RNNs.

For each position $t$, the LSTM computes the hidden state $h_t$ from the input $x_t$ and the previous state $h_{t-1}$ as:

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$   (1)
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$   (2)
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$   (3)
$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$   (4)
$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$   (5)
$h_t = o_t \odot \tanh(c_t)$   (6)

where $h_t$, $i_t$, $f_t$, $o_t$ are the $d_h$-dimensional hidden state (also called the output vector), input gate, forget gate, and output gate, respectively; $W_i$, $W_f$, $W_o$, $W_c$, $U_i$, $U_f$, $U_o$, $U_c$ and $b_i$, $b_f$, $b_o$, $b_c$ are the parameters of the LSTM; $\sigma$ is the sigmoid function, and $\odot$ denotes element-wise multiplication.

However, the hidden state $h_t$ of the LSTM only takes information from the past, without considering future information. One solution is to utilize a Bidirectional LSTM (Bi-LSTM) [27], which incorporates information from both past and future. Formally, for any given sequence, the network computes both a left representation $\overrightarrow{h_t}$ and a right representation $\overleftarrow{h_t}$ of the sequence context at every input $x_t$. The final representation is created by concatenating them as:

$h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}$   (7)

The Bi-LSTM along with the embedding layer is the main machinery responsible for learning a good feature representation of the data.

3.3 CRF layer

For the character-based Chinese CNER task, it is beneficial to consider the dependencies of adjacent tags. For example, a B (begin) tag should be followed by an I (inside) tag or an E (end) tag, and an I tag cannot be followed by a B tag or an S (single) tag. Therefore, instead of making tagging decisions using each $h_i$ independently, we employ a Conditional Random Field (CRF) [28] to model the tag sequence jointly.

Generally, the CRF layer is represented by lines which connect consecutive output layers, and has a state transition matrix as parameters. With such a layer, we can efficiently use past and future tags to predict the current tag, similar to the use of past and future input features via a Bi-LSTM network. We consider the matrix of scores $P$ output by the Bi-LSTM network: the element $P_{i,j}$ is the score output by the network with parameters $\theta$, for the sentence $x$ and the $j$-th tag at the $i$-th character. We introduce a transition score $A_{j,k}$ to model the transition from the $j$-th state to the $k$-th state between a pair of consecutive time steps; note that this transition matrix is position independent. We denote the parameters of the whole network as $\tilde{\theta} = \theta \cup \{A_{j,k}\}$. The score of the sentence $x$ along a path of tags $y = (y_1, y_2, \ldots, y_n)$ is then given by the sum of transition scores and Bi-LSTM network scores:

$s(x, y; \tilde{\theta}) = \sum_{i=1}^{n} P_{i, y_i} + \sum_{i=2}^{n} A_{y_{i-1}, y_i}$   (8)

The conditional probability $p(y \mid x)$ is calculated with a softmax function over all possible tag paths:

$p(y \mid x) = \dfrac{\exp\big(s(x, y; \tilde{\theta})\big)}{\sum_{\tilde{y} \in Y_x} \exp\big(s(x, \tilde{y}; \tilde{\theta})\big)}$   (9)

where $y$ is the true tag sequence and $Y_x$ is the set of all possible output tag sequences.

We use maximum conditional likelihood estimation to train the model, i.e., we maximize the log-likelihood of the true tag sequences:

$L(\tilde{\theta}) = \sum_{(x, y)} \log p(y \mid x; \tilde{\theta})$   (10)

The Viterbi algorithm [29], a dynamic programming procedure, can be used to efficiently compute the optimal tag sequence $y^{*} = \arg\max_{\tilde{y} \in Y_x} s(x, \tilde{y}; \tilde{\theta})$ for inference.
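To illustrate how Eq. (8) and Viterbi decoding operate on the Bi-LSTM outputs, here is a minimal NumPy sketch; it assumes an emission score matrix P of shape (sentence length × number of tags) and a transition matrix A, and, like Eq. (8) above, omits dedicated start/end transitions:

```python
import numpy as np

def sequence_score(P, A, tags):
    """Score of one tag path: emission scores plus transition scores (Eq. 8)."""
    score = P[0, tags[0]]
    for i in range(1, len(tags)):
        score += A[tags[i - 1], tags[i]] + P[i, tags[i]]
    return score

def viterbi_decode(P, A):
    """Return the highest-scoring tag sequence under emissions P and transitions A."""
    n, k = P.shape
    dp = np.zeros((n, k))            # best score ending at (position, tag)
    backptr = np.zeros((n, k), dtype=int)
    dp[0] = P[0]
    for i in range(1, n):
        # scores[prev, cur] = dp[i-1, prev] + A[prev, cur] + P[i, cur]
        scores = dp[i - 1][:, None] + A + P[i][None, :]
        backptr[i] = scores.argmax(axis=0)
        dp[i] = scores.max(axis=0)
    best = [int(dp[-1].argmax())]
    for i in range(n - 1, 0, -1):    # follow back-pointers to recover the path
        best.append(int(backptr[i, best[-1]]))
    return best[::-1]
```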

4 Incorporating dictionaries for Chinese CNER

From the brief description given above, we can observe that the Bi-LSTM-CRF model can learn information from large-scale labeled data. However, it cannot process rare and unseen entities very well. Hence, in this work, inspired by the success of integrating dictionaries into the CRF models for CNER [30, 31], we consider integrating dictionaries into the deep neural networks.

For a given sentence $x$, we first construct a feature vector $f_i$ for each character $c_i$ based on the dictionary $D$ and the context. We propose five feature representation schemes for $f_i$ to represent whether the character segments consisting of $c_i$ and its surrounding characters are clinical named entities or not. After that, we propose two architectures to integrate the feature vector $f_i$ into the Bi-LSTM-CRF model. We detail the feature vector construction and the integration architectures in the following subsections.

4.1 Feature vector construction

In this section, we propose five different schemes to represent dictionary features. These schemes can be further classified into three categories: n-gram feature, Position-Independent Entity Type (PIET) feature and Position-Dependent Entity Type (PDET) feature.

4.1.1 N-gram feature

Given a sentence $x$ and an external dictionary $D$, we construct text segments based on the context of each character $c_i$ using pre-defined n-gram feature templates. The templates used in our work are listed in Table 2.

Type: 2-gram, 3-gram, 4-gram, and 5-gram templates over character segments containing $c_i$.
Table 2: N-gram feature templates for the $i$-th character $c_i$, which are used to generate the feature vector $f_i$.

For a text segment that appears in an n-gram feature template, we generate a binary value indicating whether the text segment is a clinical named entity in $D$ or not; one such value is produced for each combination of n-gram template and entity type. For a dictionary with five types of clinical named entities, this finally yields a 40-dimensional feature vector containing entity type and boundary information for $c_i$. Fig. 2 illustrates an example of n-gram feature vector construction.

Figure 2: Example of n-gram feature vector construction. The character with the red shadow is the character $c_i$. The character segment with the solid rectangle is a body part in the dictionary $D$. Here d, s, t, e, b denote disease, symptom, treatment, exam, and body part, respectively.
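A small sketch of how such a 40-dimensional binary vector could be assembled is given below; since Table 2's exact template expressions are not reproduced above, the choice of two segments per n-gram length (one ending at $c_i$ and one starting at $c_i$) is an assumption made for illustration:

```python
def ngram_features(sentence, i, dictionary,
                   entity_types=("d", "s", "t", "e", "b"), max_n=5):
    """Binary dictionary features for character i (illustrative sketch).

    For each n in 2..max_n, two candidate segments are checked: the one
    ending at character i and the one starting at character i (an assumed
    template set).  For each segment, one bit per entity type marks whether
    it appears in `dictionary` (a dict mapping entity string -> type),
    giving (max_n - 1) * 2 * len(entity_types) = 40 dimensions.
    """
    feats = []
    for n in range(2, max_n + 1):
        left = sentence[i - n + 1:i + 1] if i - n + 1 >= 0 else None   # ends at i
        right = sentence[i:i + n] if i + n <= len(sentence) else None  # starts at i
        for segment in (left, right):
            for t in entity_types:
                feats.append(int(segment is not None
                                 and dictionary.get(segment) == t))
    return feats
```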

4.1.2 Position-Independent Entity Type (PIET) feature

Given a sentence $x$ and an external dictionary $D$, we first use the classic Bi-Directional Maximum Matching (BDMM) algorithm [32] to segment $x$. The pseudo code of the BDMM algorithm is provided in Algorithm 1. Then each character is labeled with the type of the entity to which it belongs, as shown in the third line of Table 1. The feature can be further represented in the format of one-hot encoding or feature embedding.

Input: A clinical sentence $x$, a dictionary $D$ with different types of clinical named entities, and the maximum length $m$ of the entities in $D$
Output: The entity list $T$ after segmentation
1 begin
2        $T_1 \leftarrow \emptyset$;
3        // Direction 1: from the beginning to the end
4        while $x$ is not empty do
5               cut a string $w$ of size $m$ (or the remaining length of $x$, if shorter) from the head of $x$;
6               make a match between $w$ and each entity in $D$;
7               if there is an entity that matches $w$ then
8                      split $w$ out from $x$;
9                      add $w$ with its type into $T_1$;
11              else
12                      remove a character from the tail of $w$;
13                      add the character back to $x$;
14                      make a match between $w$ and each entity in $D$;
15                      if a match is found then
16                             go to line 7;
18                     if $w$ contains only one character then
19                             split $w$ out from $x$;
20                             add $w$ into $T_1$ with type ‘‘None’’;
22                     else
23                             repeat lines 11-17;
26       $T_2 \leftarrow \emptyset$;
27        // Direction 2: from the end to the beginning
28        build $T_2$ in the same way as $T_1$;
29        if $T_1$ is better than $T_2$ then
30               $T \leftarrow T_1$;
32       else
33               $T \leftarrow T_2$;
return the entity list $T$ after segmentation;
Algorithm 1 The pseudo-code of the BDMM segmentation method
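The following Python sketch implements the two maximum-matching passes; the rule used to choose between the forward and backward results is not spelled out above, so the "fewer unmatched characters" heuristic is an assumption:

```python
def max_match(sentence, dictionary, max_len, reverse=False):
    """One directional pass of maximum matching.

    `dictionary` maps entity string -> entity type; unmatched characters get
    the type "None".  With reverse=True the sentence is scanned from the end,
    as in the second pass of Algorithm 1.
    """
    text = sentence[::-1] if reverse else sentence
    keys = {k[::-1]: v for k, v in dictionary.items()} if reverse else dictionary
    result, pos = [], 0
    while pos < len(text):
        for n in range(min(max_len, len(text) - pos), 0, -1):
            piece = text[pos:pos + n]
            if n == 1 or piece in keys:        # longest match, else a single char
                result.append((piece, keys.get(piece, "None")))
                pos += n
                break
    if reverse:                                 # restore left-to-right order
        result = [(p[::-1], t) for p, t in reversed(result)]
    return result


def bdmm(sentence, dictionary, max_len):
    """Bi-directional maximum matching: run both passes and keep the one with
    fewer unmatched single characters (heuristic; the paper's exact rule is
    not reproduced here)."""
    fwd = max_match(sentence, dictionary, max_len)
    bwd = max_match(sentence, dictionary, max_len, reverse=True)
    unmatched = lambda seg: sum(1 for _, t in seg if t == "None")
    return fwd if unmatched(fwd) <= unmatched(bwd) else bwd
```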

4.1.3 Position-Dependent Entity Type (PDET) feature

The PIET feature only considers the type of the entity to which a character belongs. Different from the PIET feature, the PDET feature also takes the position of a character within an entity into account: if the character is a single-character entity, we add the flag “S” before the PIET feature; otherwise, for the first character of an entity we add the flag “B”, for the last character we add the flag “E”, and for the middle character(s) we add the flag “I”. An example is shown in the fourth line of Table 1. Similar to the PIET feature, the PDET feature can also be represented in the format of one-hot encoding or feature embedding.
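For illustration, a small sketch (the helper name is hypothetical) that expands a BDMM segmentation into the per-character PIET and PDET labels of Table 1:

```python
def piet_pdet_labels(segments):
    """Expand (segment, type) pairs from BDMM into per-character PIET and
    PDET labels, mirroring the third and fourth rows of Table 1."""
    piet, pdet = [], []
    for segment, etype in segments:
        if etype == "None":                       # characters outside any dictionary entity
            piet.extend(["None"] * len(segment))
            pdet.extend(["None"] * len(segment))
            continue
        piet.extend([etype] * len(segment))       # position-independent label
        if len(segment) == 1:
            pdet.append(f"S-{etype}")
        else:
            pdet.append(f"B-{etype}")
            pdet.extend([f"I-{etype}"] * (len(segment) - 2))
            pdet.append(f"E-{etype}")
    return piet, pdet
```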

To some degree, the feature vector, whether an n-gram feature or an entity type feature, represents the candidate labels of a character based on the given dictionary. The values in the feature vector depend only on the context and the dictionary; they are not affected by other sentences or corpus-level statistics. Hence, feature vectors can provide information quite different from that learned by data-driven methods.

4.2 Integration architecture

Through the construction steps of the dictionary feature vectors, given a sentence $x$, we obtain both a character embedding $e_i$ and a feature vector $f_i$ for each character $c_i$. As described above, the original Bi-LSTM-CRF model only takes $e_i$ as input. Since dictionary features can provide valuable information for CNER, we integrate them into the original Bi-LSTM-CRF model. In this section, we introduce two different architectures for integrating feature vectors with character embeddings.

4.2.1 Model-I

The general architecture of the proposed model is illustrated in Fig. 3. The character embedding and feature vector are first concatenated and then fed into a Bi-LSTM layer:

$z_i = e_i \oplus f_i$   (11)
$h_i = \mathrm{BiLSTM}(z_i)$   (12)

where $e_i$ denotes the embedding vector of $c_i$, $f_i$ represents the feature vector of $c_i$, and $\mathrm{BiLSTM}(\cdot)$ denotes the bidirectional LSTM of Section 3.2 applied to the sequence $(z_1, \ldots, z_n)$.

The other part of this model uses the same operation as the basic Bi-LSTM-CRF model.

Figure 3: Main architecture of Model-I.
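A minimal PyTorch sketch of Model-I is shown below; the layer sizes follow Table 4, the CRF layer of Section 3 is omitted, and it assumes each character's dictionary feature (e.g., its PDET label) has been mapped to an integer index:

```python
import torch
import torch.nn as nn

class ModelI(nn.Module):
    """Sketch of Model-I: character embeddings and dictionary-feature
    embeddings are concatenated (Eq. 11) and fed into one Bi-LSTM (Eq. 12);
    the CRF layer of Section 3 (omitted here) would consume the tag scores."""

    def __init__(self, n_chars, n_feat_labels, n_tags,
                 char_dim=128, feat_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.feat_emb = nn.Embedding(n_feat_labels, feat_dim)  # PIET/PDET label ids
        self.bilstm = nn.LSTM(char_dim + feat_dim, hidden // 2,
                              batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.2)
        self.emit = nn.Linear(hidden, n_tags)   # per-character tag scores P

    def forward(self, chars, feats):
        # chars, feats: LongTensors of shape (batch, seq_len)
        z = torch.cat([self.char_emb(chars), self.feat_emb(feats)], dim=-1)  # Eq. (11)
        h, _ = self.bilstm(z)                                                # Eq. (12)
        return self.emit(self.dropout(h))   # scored jointly by the CRF layer
```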

4.2.2 Model-II

The general architecture of the proposed model is illustrated in Fig. 4. The two parallel Bi-LSTMs extract context information and potential entity information, respectively. For a sentence $x$, the hidden states of the two parallel Bi-LSTMs at position $i$ are defined as:

$h_i^{c} = \mathrm{BiLSTM}_1(e_i)$   (13)
$h_i^{f} = \mathrm{BiLSTM}_2(f_i)$   (14)

where $e_i$ denotes the embedding vector of $c_i$, and $f_i$ represents the feature vector of $c_i$. Note that in our formulation, the two parallel Bi-LSTMs are independent, without any shared parameters.

Finally, we concatenate the two hidden states of the parallel Bi-LSTMs as the input of the CRF layer:

$h_i = h_i^{c} \oplus h_i^{f}$   (15)

The other part of this model is the same as the basic Bi-LSTM-CRF model.

Figure 4: Main architecture of Model-II.
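Analogously, a minimal PyTorch sketch of Model-II with two independent Bi-LSTMs (same assumptions as the Model-I sketch above):

```python
import torch
import torch.nn as nn

class ModelII(nn.Module):
    """Sketch of Model-II: two independent Bi-LSTMs encode characters and
    dictionary features separately (Eqs. 13-14); their hidden states are
    concatenated (Eq. 15) before the (omitted) CRF layer."""

    def __init__(self, n_chars, n_feat_labels, n_tags,
                 char_dim=128, feat_dim=128, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.feat_emb = nn.Embedding(n_feat_labels, feat_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden // 2,
                                 batch_first=True, bidirectional=True)
        self.feat_lstm = nn.LSTM(feat_dim, hidden // 2,
                                 batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.2)
        self.emit = nn.Linear(2 * hidden, n_tags)

    def forward(self, chars, feats):
        h_char, _ = self.char_lstm(self.char_emb(chars))  # Eq. (13): context information
        h_feat, _ = self.feat_lstm(self.feat_emb(feats))  # Eq. (14): dictionary information
        h = torch.cat([h_char, h_feat], dim=-1)           # Eq. (15)
        return self.emit(self.dropout(h))   # tag scores for the CRF layer
```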

5 Computational studies

The dictionary we exploit in the experiments is constructed according to the lists of charging items and drug information in Shanghai Shuguang Hospital as well as some medical literature such as 《人体解剖学名词(第二版)》 (Chinese Terms in Human Anatomy [Second Edition]).

We use the CCKS-2017 Task 2 benchmark dataset (publicly available at http://www.ccks2017.com/en/index.php/sharedtask/) to conduct our experiments. The dataset contains 1,596 annotated instances (10,024 sentences) with five types of clinical named entities: diseases, symptoms, exams, treatments, and body parts. The annotated instances are already partitioned into 1,198 training instances (7,906 sentences) and 398 test instances (2,118 sentences). Each instance contains one or several sentences; we further split these sentences into clauses at commas. The statistics of the different types of entities are listed in Table 3.

We use the standard micro-average precision, recall, and F-Measure [33] to evaluate the methods in the following experiments.
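For reference, a minimal sketch of micro-averaged precision, recall, and F-measure computed over exact-match entity mentions (the tuple format is an assumption):

```python
def micro_prf(gold_entities, pred_entities):
    """Micro-averaged precision/recall/F1 over all entity mentions.

    Both arguments are lists (one per sentence) of sets of (start, end, type)
    tuples; an entity counts as correct only on an exact boundary-and-type match.
    """
    tp = sum(len(g & p) for g, p in zip(gold_entities, pred_entities))
    n_pred = sum(len(p) for p in pred_entities)
    n_gold = sum(len(g) for g in gold_entities)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```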

Type training set test set
disease 722 553
symptom 7,831 2,311
exam 9,546 3,143
treatment 1,048 465
body part 10,719 3,021
sum 29,866 9,493
Table 3: Statistics of different types of entities.

5.1 Experimental settings

Parameter configuration may influence the performance of a deep neural network. The parameter configurations of the proposed approach are shown in Table 4. Note that, to avoid material impacts on the experimental results, the number of hidden units of each LSTM in Model-II is set to half of that in Model-I, so that the input size of the CRF layer at each time step in Model-II is the same as that in Model-I.

We initialize character embeddings and feature embeddings via word2vec [34] on both the training data and the test data. The dropout technique [35] is applied to the outputs of the Bi-LSTM layers to reduce over-fitting. The models are trained with the Adam optimization algorithm [36] using its default parameter settings.
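As an illustration of the embedding initialization, a minimal sketch using gensim's word2vec over character sequences; the library choice, the parameter values other than the 128-dimensional size, and the variable names are assumptions:

```python
from gensim.models import Word2Vec

# Treat each clause as a sequence of characters and train 128-dimensional
# character embeddings (gensim >= 4.x uses `vector_size`; older versions use `size`).
corpus = [list(clause) for clause in train_clauses + test_clauses]  # hypothetical variables
w2v = Word2Vec(sentences=corpus, vector_size=128, window=5, min_count=1, sg=1)

# The resulting vectors can then initialize the nn.Embedding weight matrices
# of the models sketched in Section 4.2.
char_vectors = {ch: w2v.wv[ch] for ch in w2v.wv.index_to_key}
```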

Parameters value
character embedding size = 128
feature embedding size = 128
number of LSTM hidden units in Model-I = 256
number of LSTM hidden units in Model-II = 128 (for each of the two parallel Bi-LSTMs)
dropout rate = 0.2
batch size = 128
Table 4: Parameter configurations of the proposed approach.

5.2 Comparative results of different combinations between feature representations and integration architectures

In this section, we compare the different combinations of the five feature representation schemes (grouped into three categories) and the two integration architectures. The comparative results are listed in Table 5.

Feature representation | Model-I (Precision / Recall / F-Measure) | Model-II (Precision / Recall / F-Measure)
N-gram feature | 88.39 / 88.46 / 88.43 | 88.72 / 88.71 / 88.71
PIET feature, one-hot encoding | 89.53 / 90.58 / 90.05 | 89.38 / 90.49 / 89.93
PIET feature, feature embedding | 90.11 / 90.01 / 90.56 | 90.00 / 90.60 / 90.30
PDET feature, one-hot encoding | 90.51 / 91.04 / 90.77 | 90.22 / 90.64 / 90.43
PDET feature, feature embedding | 90.83 / 91.64 / 91.24 | 90.36 / 91.35 / 90.85
Table 5: Comparative results of combinations between different feature representations and integration architectures.

Table 5 displays the computational results of each combination. First of all, Model-I with PDET features achieves the best performance among all ten models, with 90.83 in Precision, 91.64 in Recall, and 91.24 in F-Measure. Second, as for feature representations, n-gram features perform the worst compared with PIET and PDET features, because n-gram features only consider the boundary information of potential entities and ignore characters in the middle of entities. PDET features achieve the best results, because this type of feature indicates not only the potential type but also the potential boundary of clinical named entities in the dictionary. Furthermore, for both PIET and PDET features, feature embedding yields better results than one-hot encoding, because the dense vector representation carries more information than one-hot encoding. Third, except when using n-gram features, Model-I performs better than Model-II in F-Measure, with an improvement of about 0.28 on average, which indicates that considering characters and their dictionary features together is better than considering them separately.

5.3 Compared with two base models

According to the comparative results in Section 5.2, the model combining Model-I and PDET features achieves the best performance. In this section, we compare this best model (Model-I with PDET features) with two base models, i.e., the BDMM algorithm (see Algorithm 1) and the basic Bi-LSTM-CRF model (see Section 3). The comparative results are summarized in Table 6.

Method Precision Recall F-Measure
BDMM algorithm 70.29 84.44 76.72
basic Bi-LSTM-CRF 88.22 88.53 88.38
our best model 90.83 91.64 91.24
Table 6: Comparative results between our best model and two base models.

From Table 6, we clearly observe that our best model performs better than the two base models, which indicates the benefit of incorporating a dictionary into a Bi-LSTM-CRF model. The BDMM algorithm with dictionaries performs the worst among the three models, and its Precision is far below its Recall. One reason is that applying the BDMM algorithm directly may annotate clinical named entities with wrong boundaries. For example, “双侧瞳孔” (both pupils) is a body part in the clinical text, but it is not in the dictionary; the dictionary only contains the body part “瞳孔” (pupil), so the entity will be falsely recognized as “瞳孔”. Another reason is type errors: the same entity can correspond to different entity types in different contexts. For example, “维生素 C” (vitamin C) is a drug name in the clause “维生素C注射2g” (inject 2 grams of vitamin C), while it is an exam index in the clause “缺乏维生素 C” (lack of vitamin C). However, the BDMM algorithm can only handle the situation where an entity corresponds to a single entity type.

5.4 Compared with state-of-the-art deep models

In this section, we compare our best model, i.e., Model-I with PDET features, with three state-of-the-art deep models from the literature. Li et al. [37] treat the Chinese CNER task as a word-level sequence labeling problem and exploit a Bi-LSTM-CRF model to solve it. To improve recognition, they also use health domain datasets to create richer, more specialized word embeddings, and utilize external health domain lexicons to help word segmentation. Ouyang et al. [38] adopt a bidirectional RNN-CRF architecture with concatenated n-gram character representations to recognize Chinese clinical named entities. They also incorporate word segmentation results, part-of-speech (POS) tagging, and medical vocabulary as features into their model. Hu et al. [39] develop a hybrid system based on rules, CRF, and LSTM methods for the CNER task. They also utilize a self-training algorithm on extra unlabeled clinical texts to improve recognition performance. Note that except for Li et al. [37], these systems all regard the CNER task as a character-level sequence labeling problem.

The comparative results are shown in Table 7. From the table, we can see that our best model achieves the best results among all the models. Li et al. [37] obtain the worst performance because their word-level approach inevitably suffers from word segmentation errors, leading to boundary errors during recognition, and because the word vocabulary is much larger than the character vocabulary, so the corpus may not be large enough to learn word embeddings effectively. Moreover, Hu et al. [39] utilize three separate models to handle the task and obtain the best F-Measure among the previous models, whereas we exploit only one model and achieve a 0.16% improvement in F-Measure over their best result.

Methods | Precision | Recall | F-Measure
Li et al. [37] | - | - | 87.95
Ouyang et al. [38] | - | - | 88.85
Hu et al. [39] | 94.49 | 87.79 | 91.02
Hu et al. [39]* | 92.99 | 89.25 | 91.08
Our best model | 90.83 | 91.64 | 91.24
  • * These results are obtained by allowing the use of external resources for self-training.

Table 7: Comparative results between our best model and state-of-the-art deep models.

5.5 Parameter analysis

In this section, we investigate the impact of the dictionary size and the number of LSTM hidden units on the performance of our approach. In the following experiments, all results are obtained with Model-I and PDET features.

5.5.1 The effect of the dictionary size

We first investigate the effect of the dictionary size. We randomly select different proportions of the entities from the original dictionary to construct new dictionaries of different sizes. Fig. 5 summarizes the computational results.

Figure 5: The impact of the different dictionary size on the performance of our proposed approach in terms of Precision, Recall and F-Measure.

From Fig. 5, we can see that the performance of our proposed approach improves gradually as the dictionary size increases, on all three evaluation measures. In other words, a dictionary containing more entities yields better results.

5.5.2 The effect of the hidden unit number

We further explore the influence of the number of hidden units of the LSTM that takes the concatenation of character embeddings and PDET feature vectors as its input. In our experiments, the number of hidden units is set to 128, 192, 256, 320, and 384, respectively. The results are shown in Fig. 6.

Figure 6: The impact of different number of hidden units on the performance of our proposed approach in terms of Precision, Recall and F-Measure.

From Fig. 6, we can clearly observe that as the number of hidden units increases, the Precision of our proposed approach first grows slightly, then remains stable, and finally drops. The same trend holds for Recall and F-Measure. These observations seem reasonable because a more complex model has greater expressive power; however, the complexity of the model should match the amount of training data. If the model is too complex, it will over-fit and generalize poorly.

6 Conclusion

Previous methods do not take human knowledge into consideration when recognizing clinical named entities. In this paper, we propose effective approaches for Chinese CNER by integrating dictionaries into neural networks. Five different feature representation schemes are designed to extract information from given dictionaries for clinical texts, and two different architectures are introduced to use the information extracted from the dictionaries. Because dictionaries contain rare and unseen entities, the proposed approaches can process such entities better than previous methods.

Experimental results on the CCKS-2017 Task 2 benchmark dataset show that the incorporated dictionaries can significantly enhance the performance of deep neural networks for Chinese CNER and achieve highly competitive results compared with state-of-the-art deep neural network methods.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (No. 61772201), the Scientific Research Program of Shanghai funded by the Shanghai Science and Technology Committee (No. 16511101000), and the Science and Technology Innovation Project of Traditional Chinese Medicine funded by the Shanghai Health and Family Planning Commission (No. ZYKC201601013).

References

  • [1] H. Duan, Y. Zheng, A study on features of the CRFs-based chinese named entity recognition, International Journal of Advanced Intelligence Paradigms 3.
  • [2] M. Gridach, Character-level neural network for biomedical named entity recognition, Journal of Biomedical Informatics 70 (2017) 85–91.
  • [3] M. Habibi, L. Weber, M. Neves, D. L. Wiegandt, U. Leser, Deep learning with word embeddings improves biomedical named entity recognition, Bioinformatics 33 (14) (2017) i37–i48.
  • [4] Z. Huang, W. Xu, K. Yu, Bidirectional LSTM-CRF models for sequence tagging, arXiv preprint arXiv:1508.01991.
  • [5] C. Friedman, P. O. Alderson, J. H. M. Austin, J. J. Cimino, S. B. Johnson, A general natural-language text processor for clinical radiology., Journal of the American Medical Informatics Association 1 (2) (1994) 161–174.
  • [6] K. Fukuda, A. Tamura, T. Tsunoda, T. Takagi, Toward information extraction: identifying protein names from biological papers., in: Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 1998, pp. 707–718.
  • [7] Q. T. Zeng, S. Goryachev, S. Weiss, M. Sordo, S. N. Murphy, R. Lazarus, Extracting principal diagnosis, co-morbidity and smoking status for asthma research: evaluation of a natural language processing system, BMC Medical Informatics and Decision Making 6 (1) (2006) 1–9.
  • [8] G. K. Savova, J. J. Masanz, P. V. Ogren, J. Zheng, S. Sohn, K. C. Kipper-Schuler, C. G. Chute, Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications, Journal of the American Medical Informatics Association Jamia 17 (5) (2010) 507.
  • [9] T. C. Rindflesch, L. Tanabe, J. N. Weinstein, L. Hunter, Edgar: extraction of drugs, genes and relations from the biomedical literature, in: Biocomputing 2000, World Scientific, 1999, pp. 517–528.
  • [10] R. Gaizauskas, G. Demetriou, K. Humphreys, Term recognition and classification in biological science journal articles, in: Computional Terminology for Medical & Biological Applications Workshop of the 2nd International Conference on NLP, 2000, pp. 37–44.
  • [11] M. Song, H. Yu, W. Han, Developing a hybrid dictionary-based bio-entity recognition technique, BMC Med. Inf. & Decision Making 15 (S-1) (2015) S9.
  • [12] J. Lei, B. Tang, X. Lu, K. Gao, M. Jiang, H. Xu, A comprehensive study of named entity recognition in chinese clinical text, Journal of the American Medical Informatics Association : JAMIA 21 (5) (2014) 808–814.
  • [13] J. Lei, Named entity recognition in chinese clinical text, UT SBMI Dissertations (Open Access).
  • [14] G. D. Zhou, J. Su, Named entity recognition using an HMM-based chunk tagger, in: Meeting on Association for Computational Linguistics, 2002, pp. 473–480.
  • [15] A. Mccallum, D. Freitag, F. Pereira, Maximum entropy markov models for information extraction and segmentation, Proceedings of the 17th International Conference on Machine Learning (2000) 591–598.
  • [16] J. Finkel, S. Dingare, H. Nguyen, M. Nissim, C. Manning, G. Sinclair, Exploiting context for biomedical entity recognition: from syntax to the web, in: International Joint Workshop on Natural Language Processing in Biomedicine and ITS Applications, 2004, pp. 88–91.
  • [17] A. Mccallum, W. Li, Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons, in: Conference on Natural Language Learning at Hlt-Naacl, 2003, pp. 188–191.
  • [18] B. Settles, Biomedical named entity recognition using conditional random fields and rich feature sets, In Proceedings of COLING 2004, International Joint Workshop On Natural Language Processing in Biomedicine and its Applications (NLPBA) (2004) 104–107.
  • [19] M. Skeppstedt, M. Kvist, G. H. Nilsson, H. Dalianis, Automatic recognition of disorders, findings, pharmaceuticals and body structures from clinical text, Journal of Biomedical Informatics 49 (C) (2014) 148–158.
  • [20] Y. C. Wu, T. K. Fan, Y. S. Lee, S. J. Yen, Extracting named entities using support vector machines, in: International Workshop on Knowledge Discovery in Life Science Literature, 2006, pp. 91–103.
  • [21] Z. Ju, J. Wang, F. Zhu, Named entity recognition from biomedical text using SVM, in: International Conference on Bioinformatics and Biomedical Engineering, 2011, pp. 1–4.
  • [22] Y. Wu, M. Jiang, J. Lei, H. Xu, Named entity recognition in chinese clinical text using deep neural network, Stud Health Technol Inform 216 (2015) 624–628.
  • [23] D. Zeng, C. Sun, L. Lin, B. Liu, LSTM-CRF for drug-named entity recognition, Entropy 19 (6) (2017) 283.
  • [24] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (8) (1997) 1735–1780.
  • [25] K. Cho, B. V. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using RNN encoder-decoder for statistical machine translation, arXiv preprint arXiv:1406.1078.
  • [26] Y. Xia, Q. Wang, Clinical named entity recognition: ECUST in the CCKS-2017 shared task 2, in: CEUR Workshop Proceedings, Vol. 1976, Chengdu, China, 2017, pp. 43 – 48.
  • [27] A. Graves, J. Schmidhuber, Framewise phoneme classification with bidirectional LSTM networks, in: IEEE International Joint Conference on Neural Networks, IJCNN ’05. Proceedings, 2005, pp. 2047–2052 vol. 4.
  • [28] J. D. Lafferty, A. Mccallum, F. C. N. Pereira, Conditional random fields: probabilistic models for segmenting and labeling sequence data, in: Eighteenth International Conference on Machine Learning, 2001, pp. 282–289.
  • [29] L. R. Rabiner, A tutorial on hidden markov models and selected applications in speech recognition, Readings in Speech Recognition 77 (2) (1990) 267–296.
  • [30] H. Lin, Y. Li, Z. Yang, Incorporating dictionary features into conditional random fields for gene/protein named entity recognition, in: Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, 2007, pp. 162–173.
  • [31] D. Li, K. Kipper-Schuler, G. Savova, Conditional random fields and support vector machines for disorder named entity recognition in clinical texts, in: Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, BioNLP 2008, Columbus, Ohio, June 19, 2008, 2008, pp. 94–95.
  • [32] R. L. Gai, F. Gao, L. M. Duan, X. H. Sun, H. Z. Li, Bidirectional maximal matching word segmentation algorithm with rules, in: Progress in Applied Sciences, Engineering and Technology, Vol. 926 of Advanced Materials Research, Trans Tech Publications, 2014, pp. 3368–3372.
  • [33] Y. Liu, Y. Zhou, S. Wen, C. Tang, A strategy on selecting performance metrics for classifier evaluation, International Journal of Mobile Computing & Multimedia Communications 6 (4) (2014) 20–35.
  • [34] T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, Computer Science.
  • [35] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15 (1) (2014) 1929–1958.
  • [36] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
  • [37] Z. Li, Q. Zhang, Y. Liu, D. Feng, Z. Huang, Recurrent neural networks with specialized word embedding for chinese clinical named entity recognition, in: CEUR Workshop Proceedings, Vol. 1976, Chengdu, China, 2017, pp. 55 – 60.
  • [38] E. Ouyang, Y. Li, L. Jin, Z. Li, X. Zhang, Exploring n-gram character presentation in bidirectional RNN-CRF for chinese clinical named entity recognition, in: CEUR Workshop Proceedings, Vol. 1976, Chengdu, China, 2017, pp. 37 – 42.
  • [39] J. Hu, X. Shi, Z. Liu, X. Wang, Q. Chen, B. Tang, HITSZ_CNER: A hybrid system for entity recognition from chinese clinical text, in: CEUR Workshop Proceedings, Vol. 1976, Chengdu, China, 2017, pp. 25 – 30.