Sequence labeling is a problem in which a label is assigned to each word in an input sentence. In many label sets, each label consists of different types of elements. For example, IOB-format entity labels Ramshaw and Marcus (1995), such as B-Person and I-Location, can be decomposed into span (e.g., B, I and O) and type information (e.g., Person and Location). Also, morphological feature tags More et al. (2018), such as Gender=Masc|Number=Sing, can be decomposed into gender, number and other information.
Typical sequence labeling models, however, do not consider such components. Specifically, the probability that each word is assigned a label is computed on the basis of the inner product between the word representation and a label embedding (see Equation 2 in Section 2.1). Here, each label embedding is associated with one label and trained independently, without considering its components. This means that labels are treated as mutually exclusive. In fact, labels often share components. Consider the labels B-Person and I-Person: they share the component Person, and injecting such component information into the label embeddings can improve generalization performance.
Motivated by this, we propose a method that shares and learns the embeddings of label components (see details in Section 2.2). Specifically, we first decompose each label into its components. We then assign an embedding to each component and summarize the embeddings of all the components into one as a label embedding used in a model. This component-level operation enables the model to share information on the common components across label embeddings.
To investigate the effectiveness of our method, we take the task of fine-grained Named Entity Recognition (NER) as a case study. Typically, in this task, a large number of entity-type labels are predefined in a hierarchical structure, and intermediate type labels can be used as label components, as well as leaf type labels and B/I-labels. In this sense, the fine-grained NER can be seen as a good example of the potential applications of the proposed method. Furthermore, some entity labels occur more frequently than others. An interesting question is whether our method of label component sharing exhibits an improvement in recognizing entities of infrequent labels. In our experiments, we use the English and Japanese NER corpora with the Extended Named Entity Hierarchy Sekine et al. (2002) including 200 entity tags. To sum up, our main contributions are as follows: (i) we propose a method that shares and learns label component embeddings, and (ii) through experiments on English and Japanese fine-grained NER, we demonstrate that the proposed method achieves better performance than a standard sequence labeling model, especially for instances with low-frequency labels.
2.1 Baseline model
We describe our baseline model, illustrated in Figure 1. Given an input sentence, the encoder converts each word into a feature vector. The inner product between each feature vector and each label embedding is then calculated to compute the label distribution. Finally, the IOB2-format label Ramshaw and Marcus (1995) with the highest probability is assigned to each token. In Figure 1, the label B-Park, indicating the leftmost token of an entity, is assigned to 南 (South), and I-Park, indicating a token inside an entity, is assigned to 公園 (Park). The label O, indicating tokens outside entities, is assigned to に (to) and 行く (go).
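The IOB2 decoding described here can be sketched as a minimal helper (our own illustrative code, not the paper's implementation):

```python
def iob2_spans(tokens, labels):
    """Collect (entity_tokens, type) spans from IOB2 labels."""
    spans, cur, cur_type = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):        # leftmost token of an entity
            if cur:
                spans.append((cur, cur_type))
            cur, cur_type = [tok], lab[2:]
        elif lab.startswith("I-") and cur:  # token inside the current entity
            cur.append(tok)
        else:                            # O, or an I- without a preceding B-
            if cur:
                spans.append((cur, cur_type))
            cur, cur_type = [], None
    if cur:
        spans.append((cur, cur_type))
    return spans

# The example sentence from Figure 1:
iob2_spans(["南", "公園", "に", "行く"], ["B-Park", "I-Park", "O", "O"])
# → [(["南", "公園"], "Park")]
```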
Formally, for each word $w_t$ in the input sentence $w_1, \dots, w_T$, the model outputs the label $\hat{y}_t$ with the highest probability:

$$\hat{y}_t = \operatorname*{argmax}_{y \in \mathcal{Y}} P(y \mid w_t), \tag{1}$$

where $\mathcal{Y}$ is the label set defined in each dataset. The probability distribution is calculated as

$$P(y \mid w_t) = \frac{\exp(\mathbf{W}_y \cdot \mathbf{h}_t)}{\sum_{y' \in \mathcal{Y}} \exp(\mathbf{W}_{y'} \cdot \mathbf{h}_t)}, \tag{2}$$

where $\mathbf{W} \in \mathbb{R}^{|\mathcal{Y}| \times d}$ is a weight matrix for the label set $\mathcal{Y}$ ($d$ is the number of dimensions of each weight vector). Each row of this matrix is associated with one label $y \in \mathcal{Y}$, and $\mathbf{W}_y$ denotes the row vector for label $y$. $\mathbf{h}_t$ denotes the vector encoded by a neural-network-based encoder.
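The label distribution in Equation 2 can be sketched with a toy label set; the label names, dimensionality, and random weights below are our own stand-ins, not the paper's:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical tiny label set; W is a |Y| x d weight matrix whose rows
# are independently trained label embeddings (the baseline of Equation 2).
labels = ["B-Person", "I-Person", "O"]
rng = np.random.default_rng(0)
W = rng.normal(size=(len(labels), 4))  # one row per label
h = rng.normal(size=4)                 # encoder output for one token

p = softmax(W @ h)                     # label distribution for the token
predicted = labels[int(np.argmax(p))]  # Equation 1: take the argmax
```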
2.2 Embeddings of label components
We propose to integrate label component information as embeddings into models. This procedure consists of two steps: (i) label decomposition and (ii) label embedding calculation.
We first decompose each label into its components. Each label consists of multiple types of components. Consider the following example.
The labels defined in a general entity tag set consist of the IOB (e.g., B) and entity (e.g., Park) component types. Consider another example.
The labels defined in the Extended Named Entity tag set Sekine et al. (2002) consist of four component types: IOB (e.g., B), the top layer of the entity tag hierarchy (e.g., Facility), the second layer (e.g., GOE), and the third layer (e.g., Park). In this way, we can regard each label as a set of components (type–value pairs).
Formally, we denote the components of each label $y$ by $\{v^{(1)}_{i_1}, \dots, v^{(K)}_{i_K}\}$, where $i_k$ is the index of the value of the $k$-th component type. The above two examples are represented as $\{\text{B}, \text{Park}\}$ and $\{\text{B}, \text{Facility}, \text{GOE}, \text{Park}\}$. This formalization is applicable to arbitrary label sets whose labels consist of type–value components.
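The label decomposition step can be sketched as a small helper; the component type names (iob, layer0, …) and the label string format are our own illustrative assumptions:

```python
def decompose(label):
    """Split an IOB label over a hierarchical entity tag into
    (component type, value) pairs. Illustrative only."""
    if label == "O":
        return {"iob": "O"}
    iob, tag = label.split("-", 1)      # e.g., "B", "Facility/GOE/Park"
    comps = {"iob": iob}
    for depth, part in enumerate(tag.split("/")):
        comps[f"layer{depth}"] = part   # one entry per hierarchy layer
    return comps

decompose("B-Facility/GOE/Park")
# → {"iob": "B", "layer0": "Facility", "layer1": "GOE", "layer2": "Park"}
```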
Label embedding calculation
We then assign an embedding (i.e., a trainable vector representation) to each label component and combine the embeddings of all the components within a label into one label embedding. In this study, we investigate two typical combination techniques: (a) summation and (b) concatenation.
The embedding of each label, $\mathbf{W}_y$, is calculated by summing the embeddings of its components:

$$\mathbf{W}_y = \sum_{k=1}^{K} \mathbf{E}^{(k)}_{i_k}, \tag{3}$$

where $\mathbf{E}^{(k)}$ is an embedding matrix for each component type $k$, and $\mathbf{E}^{(k)}_{i_k}$ denotes its $i_k$-th row vector. Figure 2 illustrates this calculation process. The label B-Facility/GOE/Park consists of four components (i.e., B, Facility, GOE and Park), each value of which is associated with a row vector of the corresponding matrix $\mathbf{E}^{(k)}$.
The embedding of each label, $\mathbf{W}_y$, is calculated by concatenating the embeddings of its components:

$$\mathbf{W}_y = [\mathbf{E}^{(1)}_{i_1}; \dots; \mathbf{E}^{(K)}_{i_K}], \tag{4}$$

where, similarly to Equation 3, $\mathbf{E}^{(k)}$ is an embedding matrix for each component type $k$. Unlike Equation 3, the label component embeddings are concatenated into one embedding.
Compared with the summation, one disadvantage of the concatenation is memory efficiency: the number of dimensions of each label embedding grows with the number of component types $K$.
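The concatenation variant of Equation 4 can be sketched similarly; note how the output dimensionality grows with the number of component types (again, the sizes and values here are our own toy choices):

```python
import numpy as np

d = 4  # per-component embedding size (hypothetical)
rng = np.random.default_rng(2)
E = {
    "iob":    {"B": rng.normal(size=d)},
    "layer0": {"Facility": rng.normal(size=d)},
    "layer1": {"GOE": rng.normal(size=d)},
    "layer2": {"Park": rng.normal(size=d)},
}

def label_embedding_concat(components, types=("iob", "layer0", "layer1", "layer2")):
    """Concatenate component embeddings in a fixed type order (Equation 4)."""
    return np.concatenate([E[t][components[t]] for t in types])

v = label_embedding_concat({"iob": "B", "layer0": "Facility",
                            "layer1": "GOE", "layer2": "Park"})
# v has d * K = 4 * 4 = 16 dimensions: the label embedding size
# grows linearly with the number of component types K.
```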
Our label embedding calculation enables models to share the embeddings of components that are common across labels. For example, the embeddings of both B-Facility/GOE/Park and B-Facility/GOE/School are calculated from the embeddings of their shared components (i.e., B, Facility and GOE). Equations 3 and 4 can be regarded as a general form of the hierarchical label matrix proposed by Shimaoka et al. (2017), because our method can treat not only hierarchical structures but also any type of type–value set, such as morphological feature labels (e.g., Gender=Masc|Number=Sing).
We use the Extended Named Entity Corpus for English and Japanese fine-grained NER Mai et al. (2018). (We e-mailed the authors of Mai et al. (2018) and received the English dataset; the Japanese dataset is available at https://www.gsk.or.jp/catalog/gsk2014-a/.) In this dataset, each NE is assigned one of 200 entity labels defined in the Extended Named Entity Hierarchy Sekine et al. (2002). For the English dataset, we follow the training/development/test split defined by Mai et al. (2018). For the Japanese dataset, we follow the training/development/test split of Universal Dependencies (UD) Japanese-BCCWJ Asahara et al. (2018) (https://github.com/UniversalDependencies/UD_Japanese-BCCWJ). Table 1 shows the statistics of the dataset.
There is a gap between label frequencies, i.e., how many times each label appears in the training set. We categorize each label into one of three classes on the basis of its frequency, as shown in Table 2. For example, if a label appears – times in the training set, it is categorized into the “Low” class. Table 2 also shows how many times entities with labels belonging to each frequency class appear in the development and test sets. To better understand model behavior, we investigate the performance for each frequency class.
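The frequency-based categorization can be sketched as follows; the class thresholds below are placeholders of our own, since the paper's actual boundaries (Table 2) were lost in extraction:

```python
from collections import Counter

def frequency_class(count, low_max=10, mid_max=100):
    """Bin a training-set label count into Low/Mid/High.
    The thresholds are illustrative placeholders."""
    if count <= low_max:
        return "Low"
    if count <= mid_max:
        return "Mid"
    return "High"

# Toy training-set label sequence: one rare entity label, many O labels.
train_labels = ["B-Facility/GOE/Park"] * 3 + ["O"] * 500
counts = Counter(train_labels)
classes = {lab: frequency_class(c) for lab, c in counts.items()}
```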
As the encoder in Equation 2 in Section 2.1, we use BERT Devlin et al. (2019), a state-of-the-art language model. (We use the open-source NER model utilizing BERT: https://github.com/kamalkraj/BERT-NER. Note that the state-of-the-art model on the Extended Named Entity Corpus is the LSTM + CNN + CRF model that uses dictionary information Mai et al. (2018).) As the baseline model, we use a general label embedding matrix that does not consider label components, i.e., each label embedding in Equation 2 is randomly initialized and learned independently. In contrast, our proposed model calculates the label embedding matrix from label components (Equations 3 and 4). The only difference between these models is the label embedding matrix, so any observed performance gap between them stems from this point.
The overall hyperparameter settings are the same for the baseline and the proposed models. For English, we use BERT pre-trained on BooksCorpus and English Wikipedia Devlin et al. (2019). For Japanese, we use BERT pre-trained on Japanese Wikipedia Shibata et al. (2019). We fine-tune them on the Extended NER corpus for fine-grained NER with a fixed number of training epochs. Both the baseline and the proposed models are trained to minimize the cross-entropy loss, using Adam Kingma and Ba (2015) as the optimizer with fixed batch size and learning rate. We choose the dropout rate from among several candidates on the basis of the F1 scores on each development set. The number of dimensions of each label embedding in Equation 2 in the baseline model matches the number of dimensions of the hidden states in BERT, and the proposed models use the same dimension size for the label embeddings in Equations 3 and 4.
We report F1 scores averaged across five runs of model training with different random seeds. Table 3 shows F1 scores overall and for each label frequency class on each test set.
For the overall labels, the proposed models (Proposed:Sum and Proposed:Concat) outperformed the baseline model on English and Japanese datasets. These results suggest the effectiveness of our proposed method for calculating the label embeddings from label components.
Performance for each frequency class
For all label frequency classes, the proposed model with summation (Proposed:Sum) yielded the best results among the three models. In particular, for low-frequency labels, Proposed:Sum achieved a remarkable improvement in F1 over the baseline model, and the proposed model with concatenation (Proposed:Concat) also achieved an improvement. These results suggest that exploiting the embeddings of components shared across labels improves generalization performance, especially for low-frequency labels.
Recall that the entity tag set used in the datasets has a hierarchical structure. This means that label components at higher layers appear more frequently than those at lower layers and are shared across many labels. As shown in Table 3, the proposed models achieve performance improvements for low-frequency labels. We can thus expect that the embeddings of high-frequency shared label components help the model correctly predict low-frequency labels. To verify this hypothesis, we compare the F1 scores of the baseline and proposed models in Table 4. The targets of this investigation are the three-layered, low-frequency labels whose second-layer component is high-frequency. (We exclude labels that consist of only two layers, such as Timex/Date, and we regard second-layer components appearing over 100 times in the training set as high-frequency.) As shown in Table 4, the Proposed:Sum model outperformed the baseline model. This indicates that using shared components is effective for predicting low-frequency labels. On the other hand, the Proposed:Concat model underperformed the baseline model. One possible reason is that the model obtains less information by concatenating the component embeddings than by summing them.
3.4 Visualization of label embedding spaces
To better understand the label embeddings created from label components by our proposed method, we visualize the learned embeddings. Specifically, we hypothesize that if the embeddings successfully encode shared label component information, the embeddings of labels sharing components are close to each other and form clusters in the embedding space. To verify this hypothesis, we use the t-SNE algorithm van der Maaten and Hinton (2008) to map the label embeddings learned by the baseline and proposed models onto a two-dimensional space (Figure 3). As expected, some clusters were formed in the label embedding space learned by the proposed model, while there is no distinct cluster in the one learned by the baseline. Looking at the spaces in detail, we obtained two findings. First, in the embedding space learned by the proposed model, two distinct clusters were formed corresponding to the two span labels (i.e., B and I). Second, labels sharing the same top-layer label (shown in the same color) formed smaller clusters within the B- and I-label clusters. For example, Figure 3 shows the Product cluster, whose members are the labels sharing the top-layer label Product. These figures confirm that the embeddings of labels sharing components (span and upper-layer type labels) form clusters.
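The visualization step can be sketched with scikit-learn's t-SNE on stand-in label embeddings; in the actual experiment the inputs would be the learned rows of the label embedding matrix, not random vectors:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
# Stand-in label embeddings: two span labels x three hypothetical types.
labels = [f"{iob}-{t}" for iob in ("B", "I") for t in ("Person", "Location", "Product")]
emb = rng.normal(size=(len(labels), 16))

# Map the label embeddings to 2-D for plotting (as in Figure 3).
# perplexity must be smaller than the number of samples.
points = TSNE(n_components=2, perplexity=2, init="random",
              random_state=0).fit_transform(emb)
# points[i] is the 2-D coordinate for labels[i]; clusters of labels that
# share components would appear as nearby points in a scatter plot.
```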
4 Related work
Sequence labeling has been widely studied and applied to many tasks, such as chunking Ramshaw and Marcus (1995); Hashimoto et al. (2017), NER Ma and Hovy (2016); Chiu and Nichols (2016) and Semantic Role Labeling (SRL) Zhou and Xu (2015); He et al. (2017). In English fine-grained entity recognition, Ling and Weld (2012) created a standard fine-grained entity typing dataset with multi-class, multi-label annotations. Ringland et al. (2019) developed a dataset for nested NER. These datasets independently handle each label without considering label components. In Japanese NER, Misawa et al. (2017) combined word and character information to improve performance. Mai et al. (2018) reported that dictionary information improves the performance of fine-grained NER. Their methods do not consider label components and are orthogonal to our method.
Some existing studies take shared components (or information) across labels into account. In entity typing, Ma et al. (2016) and Shimaoka et al. (2017) proposed to calculate entity label embeddings by considering a label hierarchical structure. While their methods are limited to hierarchical structures, our method can be applied to any set of components and can be regarded as a general form of their method. In multi-label classification, Zhong et al. (2018) assumed that labels co-occurring in many instances are correlated with each other and share common features, and proposed a method that learns a feature (label embedding) space in which such co-occurring labels are close to each other. The work of Matsubayashi et al. (2009) is the closest to ours in terms of decomposing the features of labels. They regard an original label comprising a mixture of components as a set of multiple labels, and build models that exploit these components for effective learning in the SRL task.
5 Conclusion
We proposed a method that shares and learns the embeddings of label components. Through experiments on English and Japanese fine-grained NER, we demonstrated that our proposed method improves performance, especially for instances with low-frequency labels. For future work, we plan to apply our method to other tasks and datasets and investigate its effectiveness. We also plan to extend our simple label embedding calculation methods to more sophisticated ones.
This work was partially supported by JSPS KAKENHI Grant Number JP19H04162 and JP19K20351. This work was also partially supported by a Bilateral Joint Research Program between RIKEN AIP Center and Tohoku University. We would like to thank the members of Tohoku NLP Laboratory, the anonymous reviewers, and the SRW mentor Gabriel Stanovsky for their insightful comments. We also appreciate Alt inc. for providing the corpus of English extended named entity data.
- Asahara et al. (2018) Masayuki Asahara, Hiroshi Kanayama, Takaaki Tanaka, Yusuke Miyao, Sumire Uematsu, Shinsuke Mori, Yuji Matsumoto, Mai Omura, and Yugo Murawaki. 2018. Universal dependencies version 2 for Japanese. In Proceedings of LREC, pages 1824–1831.
- Chiu and Nichols (2016) Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186.
- Hashimoto et al. (2017) Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of EMNLP, pages 1923–1933.
- He et al. (2017) Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of ACL, pages 473–483.
- Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
- Lample et al. (2016) Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270.
- Ling and Weld (2012) Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of AAAI.
- Ma and Hovy (2016) Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of ACL, pages 1064–1074.
- Ma et al. (2016) Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In Proceedings of COLING, pages 171–180.
- van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.
- Mai et al. (2018) Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, and Satoshi Sekine. 2018. An empirical study on fine-grained named entity recognition. In Proceedings of COLING, pages 711–722.
- Matsubayashi et al. (2009) Yuichiroh Matsubayashi, Naoaki Okazaki, and Jun’ichi Tsujii. 2009. A comparative study on generalization of semantic roles in FrameNet. In Proceedings of ACL and AFNLP, pages 19–27.
- Misawa et al. (2017) Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura, and Tomoko Ohkuma. 2017. Character-based bidirectional LSTM-CRF with words and characters for Japanese named entity recognition. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 97–102.
- More et al. (2018) Amir More, Özlem Çetinoğlu, Çağrı Çöltekin, Nizar Habash, Benoît Sagot, Djamé Seddah, Dima Taji, and Reut Tsarfaty. 2018. CoNLL-UL: Universal morphological lattices for universal dependency parsing. In Proceedings of LREC, pages 3847–3853.
- Ramshaw and Marcus (1995) Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of Third Workshop on Very Large Corpora, pages 82–94.
- Ringland et al. (2019) Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cécile Paris, and James R. Curran. 2019. NNE: A dataset for nested named entity recognition in English newswire. In Proceedings of ACL, pages 5176–5181.
- Sekine et al. (2002) Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Proceedings of LREC, pages 1818–1824.
- Shibata et al. (2019) Tomohide Shibata, Daisuke Kawahara, and Sadao Kurohashi. 2019. Improving Japanese syntax parsing with BERT. In Natural Language Processing, pages 205–208.
- Shimaoka et al. (2017) Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of EACL, pages 1271–1280.
- Zhong et al. (2018) Yongjian Zhong, Chang Xu, Bo Du, and Lefei Zhang. 2018. Independent feature and label components for multi-label classification. In Proceedings of ICDM, pages 827–836.
- Zhou and Xu (2015) Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In Proceedings of ACL, pages 1127–1137.
Appendix A Appendices
A.1 Additional results
Performance for each hierarchical category
Table 5 shows F1 scores for each hierarchical category. The proposed model with summation (Proposed:Sum) outperformed the other models in all hierarchical categories. For the labels at the top layer, in particular, Proposed:Sum improved the F1 scores by a large margin on the Japanese dataset.
Performance for entity span boundary match
Table 6 shows F1 scores for entity span boundary match, where we regard a predicted boundary (i.e., B and I) as correct if it matches the gold annotation, regardless of the entity type label. The performance of the proposed models was comparable to that of the baseline model. This indicates that the performance difference lies not in the identification of entity spans (entity detection) but in the identification of entity types (entity typing).
A.2 Case study
Example (a): 下呂 温泉 発祥 の 地 ・・・ (The birthplace of Gero Spa …); entity: 下呂 温泉 (Gero Spa)
Example (b): ・・・ where clavaviridae derives from .
Example (c): ・・・ あお白い 日 の 光 ・・・ (… the pale sunlight …)
Table 7 shows actual examples predicted by the proposed model with summation.
In Examples (a) and (b), both models succeeded in recognizing the entity span, but only the proposed model also correctly predicted the type label. Note that the entities Location/Spa and Natural_Object/Living_Thing/Living_Thing_Other appear rarely in the training set, whereas their top-layer components Location and Natural_Object appear frequently. These examples therefore suggest that the proposed model effectively exploits the shared information of label components, especially across hierarchical layers.
We also found cases where the proposed model predicts partially correct labels even when the prediction is not entirely correct. In Example (c), あお白い (pale) should be categorized into Color/Color_Other, but the proposed model predicted the wrong label Color/Nature_Color. Interestingly, however, the proposed model correctly recognized the top layer of the type label as Color, in contrast to the completely wrong prediction of the baseline model.