Path-Based Attention Neural Model for Fine-Grained Entity Typing

10/29/2017 · Denghui Zhang et al. · Institute of Computing Technology, Chinese Academy of Sciences; University of Massachusetts Amherst

Fine-grained entity typing aims to assign types, arranged in a hierarchical structure, to entity mentions in free text. Traditional distant-supervision-based methods employ a structured data source as weak supervision and do not need hand-labeled data, but they neglect the label noise in the automatically labeled training corpus. Although recent studies use many features to prune wrong labels ahead of training, they suffer from error propagation and add much complexity. In this paper, we propose an end-to-end typing model, called the path-based attention neural model (PAN), to learn a noise-robust typing classifier by leveraging the hierarchical structure of types. Experiments demonstrate its effectiveness.

Introduction

Fine-grained entity typing aims to assign types (e.g., “person”, “politician”) to entity mentions in their local context (a single sentence), where the type set forms a tree-structured hierarchy (i.e., a type hierarchy). Recent years have seen a surge of neural models for this task; e.g., [Shimaoka et al.2016] employs an attention-based LSTM to obtain sentence representations and achieves state-of-the-art performance. However, it still suffers from noise in the training data, which remains a main challenge for this task. The training data is generated by distant supervision, which assumes that if an entity has a type in knowledge bases (KBs), then every sentence containing this entity expresses this type. This assumption inevitably attaches types that are irrelevant to the context. For example, the entity “Donald Trump” has the types “person”, “businessman”, and “politician” in KBs, so all three types are annotated for its mentions in the training corpora. But in the sentence “Donald Trump announced his candidacy for President of the US.”, only “person” and “politician” are correct types, while “businessman” cannot be deduced from the sentence and thus serves as noise (see the sketch below). To alleviate this issue, a few systems try to denoise the training data by filtering irrelevant types ahead of training. For instance, [Ren et al.2016] proposes PLE, which identifies correct types by jointly embedding mentions, context, and the type hierarchy, and then uses the cleaned data to train classifiers. However, the denoising and training processes are not unified, which may cause error propagation and adds considerable complexity.
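To make the labeling process concrete, the following is a minimal sketch of distant-supervision labeling as described above; the KB dictionary and sentences are illustrative toy data, not drawn from the actual training corpora.

```python
# Minimal sketch of distant-supervision labeling (toy data, for illustration only).
kb_types = {"Donald Trump": ["person", "businessman", "politician"]}

sentences = [
    "Donald Trump announced his candidacy for President of the US.",
    "Donald Trump opened a new hotel in Las Vegas.",
]

# Every sentence mentioning an entity inherits ALL of the entity's KB types,
# regardless of which types the sentence actually expresses.
training_data = []
for sent in sentences:
    for entity, types in kb_types.items():
        if entity in sent:
            training_data.append((sent, entity, types))

for sent, entity, types in training_data:
    print(entity, "->", types)
# The first sentence only supports "person" and "politician";
# "businessman" is labeled anyway and acts as noise.
```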

Motivated by this, we propose an end-to-end typing model, called the path-based attention neural model (PAN), which selects the sentences relevant to each type and thereby dynamically down-weights wrongly labeled sentences for each type during training. This idea is inspired by successful attempts to reduce noise in relation extraction, e.g., [Lin et al.2016]. However, these methods do not model the type hierarchy, which is a distinctive property of fine-grained entity typing. Specifically, if a sentence indicates a type, its parent type can also be deduced from the sentence. In the example above, “politician” is a subtype of “person”; since the sentence indicates that “Donald Trump” is a “politician”, “person” should also be assigned. Thus, we build path-based attention for each type by utilizing its path from its coarsest parent type (e.g., “person”, “businessman”) in the type hierarchy (a toy example of such a path is sketched below). Compared to the flat attention used in relation extraction, this enables parameter sharing among types on the same path. With the support of the hierarchical information of types, PAN reduces noise effectively and yields a better typing classifier. Experiments on two datasets validate the effectiveness of PAN.
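As a concrete illustration of such paths, here is a minimal sketch of walking from a type up to its coarsest parent in a toy hierarchy; the parent map is hypothetical and only mirrors the running example.

```python
# Toy type hierarchy: child -> parent (None marks a coarsest, top-level type).
parent = {"person": None, "politician": "person", "businessman": "person"}

def type_path(t):
    """Return the path from the coarsest parent type down to type t."""
    path = []
    while t is not None:
        path.append(t)
        t = parent[t]
    return list(reversed(path))

print(type_path("politician"))  # ['person', 'politician']
print(type_path("person"))      # ['person']
```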

Path-Based Attention Neural Model

The architecture of PAN is illustrated in Figure 1. Suppose there are $n$ sentences containing entity $e$, i.e., $S_e = \{s_1, s_2, \ldots, s_n\}$, and $T_e$ is the set of types automatically labeled based on KBs. PAN first employs an LSTM to generate sentence representations following [Shimaoka et al.2016], where $\mathbf{x}_i$ is the semantic representation of $s_i$, $i = 1, \ldots, n$. We then build path-based attention over the sentences for each type $t \in T_e$, which is expected to focus on the sentences relevant to type $t$. The representation of the sentence set for type $t$, denoted by $\mathbf{s}_t$, is calculated as a weighted sum of the sentence vectors. Finally, we obtain the predicted types through a classification layer.

Figure 1: The architecture of PAN for a given entity $e$ and type $t$.

More precisely, given $S_e$, an attention is learned to score how well sentence $s_i$ matches type $t$, i.e.,

$$\alpha_i = \frac{\exp(g_i)}{\sum_{k} \exp(g_k)}, \qquad g_i = \mathbf{x}_i \mathbf{A}\, \mathbf{p}_t,$$

where $\mathbf{A}$ is a weighted diagonal matrix and $\mathbf{p}_t$ is the representation of the path for type $t$. Specifically, for each type we define one path as the sequence of types starting from its coarsest parent type and ending with the type itself. More formally, for type $t$, $\mathrm{path}(t) = (t_1, t_2, \ldots, t_m)$, where $t_1$ is its coarsest parent type, $t_{j+1}$ is a subtype of $t_j$, and $t_m = t$. For example, for the type “politician”, its path is (“person”, “politician”). We represent the path as a semantic composition of all the types on the path, i.e., $\mathbf{p}_t = \mathbf{t}_1 \circ \mathbf{t}_2 \circ \cdots \circ \mathbf{t}_m$, where $\mathbf{t}_j$ is the representation of type $t_j$, a parameter to learn, and $\circ$ is a composition operator. In this paper, we consider two operators: (1) Addition (PAN-A), where $\mathbf{p}_t$ equals the sum of the type vectors; (2) Multiplication (PAN-M), where $\mathbf{p}_t$ equals the cumulative (element-wise) product of the type vectors. In this way, path-based attention enables the model to share parameters between types on the same path. For example, the attention learned for “person” can assist the learning of the attention for “politician”. This eases learning especially for infrequent subtypes, which suffer from a dearth of training data, since the attention for these subtypes can draw support from the attention for their parent types.
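A minimal numpy sketch of the two composition operators; the type embeddings are model parameters in PAN but are randomly initialized here, and the embedding size is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # embedding size, illustrative

# Learned type embeddings (model parameters); random here for illustration.
type_emb = {t: rng.normal(size=dim) for t in ["person", "politician"]}

path = ["person", "politician"]  # path for the type "politician"

# PAN-A: path representation is the sum of the type vectors on the path.
p_add = np.sum([type_emb[t] for t in path], axis=0)

# PAN-M: path representation is the cumulative element-wise product.
p_mul = np.prod([type_emb[t] for t in path], axis=0)
```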

Then, the representation of the sentence set for type $t$, i.e., $\mathbf{s}_t$, is calculated as the weighted sum of the sentence vectors,

$$\mathbf{s}_t = \sum_{i} \alpha_i\, \mathbf{x}_i.$$
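The following sketch puts the scoring, softmax, and weighted sum together, assuming sentence vectors from the LSTM encoder and a path representation as above; all tensors are random stand-ins and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 3, 4                       # 3 sentences, embedding size 4 (illustrative)
X = rng.normal(size=(n, dim))       # sentence representations x_i from the LSTM
p_t = rng.normal(size=dim)          # path representation of type t
A = np.diag(rng.normal(size=dim))   # weighted diagonal matrix (learned parameter)

scores = X @ A @ p_t                              # how well each sentence matches type t
alpha = np.exp(scores) / np.exp(scores).sum()     # softmax over sentences

s_t = alpha @ X   # weighted sum: representation of the sentence set for type t
```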

Since one mention can have multiple types, we employ a classification layer consisting of $K$ logistic classifiers, where $K$ is the total number of types. Each classifier outputs the probability of its respective type, i.e.,

$$P(t \mid e) = \sigma(\mathbf{w}_t^{\top} \mathbf{s}_t + b_t),$$

where $\mathbf{w}_t$ and $b_t$ are the logistic regression parameters and $\sigma$ is the sigmoid function. To optimize the model, a multi-type loss is defined according to the cross entropy as follows,

$$L = -\sum_{e} \sum_{t} \Big[ y_{e,t} \log P(t \mid e) + \big(1 - y_{e,t}\big) \log\big(1 - P(t \mid e)\big) \Big],$$

where $y_{e,t}$ is an indicator of whether $t$ is an annotated type of entity $e$, i.e., $y_{e,t} = 1$ if $t \in T_e$ and $y_{e,t} = 0$ otherwise.
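A minimal sketch of the per-type logistic classifiers and the multi-type cross-entropy loss, continuing the notation above; the number of types, the parameters, and the label vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, K = 4, 5                        # embedding size and total number of types (illustrative)

S = rng.normal(size=(K, dim))        # s_t for each type t, from the attention layer
W = rng.normal(size=(K, dim))        # logistic regression weights, one vector per type
b = rng.normal(size=K)               # logistic regression biases
y = np.array([1, 0, 1, 0, 0])        # y_{e,t}: 1 if t is an annotated type of the entity

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Probability of each type for the entity mention.
prob = sigmoid(np.sum(S * W, axis=1) + b)

# Multi-type (binary) cross-entropy loss summed over types.
loss = -np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
```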

Experiments and Conclusion

Experiments are carried out on two widely used datasets, OntoNotes and FIGER(GOLD); the training data of OntoNotes is noisier than that of FIGER(GOLD) [Shimaoka et al.2016]. The statistics of the datasets are listed in Table 1.

Datasets #Type #Layer #Context #Train #Test
OntoNotes 89 3 143K 223K 8,963
FIGER(GOLD) 113 2 1.51M 2.69M 563
Table 1: Statistics of the datasets.

We employ Strict Accuracy (Acc), Loose Macro F1 (Ma-F1), and Loose Micro F1 (Mi-F1) as evaluation measures, following [Shimaoka et al.2016]. Specifically, “Strict” requires the predicted type set of a mention to exactly match the gold set, while the “Loose” measures credit partial overlaps; “Macro” averages precision and recall over all mentions before computing F1, while “Micro” pools the type counts over all mentions (a minimal sketch of these measures is given below). The baselines are chosen from two groups: (1) methods that predict types in a unified process using the raw noisy data, i.e., TLSTM [Shimaoka et al.2016] and the other methods shown in Table 2; (2) methods that predict types using data cleaned by denoising beforehand, i.e., H_PLE and F_PLE [Ren et al.2016]. To demonstrate the benefit of path-based attention, we also directly apply the attention neural model from relation extraction [Lin et al.2016] without using the type hierarchy (AN). The results of the baselines are the best results reported in their papers.
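For reference, a minimal sketch of the three measures under the standard fine-grained typing evaluation; the gold and predicted type sets are toy data, and this is an illustration of the definitions rather than the exact evaluation script.

```python
# Toy gold / predicted type sets per mention, for illustration only.
gold = [{"person", "politician"}, {"organization"}]
pred = [{"person", "politician"}, {"organization", "company"}]

def f1(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

# Strict accuracy: the predicted set must exactly match the gold set.
strict_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)

# Loose macro: average per-mention precision and recall, then take F1.
ma_p = sum(len(g & p) / len(p) for g, p in zip(gold, pred)) / len(gold)
ma_r = sum(len(g & p) / len(g) for g, p in zip(gold, pred)) / len(gold)
ma_f1 = f1(ma_p, ma_r)

# Loose micro: pool counts over all mentions before computing precision/recall.
mi_p = sum(len(g & p) for g, p in zip(gold, pred)) / sum(len(p) for p in pred)
mi_r = sum(len(g & p) for g, p in zip(gold, pred)) / sum(len(g) for g in gold)
mi_f1 = f1(mi_p, mi_r)
```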

Metric OntoNotes FIGER(GOLD)
Acc Ma-F1 Mi-F1 Acc Ma-F1 Mi-F1
HYENA 24.9 49.7 44.6 28.8 52.8 50.6
FIGER 36.9 57.8 51.6 47.4 69.2 65.5
TLSTM 50.8 70.1 64.9 59.7 79.0 75.4
AN 52.3 71.7 65.2 60.0 79.5 75.9
PAN-A 54.9 72.8 66.5 60.2 79.9 76.2
PAN-M 53.0 71.9 65.3 60.0 79.4 76.0
Table 2: Performance on FIGER(GOLD) and OntoNotes

We can observe that: (1) when using the same raw noisy data, PAN outperforms all methods on both datasets, which demonstrates the anti-noise ability of PAN; (2) PAN performs better than AN, since the attention learned in PAN exploits the hierarchical structure to enable parameter sharing; (3) the improvements on OntoNotes are larger than on FIGER(GOLD), because OntoNotes is noisier and its hierarchical structure is more complex with more layers, which further shows that path-based attention handles the type hierarchy well and confirms the superiority of PAN in reducing noise; (4) PAN-A achieves better performance than PAN-M, which suggests that the addition operator better captures the type hierarchy.

Metric OntoNotes FIGER(GOLD)
Acc Ma-F1 Mi-F1 Acc Ma-F1 Mi-F1
H_PLE 54.6 69.2 62.5 54.3 69.5 68.1
F_PLE 57.2 71.5 66.1 59.9 76.3 74.9
PAN-A 54.9 72.8 66.5 60.2 79.9 76.2
Table 3: Performance on FIGER(GOLD) and OntoNotes

As shown in Table 3, PAN using the raw noisy data outperforms H_PLE and F_PLE, which use denoised data, on Ma-F1 and Mi-F1. It is reasonable that F_PLE achieves a higher Acc on OntoNotes, since its noise is reduced before training; however, it needs to learn additional parameters for mentions, context, and types, whereas PAN only needs to learn the attention parameters. Thus, PAN reduces noise more efficiently.

In conclusion, PAN reduces noise effectively through an end-to-end process and achieves better typing performance on datasets with more noise.

References

  • [Lin et al.2016] Lin, Y.; Shen, S.; Liu, Z.; Luan, H.; and Sun, M. 2016. Neural relation extraction with selective attention over instances. In ACL.
  • [Ren et al.2016] Ren, X.; He, W.; Qu, M.; Voss, C. R.; Ji, H.; and Han, J. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In KDD.
  • [Shimaoka et al.2016] Shimaoka, S.; Stenetorp, P.; Inui, K.; and Riedel, S. 2016. Neural architectures for fine-grained entity type classification. In EACL.