Effective Attention Modeling for Neural Relation Extraction

12/09/2019 ∙ by Tapas Nayak, et al. ∙ National University of Singapore

Relation extraction is the task of determining the relation between two entities in a sentence. Distantly-supervised models are popular for this task. However, sentences can be long, and the two entities can be located far from each other in a sentence. The pieces of evidence supporting the presence of a relation between two entities may not be very direct, since the entities may be connected via indirect links such as a third entity or co-reference. Relation extraction in such scenarios becomes more challenging, as we need to capture the long-distance interactions among the entities and other words in the sentence. Also, the words in a sentence do not contribute equally to identifying the relation between the two entities. To address these issues, we propose a novel and effective attention model which incorporates syntactic information of the sentence and a multi-factor attention mechanism. Experiments on the New York Times corpus show that our proposed model outperforms prior state-of-the-art models.



1 Introduction

Relation extraction from unstructured text is an important task for building knowledge bases (KB) automatically. banko2007open used open information extraction (Open IE) to extract relation triples from sentences, where verbs are considered as the relation, whereas supervised information extraction systems extract a set of pre-defined relations from text. mintz2009distant, riedel2010modeling, and hoffmann2011knowledge proposed distant supervision to generate the training data for sentence-level relation extraction, where relation tuples (two entities and the relation between them) from a knowledge base such as Freebase bollacker2008freebase were mapped to free text (Wikipedia articles or New York Times articles). The idea is that if a sentence contains both entities of a tuple, it is chosen as a training sentence of that tuple. Although this process can generate some noisy training instances, it yields a significant amount of training data that can be used to build supervised models for this task.

mintz2009distant, riedel2010modeling, and hoffmann2011knowledge proposed feature-based learning models and used entity tokens, their nearby tokens, their part-of-speech tags, and other linguistic features to train their models. Recently, many neural network-based models have been proposed to avoid feature engineering. zeng2014relation and zeng2015distant used convolutional neural networks (CNN) with max-pooling to find the relation between two given entities. Though these models have been shown to perform reasonably well on distantly supervised data, they sometimes fail to find the relation when sentences are long and entities are located far from each other. CNN models with max-pooling have limitations in capturing the semantic similarity of words with the given entities, and they also fail to capture long-distance dependencies among the words and entities such as co-reference. In addition, not all words in a sentence are equally important in finding the relation, and this issue is more prominent in long sentences. Prior CNN-based models have limitations in identifying the multiple important factors to focus on in sentence-level relation extraction.

To address this issue, we propose a novel multi-factor attention model (the code and data of this paper can be found at https://github.com/nusnlp/MFA4RE) that focuses on the syntactic structure of a sentence for relation extraction. We use a dependency parser to obtain the syntactic structure of a sentence. We use a linear form of attention to measure the semantic similarity of words with the given entities and combine it with the dependency distance of words from the given entities to measure their influence in identifying the relation. Also, a single attention layer may not be able to capture all pieces of evidence for identifying the relation due to the normalization of attention scores. Thus we use multi-factor attention in the proposed model. Experiments on the New York Times (NYT) corpus show that the proposed model outperforms prior work in terms of F1 scores on sentence-level relation extraction.

2 Task Description

Sentence-level relation extraction is defined as follows: given a sentence $S$ and two entities $E_1$ and $E_2$ marked in the sentence, find the relation $r(E_1, E_2)$ between these two entities in $S$ from a pre-defined set of relations $R \cup \{\text{None}\}$. None indicates that none of the relations in $R$ holds between the two marked entities in the sentence. The relation between the entities is argument order-specific, i.e., $r(E_1, E_2)$ and $r(E_2, E_1)$ are not the same. The input to the system is a sentence $S$ and two entities $E_1$ and $E_2$, and the output is the relation $r$.
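To make the setting concrete, a minimal sketch of one test instance follows; the field names and the example triple are hypothetical illustrations, not taken from the dataset:

```python
# One instance pairs a sentence with two marked entity spans, and a model
# must map it to one label from the fixed inventory R plus "None".
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RelationInstance:
    tokens: Tuple[str, ...]      # the sentence, tokenized
    entity1: Tuple[int, int]     # [start, end) token span of E1
    entity2: Tuple[int, int]     # [start, end) token span of E2
    relation: str                # gold label, e.g. a Freebase relation or "None"

inst = RelationInstance(
    tokens=("Barack", "Obama", "was", "born", "in", "Honolulu", "."),
    entity1=(0, 2), entity2=(5, 6),
    relation="/people/person/place_of_birth",
)
```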

3 Model Description

We use four types of embedding vectors in our model: (1) a word embedding vector $\mathbf{w}_i$; (2) an entity token indicator embedding vector $\mathbf{z}_i$, which indicates whether a word belongs to entity $E_1$, entity $E_2$, or neither; (3) a positional embedding vector $\mathbf{u}^1_i$, which represents the linear distance of a word from the start token of entity $E_1$; and (4) another positional embedding vector $\mathbf{u}^2_i$, which represents the linear distance of a word from the start token of entity $E_2$.

We use a bi-directional long short-term memory (Bi-LSTM) hochreiter1997long layer to capture the interaction among words in a sentence $S = w_1, w_2, \ldots, w_n$, where $n$ is the sentence length. The input to this layer at the $i$th step is the concatenated vector $\mathbf{x}_i = [\mathbf{w}_i ; \mathbf{z}_i]$ of the word embedding vector and the entity token indicator embedding vector. $\overrightarrow{\mathbf{h}}_i$ and $\overleftarrow{\mathbf{h}}_i$ are the outputs at the $i$th step of the forward LSTM and backward LSTM respectively. We concatenate them to obtain the $i$th Bi-LSTM output $\mathbf{h}_i = [\overrightarrow{\mathbf{h}}_i ; \overleftarrow{\mathbf{h}}_i]$.
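As a rough illustration of this input layer and encoder, a PyTorch sketch follows; every dimension, vocabulary size, and variable name here is an assumption for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

D_W, D_Z = 50, 10                      # assumed word / indicator embedding dims
word_emb = nn.Embedding(30000, D_W)    # vocabulary size is hypothetical
ind_emb = nn.Embedding(3, D_Z)         # 0: outside, 1: in E1, 2: in E2
bilstm = nn.LSTM(D_W + D_Z, hidden_size=D_W + D_Z,
                 bidirectional=True, batch_first=True)

word_ids = torch.randint(0, 30000, (1, 7))        # one 7-token sentence
ind_ids = torch.tensor([[1, 1, 0, 0, 0, 2, 0]])   # E1 = tokens 0-1, E2 = token 5
x = torch.cat([word_emb(word_ids), ind_emb(ind_ids)], dim=-1)
h, _ = bilstm(x)   # h_i = [forward_i ; backward_i], shape (1, 7, 2*(D_W+D_Z))
```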

3.1 Global Feature Extraction

We use a convolutional neural network (CNN) to extract the sentence-level global features for relation extraction. We concatenate the positional embeddings $\mathbf{u}^1_i$ and $\mathbf{u}^2_i$ of words with the hidden representations of the Bi-LSTM layer and apply the convolution operation with max-pooling on the concatenated vectors to extract the global feature vector. $\mathbf{q}_i = [\mathbf{h}_i ; \mathbf{u}^1_i ; \mathbf{u}^2_i]$ is the concatenated vector for the $i$th word and $\mathbf{f}$ is a convolutional filter vector of dimension $k d_q$, where $k$ is the filter width and $d_q$ is the dimension of $\mathbf{q}_i$. The index $i$ moves from $1$ to $n$ and produces a set of scalar values $c_i$. The max-pooling operation chooses the maximum from these values as a feature. With $d_f$ filters, we get a global feature vector $\mathbf{v}_g \in \mathbb{R}^{d_f}$.
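A sketch of this global feature extractor under the same caveats (all sizes assumed, random tensors standing in for the real embeddings):

```python
import torch
import torch.nn as nn

n, d_h, d_u, k, d_f = 7, 120, 5, 3, 230   # assumed sizes, not the paper's
h = torch.randn(1, n, d_h)                # Bi-LSTM outputs from the sketch above
u1 = torch.randn(1, n, d_u)               # positional embeddings w.r.t. E1
u2 = torch.randn(1, n, d_u)               # positional embeddings w.r.t. E2

q = torch.cat([h, u1, u2], dim=-1)        # q_i = [h_i ; u1_i ; u2_i]
conv = nn.Conv1d(d_h + 2 * d_u, d_f, kernel_size=k, padding=k // 2)
c = torch.tanh(conv(q.transpose(1, 2)))   # one score per filter per position
v_g = c.max(dim=2).values                 # max over time: global feature (1, d_f)
```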

Figure 1: Architecture of our attention model. We have not shown the CNN-based global feature extraction here. FFN = feed-forward network.

3.2 Attention Modeling

Figure 1 shows the architecture of our attention model. We use a linear form of attention to find the semantically meaningful words in a sentence with respect to the entities which provide the pieces of evidence for the relation between them. Our attention mechanism uses the entities as attention queries and their vector representation is very important for our model. Named entities mostly consist of multiple tokens and many of them may not be present in the training data or their frequency may be low. The nearby words of an entity can give significant information about the entity. Thus we use the tokens of an entity and its nearby tokens to obtain its vector representation. We use the convolution operation with max-pooling in the context of an entity to get its vector representation.

$\mathbf{f}_e$ is a convolutional filter vector of size $k d_x$, where $k$ is the filter width and $\mathbf{x}_i = [\mathbf{w}_i ; \mathbf{z}_i]$ is the concatenated vector of the word embedding vector ($\mathbf{w}_i$) and the entity token indicator embedding vector ($\mathbf{z}_i$). $b$ and $e$ are the start and end indices of the sequence of words comprising an entity and its neighboring context in the sentence, where $b \le e$. The index $i$ moves from $b$ to $e$ and produces a set of scalar values. The max-pooling operation chooses the maximum from these values as a feature. With $d_e$ filters, we get the entity vector. We do this for both entities and get $\mathbf{v}_{e_1}$ and $\mathbf{v}_{e_2}$ as their vector representations. We adopt a simple linear function as follows to measure the semantic similarity of words with the given entities:

$s^1_i = \mathbf{h}_i^\top \mathbf{W}_1 \mathbf{v}_{e_1}, \quad s^2_i = \mathbf{h}_i^\top \mathbf{W}_2 \mathbf{v}_{e_2}$

where $\mathbf{h}_i$ is the Bi-LSTM hidden representation of the $i$th word, $\mathbf{W}_1$ and $\mathbf{W}_2$ are trainable weight matrices, and $s^1_i$ and $s^2_i$ represent the semantic similarity scores of the $i$th word with the two given entities.
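The bilinear scoring step can be sketched as follows, with random tensors standing in for the entity vectors and assumed dimensions:

```python
import torch

n, d_h, d_e = 7, 120, 230                     # assumed sizes
h = torch.randn(1, n, d_h)                    # Bi-LSTM hidden states
v_e1 = torch.randn(1, d_e)                    # CNN-pooled vector of entity 1
v_e2 = torch.randn(1, d_e)                    # CNN-pooled vector of entity 2
W1 = torch.randn(d_h, d_e, requires_grad=True)
W2 = torch.randn(d_h, d_e, requires_grad=True)

s1 = torch.einsum('bnh,hd,bd->bn', h, W1, v_e1)  # s1_i = h_i^T W1 v_e1
s2 = torch.einsum('bnh,hd,bd->bn', h, W2, v_e2)  # s2_i = h_i^T W2 v_e2
```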

Not all words in a sentence are equally important in finding the relation between the two entities. The words which are closer to the entities are generally more important. To address this issue, we propose to incorporate the syntactic structure of a sentence in our attention mechanism. The syntactic structure is obtained from the dependency parse tree of the sentence. Words which are closer to the entities in the dependency parse tree are more relevant for finding the relation. In our model, we define the dependency distance of every word from the head token (last token) of an entity as the number of edges along the dependency path (see Figure 2 for an example). We use a distance window size $ws$: words whose dependency distance is within this window receive attention, and the other words are ignored. The details of our attention mechanism follow.

$\tilde{a}^1_i$ and $\tilde{a}^2_i$ are the un-normalized attention scores, obtained by combining the semantic similarity scores $s^1_i$ and $s^2_i$ with the dependency distances $d^1_i$ and $d^2_i$ of the $i$th word from the two entities, and $a^1_i$ and $a^2_i$ are the normalized attention scores for the $i$th word with respect to entity 1 and entity 2 respectively. We mask those words whose average dependency distance from the two entities is larger than $ws$. In this way, we use the semantic meaning of the words and their dependency distance from the two entities together in our attention mechanism. The attention feature vectors $\mathbf{v}^1_{att}$ and $\mathbf{v}^2_{att}$ with respect to the two entities are determined as follows:

$\mathbf{v}^1_{att} = \sum_{i=1}^{n} a^1_i \mathbf{h}_i, \quad \mathbf{v}^2_{att} = \sum_{i=1}^{n} a^2_i \mathbf{h}_i$
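A hedged sketch of this distance-aware attention follows. The paper's exact scaling and normalization are not reproduced here, and Section 5.2 indicates the normalizer is not softmax, so the division by distance and the softmax below are stand-ins chosen for illustration only:

```python
import torch

n = 7
s1, s2 = torch.randn(1, n), torch.randn(1, n)      # semantic similarity scores
d1 = torch.tensor([[1., 2., 3., 2., 1., 1., 4.]])  # toy dependency distances
d2 = torch.tensor([[3., 2., 1., 2., 3., 1., 4.]])
h = torch.randn(1, n, 120)
WS = 3.0                                           # assumed window size

mask = ((d1 + d2) / 2 <= WS)                       # keep words near both entities
a1 = torch.softmax((s1 / d1).masked_fill(~mask, float('-inf')), dim=1)
a2 = torch.softmax((s2 / d2).masked_fill(~mask, float('-inf')), dim=1)
v1_att = torch.einsum('bn,bnh->bh', a1, h)         # attentive feature w.r.t. E1
v2_att = torch.einsum('bn,bnh->bh', a2, h)         # attentive feature w.r.t. E2
```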

Figure 2: An example dependency tree. The two numbers indicate the distances of the word from the head tokens of the two entities respectively along the dependency tree path.

3.3 Multi-Factor Attention

Two entities in a sentence, when located far from each other, can be linked via more than one co-reference chain or more than one important word. Due to the normalization of the attention scores as described above, a single attention layer cannot capture all relevant information needed to find the relation between two entities. Thus we use a multi-factor attention mechanism, where the number of factors is a hyper-parameter, to gather all relevant information for identifying the relation. We replace each attention matrix $\mathbf{W}$ with an attention tensor composed of $m$ matrices, where $m$ is the factor count. This gives us $m$ attention vectors with respect to each entity. We concatenate all the feature vectors obtained using these attention vectors to get the multi-attentive feature vector $\mathbf{v}_{ma}$.
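A sketch of the multi-factor step, replacing the single attention matrix with a stack of $m$ matrices; the sizes and the softmax normalizer are assumptions, as above:

```python
import torch

m, n, d_h, d_e = 4, 7, 120, 230         # m = factor count (hyper-parameter)
h = torch.randn(1, n, d_h)              # Bi-LSTM hidden states
v_e1 = torch.randn(1, d_e)              # entity-1 vector
d1 = torch.rand(1, n) + 1.0             # toy dependency distances (>= 1)
W1 = torch.randn(m, d_h, d_e) * 0.01    # tensor of m attention matrices

s1 = torch.einsum('bnh,mhd,bd->bmn', h, W1, v_e1)   # m score vectors for E1
a1 = torch.softmax(s1 / d1.unsqueeze(1), dim=2)     # m attention distributions
v1 = torch.einsum('bmn,bnh->bmh', a1, h)            # m attentive vectors
v_ma_1 = v1.reshape(1, m * d_h)                     # concatenated, for entity 1
```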

3.4 Relation Extraction

We concatenate $\mathbf{v}_g$, $\mathbf{v}_{e_1}$, $\mathbf{v}_{e_2}$, and $\mathbf{v}_{ma}$, and this concatenated feature vector $\mathbf{v}$ is given to a feed-forward layer with softmax activation to predict the normalized probabilities of the relation labels:

$\mathbf{r} = \mathrm{softmax}(\mathbf{W}_r \mathbf{v} + \mathbf{b}_r)$

where $\mathbf{W}_r$ is the weight matrix and $\mathbf{b}_r$ is the bias vector of the feed-forward layer for relation extraction, and $\mathbf{r}$ is the vector of normalized probabilities of relation labels.

3.5 Loss Function

We calculate the loss over each mini-batch of size $B$. We use the following negative log-likelihood as our objective function for relation extraction:

$\mathcal{L} = -\frac{1}{B} \sum_{b=1}^{B} \log p(r_b \mid S_b, E_1^b, E_2^b, \theta)$

where $p(r_b \mid S_b, E_1^b, E_2^b, \theta)$ is the conditional probability of the true relation $r_b$ when the sentence $S_b$, the two entities $E_1^b$ and $E_2^b$, and the model parameters $\theta$ are given.
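The classification layer and loss can be sketched together; PyTorch's cross_entropy computes exactly this per-instance negative log-likelihood averaged over the mini-batch (the feature sizes below are assumed):

```python
import torch
import torch.nn as nn

d_g, d_e, d_ma, n_rel = 230, 230, 960, 53     # assumed feature sizes; 53 labels
v_g = torch.randn(1, d_g)                     # global CNN feature
v_e1, v_e2 = torch.randn(1, d_e), torch.randn(1, d_e)   # entity vectors
v_ma = torch.randn(1, d_ma)                   # multi-attentive feature

clf = nn.Linear(d_g + 2 * d_e + d_ma, n_rel)  # feed-forward layer W_r, b_r
logits = clf(torch.cat([v_g, v_e1, v_e2, v_ma], dim=-1))
gold = torch.tensor([7])                      # index of the true relation
loss = nn.functional.cross_entropy(logits, gold)  # = -log softmax(logits)[gold]
loss.backward()
```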

4 Experiments

4.1 Datasets

We use the New York Times (NYT) corpus riedel2010modeling in our experiments. There are two versions of this corpus: (1) The original NYT corpus created by riedel2010modeling, which has 52 valid relations and a None relation. We name this dataset NYT10. The training dataset has 455,412 instances; 330,776 of the instances belong to the None relation and the remaining 124,636 instances have valid relations. The test dataset has 172,415 instances; 165,974 of the instances belong to the None relation and the remaining 6,441 instances have valid relations. Both the training and test datasets have been created by aligning Freebase bollacker2008freebase tuples to New York Times articles. (2) Another version created by hoffmann2011knowledge, which has 24 valid relations and a None relation. We name this dataset NYT11. The corresponding statistics for NYT11 are given in Table 1. The training dataset is created by aligning Freebase tuples to NYT articles, but the test dataset is manually annotated.

                                          NYT10     NYT11
# relations                               53        25
Train
  # instances                             455,412   335,843
  # valid relation tuples                 124,636   100,671
  # None relation tuples                  330,776   235,172
  avg. sentence length                    41.1      37.2
  avg. distance between entity pairs      12.8      12.2
Test
  # instances                             172,415   1,450
  # valid relation tuples                 6,441     520
  # None relation tuples                  165,974   930
  avg. sentence length                    41.7      39.7
  avg. distance between entity pairs      13.1      11.0
Table 1: Statistics of the NYT10 and NYT11 datasets.

4.2 Evaluation Metrics

We use precision, recall, and F1 scores to evaluate the performance of models on relation extraction after removing the None labels. We use a confidence threshold to decide if the relation of a test instance belongs to the set of relations $R$ or is None. If the network predicts None for a test instance, then it is treated as None. But if the network predicts a relation from the set $R$ and the corresponding softmax score is below the confidence threshold, then the final class is changed to None. This confidence threshold is the one that achieves the highest F1 score on the validation dataset. We also include the precision-recall curves for all the models.
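A small sketch of this thresholded evaluation; the function and variable names are ours, not the authors':

```python
def prf1(golds, preds, scores, threshold, none_label="None"):
    # Predictions whose softmax score falls below the tuned threshold
    # are mapped to None; P/R/F1 count non-None gold labels only.
    final = [none_label if (p != none_label and s < threshold) else p
             for p, s in zip(preds, scores)]
    tp = sum(1 for g, p in zip(golds, final) if g == p != none_label)
    pred_pos = sum(1 for p in final if p != none_label)
    gold_pos = sum(1 for g in golds if g != none_label)
    prec = tp / pred_pos if pred_pos else 0.0
    rec = tp / gold_pos if gold_pos else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

print(prf1(["r1", "None", "r2"], ["r1", "r2", "r2"], [0.9, 0.3, 0.8], 0.5))
```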

4.3 Parameter Settings

We run word2vec mikolov2013efficient on the NYT corpus to obtain the initial word embeddings with dimension $d_w$ and update the embeddings during training. We set the dimension of the entity token indicator embedding vector to $d_z$ and of the positional embedding vectors to $d_u$. The hidden layer dimension of the forward and backward LSTM is $d_w + d_z$, which is the same as the dimension of the input word representation vector, so the dimension of the Bi-LSTM output is $2(d_w + d_z)$. We use $d_f$ filters of width $k$ for feature extraction whenever we apply the convolution operation. We use dropout in our network, and in convolutional layers, we use the tanh activation function. We use the sequence of tokens from a fixed number of words before the entity to the same number of words after the entity as its context. We train our models with a mini-batch size of $B$ and optimize the network parameters using the Adagrad optimizer duchi2011adaptive. We use the dependency parser from spaCy (https://spacy.io/) to obtain the dependency distances of the words from the entities and use $ws$ as the window size for dependency distance-based attention.
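The dependency distance itself can be computed by breadth-first search over the undirected parse graph that spaCy produces; a sketch, assuming the small English model is installed:

```python
from collections import deque
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this model has been downloaded

def dependency_distances(doc, root_index):
    # Treat the parse as an undirected graph over tokens and BFS from the
    # entity's head token, counting edges along the way.
    dist = {root_index: 0}
    queue = deque([root_index])
    while queue:
        i = queue.popleft()
        tok = doc[i]
        for nb in list(tok.children) + [tok.head]:   # undirected neighbors
            if nb.i not in dist:
                dist[nb.i] = dist[i] + 1
                queue.append(nb.i)
    return [dist.get(i) for i in range(len(doc))]

doc = nlp("Barack Obama was born in Honolulu .")
print(dependency_distances(doc, root_index=1))       # distances from "Obama"
```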

                       NYT10                    NYT11
Model                  Prec.  Rec.   F1         Prec.  Rec.   F1
CNN zeng2014relation   0.413  0.591  0.486      0.444  0.625  0.519
PCNN zeng2015distant   0.380  0.642  0.477      0.446  0.679  0.538†
EA huang2016attention  0.443  0.638  0.523†     0.419  0.677  0.517
BGWA jat2018attention  0.364  0.632  0.462      0.417  0.692  0.521
BiLSTM-CNN             0.490  0.507  0.498      0.473  0.606  0.531
Our model              0.541  0.595  0.566*     0.507  0.652  0.571*
Table 2: Performance comparison of different models on the two datasets. * denotes a statistically significant improvement over the previous best state-of-the-art model under the bootstrap paired t-test. † denotes the previous best state-of-the-art model on each dataset.

4.4 Comparison to Prior Work

We compare our proposed model with the following state-of-the-art models.

(1) CNN zeng2014relation: Words are represented using word embeddings and two positional embeddings. A convolutional neural network (CNN) with max-pooling is applied to extract the sentence-level feature vector. This feature vector is passed to a feed-forward layer with softmax to classify the relation.

(2) PCNN zeng2015distant: Words are represented using word embeddings and two positional embeddings. A convolutional neural network (CNN) is applied to the word representations. Rather than applying a global max-pooling operation on the entire sentence, three max-pooling operations are applied on three segments of the sentence based on the location of the two entities (hence the name Piecewise Convolutional Neural Network (PCNN)). The first max-pooling operation is applied from the beginning of the sentence to the end of the entity appearing first in the sentence. The second max-pooling operation is applied from the beginning of the entity appearing first in the sentence to the end of the entity appearing second in the sentence. The third max-pooling operation is applied from the beginning of the entity appearing second in the sentence to the end of the sentence. Max-pooled features are concatenated and passed to a feed-forward layer with softmax to determine the relation.
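A sketch of the piecewise pooling step follows. Note that the paper describes overlapping segments anchored at the entity boundaries; the disjoint three-way split below is a common simplification for illustration, not necessarily the exact PCNN formulation:

```python
import torch

def piecewise_maxpool(conv_out, first_end, second_end):
    # conv_out: (channels, n) convolution outputs for one sentence.
    # first_end / second_end: boundaries derived from the entity appearing
    # first and the entity appearing second in the sentence.
    segments = [conv_out[:, :first_end],
                conv_out[:, first_end:second_end],
                conv_out[:, second_end:]]
    pooled = [s.max(dim=1).values for s in segments if s.size(1) > 0]
    return torch.cat(pooled)          # up to 3 * channels features

feat = piecewise_maxpool(torch.randn(230, 20), first_end=5, second_end=12)
print(feat.shape)                     # torch.Size([690])
```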

(3) Entity Attention (EA) huang2016attention: This is the combination of a CNN model and an attention model. Words are represented using word embeddings and two positional embeddings. A CNN with max-pooling is used to extract global features. Attention is applied with respect to the two entities separately. The vector representation of every word is concatenated with the word embedding of the last token of the entity. This concatenated representation is passed to a feed-forward layer with tanh activation and then another feed-forward layer to get a scalar attention score for every word. The original word representations are averaged based on the attention scores to get the attentive feature vectors. A CNN-extracted feature vector and two attentive feature vectors with respect to the two entities are concatenated and passed to a feed-forward layer with softmax to determine the relation.

(4) BiGRU Word Attention (BGWA) jat2018attention: Words are represented using word embeddings and two positional embeddings. They are passed to a bidirectional gated recurrent unit (BiGRU) cho2014properties layer. Hidden vectors of the BiGRU layer are passed to a bilinear operator (a combination of two feed-forward layers) to compute a scalar attention score for each word. Hidden vectors of the BiGRU layer are multiplied by their corresponding attention scores. A piecewise CNN is applied on the weighted hidden vectors to obtain the feature vector. This feature vector is passed to a feed-forward layer with softmax to determine the relation.

(5) BiLSTM-CNN: This is our own baseline. Words are represented using word embeddings and entity indicator embeddings. They are passed to a bidirectional LSTM. Hidden representations of the LSTMs are concatenated with two positional embeddings. We use CNN and max-pooling on the concatenated representations to extract the feature vector. Also, we use CNN and max-pooling on the word embeddings and entity indicator embeddings of the context words of entities to obtain entity-specific features. These features are concatenated and passed to a feed-forward layer to determine the relation.

Figure 3: Precision-Recall curve for the NYT10 dataset.
Figure 4: Precision-Recall curve for the NYT11 dataset.

4.5 Experimental Results

We present the results of our final model on the relation extraction task on the two datasets in Table 2. Our model outperforms the previous state-of-the-art models on both datasets in terms of F1 score. On the NYT10 dataset, it achieves a 4.3-point higher F1 score compared to the previous best state-of-the-art model, EA. Similarly, it achieves a 3.3-point higher F1 score compared to the previous best state-of-the-art model, PCNN, on the NYT11 dataset. Our model improves the precision scores on both datasets while maintaining good recall scores. This helps to build a cleaner knowledge base with fewer false positives. We also show the precision-recall curves for the NYT10 and NYT11 datasets in Figures 3 and 4 respectively. The goal of any relation extraction system is to extract as many relations as possible with minimal false positives. If the recall score becomes very low, the coverage of the KB will be poor. From Figure 3, we observe that beyond a certain recall level, our model achieves higher precision than all the competing models on the NYT10 dataset. Similarly, on the NYT11 dataset (Figure 4), our model achieves higher precision than the competing models at higher recall levels. Achieving higher precision with a high recall score helps to build a cleaner KB with good coverage.

Figure 5: Performance comparison across different sentence lengths on the NYT10 dataset.
Figure 6: Performance comparison across different sentence lengths on the NYT11 dataset.

5 Analysis and Discussion

5.1 Varying the number of factors ($m$)

We investigate the effect of the multi-factor count $m$ in our final model on the test datasets in Table 3. We observe that for the NYT10 dataset, lower values of $m$ give good performance, with $m = 1$ achieving the highest F1 score. On the NYT11 dataset, $m = 4$ gives the best performance. These experiments show that the number of factors giving the best performance may vary depending on the underlying data distribution.

       NYT10                   NYT11
m      Prec.  Rec.   F1        Prec.  Rec.   F1
1      0.541  0.595  0.566     0.495  0.621  0.551
2      0.521  0.597  0.556     0.482  0.656  0.555
3      0.490  0.617  0.547     0.509  0.633  0.564
4      0.449  0.623  0.522     0.507  0.652  0.571
5      0.467  0.609  0.529     0.488  0.677  0.567
Table 3: Performance comparison of our model with different values of $m$ on the two datasets.
Figure 7: Performance comparison across different distances between entities on the NYT10 dataset.
Figure 8: Performance comparison across different distances between entities on the NYT11 dataset.

5.2 Effectiveness of Model Components

We include the ablation results on the NYT11 dataset in Table 4. When we add multi-factor attention to the baseline BiLSTM-CNN model without the dependency distance-based weight factor in the attention mechanism, we get a 0.8-point F1 score improvement (A2 − A1). Adding the dependency weight factor with window size $ws$ improves the F1 score by 3.2 points (A3 − A2). Increasing the window size reduces the F1 score marginally (A3 → A4). Replacing the attention normalizing function with the softmax operation also reduces the F1 score marginally (A3 → A5). In our model, we concatenate the features extracted by each attention layer. Rather than concatenating them, we can apply a max-pooling operation across the multiple attention scores to compute the final attention scores. These max-pooled attention scores are used to obtain the weighted average vector of the Bi-LSTM hidden vectors. This affects the model performance negatively, and the F1 score of the model decreases by 3.0 points (A3 → A6).

                             Prec.  Rec.   F1
(A1) BiLSTM-CNN              0.473  0.606  0.531
(A2) Standard attention      0.466  0.638  0.539
(A3) Window size $ws$        0.507  0.652  0.571
(A4) Larger window size      0.510  0.640  0.568
(A5) Softmax                 0.490  0.658  0.562
(A6) Max-pool                0.492  0.600  0.541
Table 4: Effectiveness of model components on the NYT11 dataset.

5.3 Performance with Varying Sentence Length and Varying Entity Pair Distance

We analyze the effects of our attention model with different sentence lengths in the two datasets in Figures 5 and 6. We also analyze the effects of our attention model with different distances between the two entities in the two datasets in Figures 7 and 8. We observe that with increasing sentence length and increasing distance between the two entities, the performance of all models drops. This shows that finding the relation between entities located far from each other is a more difficult task. Our multi-factor attention model with the dependency distance-based weight factor achieves a higher F1 score in all configurations when compared to previous state-of-the-art models on both datasets.

6 Related Work

Relation extraction from a distantly supervised dataset is an important task, and many researchers mintz2009distant; riedel2010modeling; hoffmann2011knowledge tried to solve it using feature-based classification models. Recently, zeng2014relation and zeng2015distant used CNN models for this task, which can extract features automatically. huang2016attention and jat2018attention used attention mechanisms in their models to improve performance. mimlre, lin2016neural, vashishth2018reside, wu2018improving, and ye2019intra used multiple sentences in a multi-instance relation extraction setting to capture the features located in multiple sentences for a pair of entities. In their evaluation setting, they evaluated model performance by considering multiple sentences having the same pair of entities as a single test instance. On the other hand, our model and the previous models that we compare to in this paper zeng2014relation; zeng2015distant; huang2016attention; jat2018attention work on each sentence independently and are evaluated at the sentence level. Since there may not be multiple sentences that contain a pair of entities, it is important to improve the task performance at the sentence level. Future work can explore the integration of our sentence-level attention model in a multi-instance relation extraction framework.

Previous research has exploited the dependency structure of a sentence in different ways for relation extraction. Xu2015ClassifyingRV and miwa2016end used an LSTM network and the shortest dependency path between two entities to find the relation between them. Huang2017ImprovingSF used the dependency structure of a sentence for the slot-filling task, which is close to the relation extraction task. liu-etal-2015-dependency exploited the shortest dependency path between two entities and the sub-trees attached to that path (the augmented dependency path) for relation extraction. zhang-etal-2018-graph and guo2019aggcn used graph convolutional networks with pruned dependency tree structures for this task. In this work, we have incorporated the dependency distance of the words in a sentence from the two entities in a multi-factor attention mechanism to improve sentence-level relation extraction.

Attention-based neural networks are quite successful for many other NLP tasks. Bahdanau2014NeuralMT and luong2015effective used attention models for neural machine translation, and seo2016bidirectional used an attention mechanism for answer span extraction. vaswani2017attention and kundu2018amanda used multi-head and multi-factor attention models for machine translation and answer span extraction respectively. effective2017ruidan used a dependency distance-focused word attention model for aspect-based sentiment analysis.

7 Conclusion

In this paper, we have proposed a multi-factor attention model utilizing syntactic structure for relation extraction. The syntactic structure component of our model helps to identify important words in a sentence and the multi-factor component helps to gather different pieces of evidence present in a sentence. Together, these two components improve the performance of our model on this task, and our model outperforms previous state-of-the-art models when evaluated on the New York Times (NYT) corpus, achieving significantly higher F1 scores.

Acknowledgments

We would like to thank the anonymous reviewers for their valuable and constructive comments on this work.
