Exploiting Contextual Information via Dynamic Memory Network for Event Detection

Shaobo Liu, et al. ∙ Institute of Computing Technology, Chinese Academy of Sciences

The task of event detection involves identifying and categorizing event triggers. Contextual information has been shown to be effective for this task. However, existing methods that utilize contextual information only process the context once. We argue that the context can be better exploited by processing it multiple times, allowing the model to perform complex reasoning and to generate a better context representation, thus improving overall performance. Meanwhile, the dynamic memory network (DMN) has demonstrated promising capability in capturing contextual information and has been applied successfully to various tasks. In light of the multi-hop mechanism of the DMN for modeling context, we propose the trigger detection dynamic memory network (TD-DMN) to tackle the event detection problem. We performed five-fold cross-validation on the ACE-2005 dataset, and experimental results show that the multi-hop mechanism does improve the performance and that the proposed model achieves the best F_1 score compared to the state-of-the-art methods.


1 Introduction

According to the ACE (Automatic Content Extraction) event extraction program, an event is identified by a word or a phrase called the event trigger, which most clearly represents that event. For example, in the sentence “No major explosion we are aware of”, an event trigger detection model should identify the word “explosion” as the event trigger and further categorize it as an Attack event. The ACE-2005 dataset also includes annotations for event arguments, which are a set of words or phrases that describe the event. However, in this work we do not tackle event argument classification and focus on event trigger detection.

The difficulty of the event trigger detection task lies in the complicated interaction between the event trigger candidate and its context. For instance, given a sentence at the end of a passage:

they are going over there to do a mission they believe in and as we said, 250 left yesterday.

It is hard to directly classify the trigger word “left” as an “End-Position” event or a “Transport” event, because we are not certain about what the number “250” and the pronoun “they” are referring to. But if we see the sentence:

we are seeing these soldiers head out.

which is several sentences away from the former one, we now know that “250” and “they” refer to “the soldiers”, and from the clue “these soldiers head out” we are more confident in classifying the trigger word “left” as a “Transport” event.

From the above, we can see that the event trigger detection task involves complex reasoning across the given context. Existing methods Liu et al. (2017); Chen et al. (2015); Li et al. (2013); Nguyen et al. (2016); Venugopal et al. (2014) mainly exploited sentence-level features, while Liao and Grishman (2010); Zhao et al. (2018) proposed document-level models to utilize the context.

The methods mentioned above either do not directly utilize the context or process it only once while classifying an event trigger candidate. We argue that processing the context multiple times, with later steps re-evaluating the context using information acquired in previous steps, improves model performance. Such a mechanism allows the model to perform complicated reasoning across the context. As in the example above, we are more confident in classifying “left” as a “Transport” event once we know from previous steps that “250” and “they” refer to “soldiers”.

We utilize the dynamic memory network (DMN) Xiong et al. (2016); Kumar et al. (2016) to capture the contextual information of the given trigger word. It contains four modules: the input module, which encodes the reference text where the answer clues reside; the memory module, which stores knowledge acquired in previous steps; the question module, which encodes the questions; and the answer module, which generates answers given the outputs of the memory and question modules.


Figure 1: Overview of the TD-DMN architecture

The DMN was proposed for the question answering task; the event trigger detection problem, however, does not have an explicit question. The original DMN handles this case by initializing the question vector produced by the question module with a zero or a bias vector, whereas we argue that each sentence in the document can be deemed a question. We propose the trigger detection dynamic memory network (TD-DMN) to incorporate this intuition: the question module of TD-DMN treats each sentence in the document as implicitly asking the question “What are the event types of the words in this sentence, given the document context?” The high-level architecture of the TD-DMN model is illustrated in Figure 1.

We compared our results with two models, DMCNN Chen et al. (2015) and DEEB-RNN Zhao et al. (2018), through 5-fold cross-validation on the ACE-2005 dataset. Our model achieves the best F_1 score, and experimental results further show that processing the context multiple times and adding implicit questions do improve model performance. The code of our model is available at https://github.com/AveryLiu/TD-DMN.

2 The Proposed Approach

We model the event trigger detection task as a multi-class classification problem following existing work. In the rest of this section, we describe four different modules of the TD-DMN separately along with how data is propagated through these modules. For simplicity, we discuss a single document case. The detailed architecture of our model is illustrated in Figure 2.


Figure 2: The detailed architecture of the TD-DMN model. The figure depicts a simplified case where a single document $d$ with $N$ sentences is the input to the input module and a sentence $s$ of $d$ with $M$ words is the input to the question module. The input module encodes document $d$ into a fact matrix $F$. The question module encodes sentence $s$ into the question vector $q$. The memory module initializes the memory $m^0$ with $q$ and iterates $T$ times; at each hop $t$ it produces a memory vector $m^t$ using the fact matrix $F$, the question vector $q$, and the previous memory state $m^{t-1}$. The answer module outputs the predicted trigger type for each word in $s$ using the concatenation of the hidden states of the question module and the last memory state $m^T$.
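To make the data flow in Figure 2 concrete, below is a minimal sketch of how the four modules could be wired together for one document and one of its sentences. The module interfaces, tensor shapes, and hop count are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Illustrative composition of the four TD-DMN modules; all interfaces are assumed.
HOPS = 3  # T, the number of memory hops (hypothetical default)

def td_dmn_forward(input_module, question_module, memory_module, answer_module, doc, sent):
    """Return per-word event-type logits for the words of `sent` given document `doc`.

    Assumed interfaces:
      input_module(doc)       -> fact matrix F of shape (N, H)
      question_module(sent)   -> (question vector q of shape (H,), word states Q of shape (M, H))
      memory_module(F, q, m)  -> next memory state of shape (H,)
      answer_module(Q, m)     -> logits of shape (M, num_event_labels)
    """
    F = input_module(doc)            # one fact vector per sentence of the document
    q, Q = question_module(sent)     # the sentence acts as an implicit question
    m = q                            # memory initialized with the question vector
    for _ in range(HOPS):            # each hop re-reads the facts with updated knowledge
        m = memory_module(F, q, m)
    return answer_module(Q, m)       # event-type scores for every word of `sent`
```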

2.1 Input Module

The input module further contains two layers: the sentence encoder layer and the fusion layer. The sentence encoder layer encodes each sentence into a vector independently, while the fusion layer gives these encoded vectors a chance to exchange information between sentences.

Sentence encoder layer  Given a document $d$ with $N$ sentences $\{s_1, \dots, s_N\}$, let $s_i$ denote the $i$-th sentence in $d$ with $M$ words $\{w_{i1}, \dots, w_{iM}\}$. For the $j$-th word $w_{ij}$ in $s_i$, we concatenate its word embedding with its entity type embedding (the ACE-2005 dataset includes entity type annotations for each word, with type “NA” for non-entities; the entity type embedding is a vector associated with each entity type) to form the vector $x_{ij}$, which is the input to the sentence encoder Bi-GRU Cho et al. (2014) of hidden size $H_s$. We obtain the hidden state $h_{ij}$ by merging the forward and backward hidden states of the Bi-GRU:

$h_{ij} = \overrightarrow{h_{ij}} \oplus \overleftarrow{h_{ij}}$    (1)

where $\oplus$ denotes element-wise addition.

We feed $h_{ij}$ into a two-layer perceptron to generate the unnormalized attention scalar $u_{ij}$:

$u_{ij} = W_2 \tanh\left(W_1 h_{ij}\right)$    (2)

where $W_1$ and $W_2$ are weight parameters of the perceptron and we omit bias terms. $u_{ij}$ is then normalized to obtain the scalar attention weight $a_{ij}$:

$a_{ij} = \frac{\exp\left(u_{ij}\right)}{\sum_{k=1}^{M} \exp\left(u_{ik}\right)}$    (3)

The sentence representation $e_i$ is obtained by:

$e_i = \sum_{j=1}^{M} a_{ij} h_{ij}$    (4)
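The sentence encoder layer can be summarized in a short sketch. The PyTorch code below is a minimal rendering of Equations 1 to 4 under assumed sizes (300-dimensional word embeddings, 50-dimensional entity embeddings); the class and variable names are ours, and the tanh non-linearity inside the attention perceptron is an assumption.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bi-GRU over words followed by attention pooling (Eqs. 1-4); sizes are illustrative."""
    def __init__(self, word_dim=300, ent_dim=50, hidden=300):
        super().__init__()
        self.bigru = nn.GRU(word_dim + ent_dim, hidden, bidirectional=True, batch_first=True)
        self.w1 = nn.Linear(hidden, hidden, bias=False)   # first layer of the attention perceptron
        self.w2 = nn.Linear(hidden, 1, bias=False)        # second layer -> unnormalized scalar u_ij

    def forward(self, word_emb, ent_emb):
        # word_emb: (N, M, word_dim), ent_emb: (N, M, ent_dim) for N sentences of M words each
        x = torch.cat([word_emb, ent_emb], dim=-1)        # x_ij = word embedding ++ entity embedding
        states, _ = self.bigru(x)                         # (N, M, 2 * hidden)
        fwd, bwd = states.chunk(2, dim=-1)
        h = fwd + bwd                                     # Eq. 1: element-wise sum of directions
        u = self.w2(torch.tanh(self.w1(h))).squeeze(-1)   # Eq. 2: two-layer perceptron
        a = torch.softmax(u, dim=-1)                      # Eq. 3: normalized attention weights
        e = torch.bmm(a.unsqueeze(1), h).squeeze(1)       # Eq. 4: weighted sum -> (N, hidden)
        return e
```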

Fusion layer  The fusion layer processes the encoded sentences and outputs fact vectors that contain exchanged information among sentences. Let $e_i$ denote the $i$-th sentence representation obtained from the sentence encoder layer. We generate the fact vector $f_i$ by merging the forward and backward states of the fusion Bi-GRU:

$f_i = \overrightarrow{f_i} \oplus \overleftarrow{f_i}$    (5)

Let $H_f$ denote the hidden size of the fusion GRU. We concatenate the fact vectors $f_1$ to $f_N$ to obtain the fact matrix $F$ of size $N$ by $H_f$, where the $i$-th row of $F$ stores the $i$-th fact vector $f_i$.
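A corresponding sketch of the fusion layer, again with assumed sizes; merging the two directions by element-wise addition mirrors Equation 1 and is our reading of "merging" here.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Bi-GRU over sentence representations; yields one fact vector per sentence (Eq. 5)."""
    def __init__(self, sent_dim=300, hidden=300):
        super().__init__()
        self.bigru = nn.GRU(sent_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, sent_reprs):
        # sent_reprs: (1, N, sent_dim) -- the N encoded sentences of one document, in order
        states, _ = self.bigru(sent_reprs)   # (1, N, 2 * hidden)
        fwd, bwd = states.chunk(2, dim=-1)
        return fwd + bwd                     # fact matrix F, shape (1, N, hidden)

# Example: 8 sentences encoded into 300-d vectors become an 8 x 300 fact matrix.
F = FusionLayer()(torch.randn(1, 8, 300))
```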

2.2 Question Module

The question module treats each sentence in $d$ as implicitly asking a question: What are the event types for each word in the sentence, given the document as context? For simplicity, we only discuss the single-sentence case for a sentence $s$ with $M$ words; iteratively processing $s_1$ to $s_N$ gives all encoded questions in document $d$.

Let $x_j$ be the vector representation of the $j$-th word in $s$; the question GRU generates the hidden state $q_j$ by:

$q_j = \mathrm{GRU}\left(x_j, q_{j-1}\right)$    (6)

The question vector $q$ is obtained by averaging all hidden states of the question GRU:

$q = \frac{1}{M} \sum_{j=1}^{M} q_j$    (7)

Let $H_q$ denote the hidden size of the question GRU; $q$ is then a vector of size $H_q$. The intuition here is to obtain a single vector that represents the question sentence. $q$ is passed on to the memory module.
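A sketch of the question module under the same illustrative sizes; the unidirectional question GRU is an assumption, since the paper does not state the directionality explicitly.

```python
import torch
import torch.nn as nn

class QuestionModule(nn.Module):
    """GRU over the words of one sentence; the question vector is the mean of its hidden states (Eqs. 6-7)."""
    def __init__(self, word_dim=300, hidden=300):
        super().__init__()
        self.gru = nn.GRU(word_dim, hidden, batch_first=True)

    def forward(self, word_vecs):
        # word_vecs: (1, M, word_dim) -- the M words of the sentence acting as the implicit question
        hidden_states, _ = self.gru(word_vecs)   # q_j for every word, shape (1, M, hidden)
        q = hidden_states.mean(dim=1)            # Eq. 7: average -> question vector, (1, hidden)
        return q, hidden_states                  # the hidden states are reused by the answer module
```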

Methods        |    Fold 1      |    Fold 2      |    Fold 3      |    Fold 4      |    Fold 5      | Avg
               | P    R    F_1  | P    R    F_1  | P    R    F_1  | P    R    F_1  | P    R    F_1  | F_1
DMCNN          | 67.6 60.5 63.9 | 62.6 63.1 62.9 | 68.9 62.1 65.3 | 68.9 65.0 66.9 | 66.0 65.5 65.8 | 64.9
DEEB-RNN       | 64.9 64.1 64.5 | 63.4 64.7 64.0 | 66.1 64.3 65.2 | 66.0 67.3 66.6 | 65.5 67.2 66.3 | 65.3
TD-DMN 1-hop   | 67.3 62.1 64.6 | 65.4 61.7 63.5 | 72.0 60.0 65.4 | 66.6 68.0 67.3 | 68.3 65.0 66.6 | 65.5
TD-DMN 2-hop   | 69.2 61.0 64.8 | 64.6 63.4 64.0 | 64.3 66.4 65.3 | 68.7 65.9 67.3 | 68.5 65.7 67.1 | 65.7
TD-DMN 3-hop   | 66.3 63.7 64.9 | 66.9 60.6 63.6 | 68.3 64.0 66.1 | 67.9 66.3 67.1 | 70.2 64.3 67.1 | 65.8
TD-DMN 4-hop   | 66.7 63.4 65.0 | 61.4 65.5 63.4 | 66.4 66.0 66.2 | 64.7 69.1 66.8 | 70.0 63.4 66.5 | 65.6
Table 1: 5-fold cross-validation results on the ACE-2005 dataset. Each fold reports precision (P), recall (R), and F_1, rounded to one decimal place. The F_1 in the last column is the average of the F_1 scores over all folds.

2.3 Memory Module

The memory module has three components: the attention gate, the attentional GRU Xiong et al. (2016), and the memory update gate. The attention gate determines how much the memory module should attend to each fact, given the facts $F$, the question $q$, and the acquired knowledge stored in the memory vector $m^{t-1}$ from the previous step.

The three inputs are transformed by:

$z^t = \left[\, F \circ q \,;\, \left| F - q \right| \,;\, F \circ m^{t-1} \,;\, \left| F - m^{t-1} \right| \,\right]$    (8)

where $[\,;\,]$ is concatenation, and $\circ$, $-$, and $|\cdot|$ are element-wise product, subtraction, and absolute value respectively. $F$ is a matrix of size $N$ by $H_f$, while $q$ and $m^{t-1}$ are vectors of size $H_q$ and $H_m$, where $H_m$ is the output size of the memory update gate. To allow the element-wise operations, $H_q$ and $H_m$ are set to the same value as $H_f$; meanwhile, $q$ and $m^{t-1}$ are broadcast to the size of $F$. In Equation 8, the first two terms measure the similarity and difference between the facts and the question, and the last two terms do the same for the facts and the last memory state.

Let $g^t$ of size $N$ denote the generated attention vector. The $i$-th element of $g^t$ is the attention weight for fact $f_i$. $g^t$ is obtained by transforming $z^t$ with a two-layer perceptron:

$g^t = \mathrm{softmax}\left( W_4 \tanh\left( W_3 z^t \right) \right)$    (9)

where $W_3$ and $W_4$ are parameters of the perceptron and we omit bias terms.
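A sketch of the attention gate of Equations 8 and 9, with broadcasting handled by PyTorch; the tanh inside the two-layer perceptron and the softmax normalization over facts are assumptions consistent with Xiong et al. (2016).

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Scores each fact against the question and the previous memory (Eqs. 8-9); sizes are illustrative."""
    def __init__(self, hidden=300):
        super().__init__()
        self.w3 = nn.Linear(4 * hidden, hidden, bias=False)
        self.w4 = nn.Linear(hidden, 1, bias=False)

    def forward(self, facts, q, m_prev):
        # facts: (N, hidden); q and m_prev: (hidden,), broadcast against every fact
        z = torch.cat([facts * q, (facts - q).abs(),
                       facts * m_prev, (facts - m_prev).abs()], dim=-1)  # Eq. 8: (N, 4 * hidden)
        scores = self.w4(torch.tanh(self.w3(z))).squeeze(-1)             # two-layer perceptron
        return torch.softmax(scores, dim=0)                              # Eq. 9: g^t, one weight per fact
```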

The attentional GRU takes the facts $F$ and the fact attention $g^t$ as input and produces the context vector $c^t$ of size $H_m$. At each time step $i$, the attentional GRU takes fact $f_i$ as input and uses $g^t_i$ as its update gate weight. For space limitations, we refer the reader to Xiong et al. (2016) for the detailed computation.
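For reference, a sketch of the attentional GRU cell as described in Xiong et al. (2016), where the externally supplied attention weight replaces the learned update gate; the names and sizes are ours.

```python
import torch
import torch.nn as nn

class AttnGRUCell(nn.Module):
    """GRU cell whose update gate is the supplied attention weight g_i (after Xiong et al., 2016)."""
    def __init__(self, input_size=300, hidden=300):
        super().__init__()
        self.wr = nn.Linear(input_size, hidden)   # reset gate, input part
        self.ur = nn.Linear(hidden, hidden)       # reset gate, recurrent part
        self.w = nn.Linear(input_size, hidden)    # candidate state, input part
        self.u = nn.Linear(hidden, hidden)        # candidate state, recurrent part

    def forward(self, f_i, c_prev, g_i):
        # f_i: current fact (hidden,); c_prev: running context (hidden,); g_i: scalar attention weight
        r = torch.sigmoid(self.wr(f_i) + self.ur(c_prev))        # reset gate
        c_tilde = torch.tanh(self.w(f_i) + self.u(r * c_prev))   # candidate context
        return g_i * c_tilde + (1.0 - g_i) * c_prev              # attention weight acts as the update gate
```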

The memory update gate outputs the updated memory $m^t$ using the question $q$, the previous memory state $m^{t-1}$, and the context $c^t$:

$m^t = \mathrm{ReLU}\left( W_m \left[\, m^{t-1} \,;\, c^t \,;\, q \,\right] \right)$    (10)

where $W_m$ is the parameter of the linear layer.

The memory module can be iterated several times, with a new $m^t$ generated at each iteration. This allows the model to attend to different parts of the facts in different iterations, which enables complicated reasoning across sentences. The memory module outputs $m^T$, the memory state of the last iteration, where $T$ is the number of hops.
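Putting the three components together, a sketch of one memory hop and the multi-hop loop; the ReLU in the update is an assumption borrowed from Xiong et al. (2016), and initializing the memory with the question vector follows the description in Figure 2.

```python
import torch
import torch.nn as nn

class MemoryUpdate(nn.Module):
    """Memory update gate (Eq. 10); the ReLU non-linearity is an assumption from Xiong et al. (2016)."""
    def __init__(self, hidden=300):
        super().__init__()
        self.linear = nn.Linear(3 * hidden, hidden)

    def forward(self, m_prev, context, q):
        return torch.relu(self.linear(torch.cat([m_prev, context, q], dim=-1)))

def run_memory_module(attention_gate, attn_gru_cell, memory_update, facts, q, hops=3):
    """Multi-hop loop: each hop re-attends over the facts with the knowledge gathered so far."""
    m = q                                        # memory initialized with the question vector
    for _ in range(hops):
        g = attention_gate(facts, q, m)          # attention weights: where to look this hop
        c = torch.zeros_like(q)                  # running context vector
        for f_i, g_i in zip(facts, g):
            c = attn_gru_cell(f_i, c, g_i)       # attentional GRU pass over the facts
        m = memory_update(m, c, q)               # Eq. 10 -> m^t
    return m                                     # m^T, handed to the answer module
```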

2.4 Answer Module

The answer module predicts the event type for each word in a sentence. For each question GRU hidden state $q_j$, the answer module concatenates it with the memory vector $m^T$ as the input to the answer Bi-GRU with hidden size $H_a$. The answer GRU outputs $o_j$ by merging its forward and backward hidden states. A fully connected dense layer then transforms $o_j$ to the size of the number of event labels $K$, and the softmax layer is applied to output the probability vector $p_j$. The $k$-th element of $p_j$ is the probability of the word being of the $k$-th event type. Let $y_{ij}$ be the true event type label for word $w_{ij}$. Assuming all sentences are padded to the same length $M$, the cross-entropy loss for the single document is:

$J = - \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{k=1}^{K} \mathbb{1}\left\{ y_{ij} = k \right\} \log p_{ijk}$    (11)

where $\mathbb{1}$ is the indicator function and $p_{ijk}$ is the predicted probability of the $j$-th word of sentence $s_i$ being of the $k$-th event type.
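A sketch of the answer module and the per-word cross-entropy of Equation 11; the number of event labels (33 ACE subtypes plus a "None" class) and the use of nn.CrossEntropyLoss are our assumptions.

```python
import torch
import torch.nn as nn

class AnswerModule(nn.Module):
    """Bi-GRU over [question hidden state; final memory], then a linear layer per word."""
    def __init__(self, hidden=300, num_labels=34):   # 33 ACE subtypes + "None" is an assumption
        super().__init__()
        self.bigru = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, question_states, memory):
        # question_states: (1, M, hidden); memory: (1, hidden), repeated for every word position
        mem = memory.unsqueeze(1).expand(-1, question_states.size(1), -1)
        states, _ = self.bigru(torch.cat([question_states, mem], dim=-1))
        fwd, bwd = states.chunk(2, dim=-1)
        return self.out(fwd + bwd)                   # logits of shape (1, M, num_labels)

# Training criterion: per-word cross-entropy over event types, as in Eq. 11.
criterion = nn.CrossEntropyLoss()
# loss = criterion(logits.view(-1, num_labels), gold_labels.view(-1))
```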

3 Experiments

3.1 Dataset and Experimental Setup

Dataset

Different from prior work, we performed 5-fold cross-validation on the ACE-2005 dataset. We partitioned the 599 files into 5 parts; the file names of each fold can be found at https://github.com/AveryLiu/TD-DMN/data/splits. Each time, we chose a different fold as the testing set and used the remaining four folds as the training set.
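For illustration, one way to produce such a random 5-fold split over document files; scikit-learn is our choice here, and the file names below are hypothetical, since the released split files define the actual partition.

```python
from sklearn.model_selection import KFold

# Hypothetical list of the 599 ACE-2005 document file names.
files = [f"doc_{i:03d}.sgm" for i in range(599)]

# Five folds; in each round one fold is the test set and the other four form the training set.
for fold_id, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(files)):
    train_files = [files[i] for i in train_idx]
    test_files = [files[i] for i in test_idx]
    print(f"fold {fold_id}: {len(train_files)} train files, {len(test_files)} test files")
```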

Baselines

We compared our model with two other models: DMCNN Chen et al. (2015) and DEEB-RNN Zhao et al. (2018). DMCNN is a sentence-level event detection model that enhances traditional convolutional networks with a dynamic multi-pooling mechanism customized for the task. DEEB-RNN is a state-of-the-art document-level event detection model that first generates a document embedding and then uses it to aid event detection.

Evaluation

We report the precision, recall, and F_1 score of each fold, along with the F_1 score averaged over all folds. We evaluated all candidate trigger words in each testing set. A candidate trigger word is correctly classified if its event subtype and offsets match its human-annotated label.

Implementation Details

To avoid overfitting, we fixed the word embeddings and added a 1-by-1 convolution after the embedding layer to serve as a form of fine-tuning with a much smaller number of parameters. We removed punctuation, stop words, and sentences with length less than or equal to 2. We used the Stanford CoreNLP toolkit Manning et al. (2014) to split sentences. We down-sampled negative samples to ease the class imbalance problem.
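A sketch of the frozen-embedding plus 1-by-1 convolution trick; since the kernel size is 1, the convolution is just a learned per-position linear mixing of the embedding dimensions, which adds far fewer parameters than fine-tuning the whole embedding table. The class name and sizes are ours.

```python
import torch
import torch.nn as nn

class FrozenEmbeddingWith1x1Conv(nn.Module):
    """Frozen pre-trained embeddings followed by a 1x1 convolution as a light-weight fine-tuning stand-in."""
    def __init__(self, pretrained):                       # pretrained: (vocab_size, emb_dim) tensor
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
        emb_dim = pretrained.size(1)
        # A kernel-size-1 convolution mixes embedding channels per position: emb_dim * emb_dim weights,
        # far fewer than the vocab_size * emb_dim weights of the full embedding table.
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=1)

    def forward(self, token_ids):                         # token_ids: (batch, seq_len)
        x = self.emb(token_ids)                           # (batch, seq_len, emb_dim)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)  # Conv1d expects (batch, channels, seq_len)
```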

The hyperparameter settings are the same for the different numbers of hops of the TD-DMN model. We set the hidden sizes $H_s$, $H_f$, $H_q$, $H_m$, and $H_a$ to 300, the entity type embedding size to 50, $W_1$ to 300 by 600, $W_2$ to 600 by 1, $W_3$ to 1200 by 600, $W_4$ to 600 by 1, $W_m$ to 900 by 300, and the batch size to 10 documents. We used the Adam optimizer Kingma and Ba (2014) with weight decay, and applied dropout Srivastava et al. (2014) before the answer GRU as well as elsewhere in the model. We used the pre-trained word embeddings from Le and Mikolov (2014).

3.2 Results on the ACE 2005 Corpus

The performance of each model is listed in Table 1. The first observation is that the models using document context drastically outperform the model that only uses sentence-level features, which indicates that document context is helpful for the event detection task. The second observation is that increasing the number of hops improves model performance, which further implies that processing the context multiple times better exploits it. The performance drop at the fourth hop suggests that the model may have exhausted the context and started to overfit.

The performance of the reference models is much lower than that reported in their original papers. Possible reasons are that we partitioned the dataset randomly and performed five-fold cross-validation, whereas the testing set of the original partition mainly contains similar types of documents.

3.3 The Impact of the Question Module

To reveal the effect of the question module, we ran the model in two settings. In the first setting, we initialized both the memory vector and the question vector with a zero vector; in the second setting, we ran the model untouched. The results are listed in Table 2. The two models perform comparably under the 1-hop setting, which implies that in a single hop the model cannot distinguish well between the different initialization values of the question vector. For higher numbers of hops, the untouched model outperforms the modified one. This indicates that with more memory iterations, the question vector helps the model better exploit the contextual information. We still observe the increase-then-drop pattern of the F_1 score for the untouched model; however, such a pattern is not obvious with empty questions. This implies that we are unable to obtain a steady gain without the question module on this task.

Methods        | Avg F_1 | Avg F_1 (empty questions)
TD-DMN 1-hop   |  65.48  |  65.52
TD-DMN 2-hop   |  65.69  |  65.46
TD-DMN 3-hop   |  65.78  |  65.51
TD-DMN 4-hop   |  65.57  |  65.40
Table 2: The impact of the question module. The second column reports the averaged F_1 of the untouched model; the last column reports results with empty questions (question and memory vectors initialized with zeros).

4 Future Work

In this work, we explored the TD-DMN architecture to exploit document context. Extending the model to include wider contexts across several similar documents may also be of interest. The detected event trigger information could be incorporated into the question module when extending the TD-DMN to the argument classification problem. Other tasks that have document context but no explicit questions may also benefit from this work.

5 Conclusion

In this paper, we proposed the TD-DMN model, which utilizes the multi-hop mechanism of the dynamic memory network to better capture contextual information for the event trigger detection task. We cast event trigger detection as a question answering problem. We carried out five-fold cross-validation experiments on the ACE-2005 dataset, and the results show that the multi-hop mechanism does improve model performance and that our model achieves the best F_1 score compared to the state-of-the-art models.

Acknowledgments

We thank Prof. Huawei Shen for providing mentorship in the rebuttal phase. We thank Jinhua Gao for discussions on the paper presentation. We thank Yixing Fan and Xiaomin Zhuang for advice on hyper-parameter tuning. We thank Yue Zhao for the initial discussion on event extraction. We thank Yaojie Lu for providing preprocessing scripts and the results of DMCNN. We thank the anonymous reviewers for their advice. The first author personally thanks Wei Qi for being supportive when he was about to give up.

References

  • Chen et al. (2015) Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 167–176.
  • Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Kumar et al. (2016) Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In International Conference on Machine Learning, pages 1378–1387.
  • Le and Mikolov (2014) Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International Conference on Machine Learning, pages 1188–1196.
  • Li et al. (2013) Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 73–82.
  • Liao and Grishman (2010) Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797. Association for Computational Linguistics.
  • Liu et al. (2017) Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1789–1798.
  • Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60.
  • Nguyen et al. (2016) Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958.
  • Venugopal et al. (2014) Deepak Venugopal, Chen Chen, Vibhav Gogate, and Vincent Ng. 2014. Relieving the computational bottleneck: Joint inference for event extraction with high-dimensional features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 831–843.
  • Xiong et al. (2016) Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In International Conference on Machine Learning, pages 2397–2406.
  • Zhao et al. (2018) Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Document embedding enhanced event detection with hierarchical and supervised attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 414–419.