An Effective System for Multi-format Information Extraction

08/16/2021 · by Yaduo Liu, et al. · Northeastern University

The multi-format information extraction task in the 2021 Language and Intelligence Challenge (LIC-2021) is designed to comprehensively evaluate information extraction from different dimensions. It consists of a multiple-slots relation extraction subtask and two event extraction subtasks that extract events at both the sentence level and the document level. Here we describe our system for this multi-format information extraction competition task. Specifically, for the relation extraction subtask, we convert it to a traditional triple extraction task and design a voting based method that makes full use of existing models. For the sentence-level event extraction subtask, we convert it to a NER task and use a pointer labeling based method for extraction. Furthermore, considering that the annotated trigger information may be helpful for event extraction, we design an auxiliary trigger recognition model and use the multi-task learning mechanism to integrate the trigger features into the event extraction model. For the document-level event extraction subtask, we design an Encoder-Decoder based method and propose a Transformer-like decoder. Finally, our system ranks No.4 on the test set leader-board of this multi-format information extraction task, and its F1 scores for the subtasks of relation extraction and event extraction at the sentence level and document level are 79.887%, 85.179%, and 70.828% respectively.




1 Introduction

Information extraction (IE) aims to extract structured knowledge from unstructured texts. Named entity recognition (NER), relation extraction (RE) and event extraction (EE) are fundamental information extraction tasks that focus on extracting entities, relations and events respectively. However, most research focuses only on extracting information in a single format, and a unified evaluation platform for IE in different formats has been lacking. Thus the 2021 Language and Intelligence Challenge (LIC-2021) sets up a multi-format IE competition task designed to comprehensively evaluate IE from different dimensions. The task consists of a relation extraction subtask and two event extraction subtasks that extract events at both the sentence level and the document level. The definitions of these subtasks are as follows.

Relation Extraction aims to extract all SPO triples from a given sentence according to a predefined schema set. The schema set defines the relation P and the types of its corresponding subject S and object O. According to the complexity of the object O, there are two types of schemas. The first is the single-O-value schema, in which the object O has only a single slot and value. The second is the multiple-O-values schema, in which the object O is a structure composed of multiple slots and their corresponding values.
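To make the two schema types concrete, here is a minimal sketch in Python; the dictionary layout is our illustration, not the official dataset format.

```python
# Illustrative encodings of the two schema types (hypothetical layout,
# not the official LIC-2021 data format).
single_o_schema = {
    "predicate": "capital_of",
    "subject_type": "city",
    "object_type": {"@value": "country"},  # object has a single slot
}
multi_o_schema = {
    "predicate": "play",
    "subject_type": "entertainer",
    "object_type": {"inWork": "film and television work", "@value": "role"},
}

def is_multi_o(schema):
    # A schema is multiple-O-values when the object has more than one slot.
    return len(schema["object_type"]) > 1
```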

Event Extraction consists of two subtasks. The first is sentence-level event extraction (SEE): given a sentence and predefined event types with corresponding argument roles, the aim is to identify all events of the target types mentioned in the sentence and to extract the corresponding event arguments playing the target roles. The predefined event types and argument roles restrict the scope of extraction. The second is the document-level event extraction (DEE) task, which shares almost the same definition, but takes document-level text fragments as input rather than sentences, and restricts the event types to be extracted to the financial field.

Table 1 shows some schema examples for these subtasks. For example, in the given multiple-O-values schema RE example, the object consists of two items, inWork and @value, but traditional RE would only extract the @value item in this example. Compared with traditional single-format IE tasks, the multi-format IE task designed in LIC-2021 presents the following challenges.

First, despite the success of existing joint entity and relation extraction methods, they cannot be directly applied to the multiple slots RE task in LIC-2021 because of the multiple-O-values schema relations.

Task: RE (multiple-O-values schema)
  object_type: {inWork: film and television work, @value: role}
  predicate: play
  subject_type: entertainer
Task: SEE
  event_type: finance/trading-limit down
  role_list: [role: time, role: stocks fell to limit]
Task: DEE
  event_type: be interviewed
  role_list: [role: the name of company, role: disclosure time, role: time of being interviewed, role: interviewing institutions]
Table 1: Schema examples of the three information extraction subtasks in LIC-2021.

Second, event extraction is more challenging due to the overlapping event role issue: an argument may play multiple roles. Besides, annotated triggers are provided in the given datasets, and how to effectively use this trigger information is still an open issue.

Third, it is difficult to extract different events that have the same event type in the DEE subtask. For example, for the “be interviewed” event type shown in Table 1, different people may be interviewed at different times. In this case, in the SEE subtask, the two different time arguments and the two interviewed people would be regarded as arguments of the same event. However, these arguments need to be classified into arguments of two different events in the DEE subtask.

In our system, we use several effective methods to overcome these challenges. Specifically, for the first one, we design a schema disintegration module to convert each multiple-O-values relation into several single-O-value relations, then use a voting based module to obtain the final relations. For the second one, we convert the SEE subtask into a NER task and use a pointer labeling based method to address the mentioned overlapping issue. To use the trigger information, we design an auxiliary trigger recognition module and use the multi-task learning mechanism to integrate the trigger information. For the third one, we design an Encoder-Decoder based method to generate different events.

2 Related Work

RE is a long-standing natural language processing task whose aim is to mine factual triple knowledge from free texts. These triples have the form (S, P, O), where both S and O are entities and P is a semantic relation that connects S and O. For example, the triple (Beijing, capital_of, China) expresses the knowledge that Beijing is the capital of China. Currently, methods that extract entities and relations jointly in an end-to-end way are dominant. According to the extraction route taken, these joint extraction methods can be roughly classified into the following three groups. (i) Table filling based methods ([3, 12]), which maintain a table for each relation; each item in such a table indicates whether the row word and the column word possess the corresponding relation. (ii) Tagging based methods ([4, 16]), which use sequence labeling to tag entities and relations. (iii) Seq2seq based methods ([19, 18]), which generate the items of each triple in a predefined order, for example, first S, then P, then O.

Event extraction (EE) can be categorized into sentence-level EE (SEE) and document-level EE (DEE) according to the input text. The majority of existing EE research (such as [1] and [13]) focuses on SEE. Usually, these methods use a pipeline based framework that predicts triggers first and then predicts arguments. Recently, DEE has been attracting more and more attention. There are two main challenges in DEE: (i) arguments of one event may scatter across long-distance sentences of a document or even different documents, and (ii) each document is likely to contain multiple events. A representative Chinese DEE work is [17], whose model contains two main components: a SEE model that extracts event arguments and event triggers from a sentence, and a DEE model that extracts event arguments from the whole document based on a key event detection model and an arguments-completion strategy. Another representative Chinese DEE work is [20], which uses an entity-based directed acyclic graph to fulfill the DEE task effectively. Moreover, they redefine the DEE task with a no-trigger-words design to ease document-level event labeling. Both of these DEE works are evaluated in the Chinese financial domain.

Figure 1: The architecture of our RE model.

3 Methodology

3.1 Relation Extraction

The architecture of our RE model is shown in Fig 1. We can see that it consists of two main modules: a schema disintegration module and a voting module.

Given a multiple-O-values triple (although a multiple-O-values relation contains multiple objects, for simplicity we still call it a triple) {s, p, [(k_1, o_1), …, (k_m, o_m)]}, where o_1 is the @value object, the schema disintegration module transforms it into 2m-1 single-O-value triples. The concrete transformation process is as follows. First, o_1 is taken as an object to form the triple {s, p, o_1}. Then, from (k_2, o_2) on, each (k_i, o_i) forms the following two triples: {s, p-k_i, o_i} and {o_1, k_i, o_i}. Here both s and o_1 are repeatedly taken as subjects in the formed triples, and p-k_i is a newly generated predicate. Accordingly, the given example generates the triples {s, p, o_1}, {s, p-k_2, o_2}, {o_1, k_2, o_2}, …, {s, p-k_m, o_m}, {o_1, k_m, o_m}. Conversely, a set of such single-O-value triples can be recombined into a multiple-O-values triple with the reverse process.
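The disintegration step above can be sketched as follows; the slot names k_i and the p-k predicate naming are our notation for illustration.

```python
def disintegrate(s, p, slots):
    """Split a multiple-O-values triple {s, p, [(k1, o1), ..., (km, om)]}
    into 2m-1 single-O-value triples. Assumes the first slot holds the
    @value object o1."""
    (_, o1), rest = slots[0], slots[1:]
    triples = [(s, p, o1)]                   # {s, p, o1}
    for k, o in rest:
        triples.append((s, f"{p}-{k}", o))   # {s, p-k, o}: new predicate p-k
        triples.append((o1, k, o))           # {o1, k, o}: o1 taken as subject
    return triples
```

For the play example of Table 1, a call with the @value slot and the inWork slot yields the 3 = 2·2−1 single-O-value triples described above; the reverse recombination walks the p-k predicates back into slots.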

In the voting module, three existing state-of-the-art triple extraction models are used to extract single-O-value triples separately. Their results are then voted on, and the triples that receive more votes are output. Next, these obtained triples are converted into multiple-O-values triples (note that single-O-value triples can also be viewed as a special kind of multiple-O-values triples) as the final results of the RE task. In our system, the following three existing state-of-the-art models are used for voting: TPLinker [14], SPN [10] and CasRel [16].
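A minimal sketch of the voting step, assuming each base extractor returns its set of single-O-value triples; the 2-of-3 threshold is our assumption of what "more votes" means.

```python
from collections import Counter

def vote_triples(model_outputs, min_votes=2):
    """Keep every single-O-value triple predicted by at least `min_votes`
    of the base extractors (TPLinker, SPN, CasRel in the paper)."""
    counts = Counter(t for triples in model_outputs for t in set(triples))
    return {t for t, c in counts.items() if c >= min_votes}
```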

3.2 Sentence-level Event Extraction

In our system, we treat each event argument as an entity, and concatenate the argument's corresponding event type and role as the entity type. The SEE task is thus converted into a NER task. However, there is a multiple-label issue in the SEE task: an argument may belong to multiple event types. To address this issue, we use a pointer labeling based NER method that tags the start token and end token of an entity span for each candidate entity type. Specifically, as shown in Eq. (1), for each entity type, the model computes two probabilities for each word, indicating the possibilities of this word being the start and end tokens of an entity possessing this entity type.


p_i^{start,t} = σ(W_start^t · h_i + b_start^t),   p_i^{end,t} = σ(W_end^t · h_i + b_end^t)   (1)

where W_start^t, b_start^t, W_end^t, and b_end^t are learnable parameters for the t-th entity type (t = 1, …, c, with c the number of entity types), h_i is the token representation of the i-th word obtained from a pretrained language model, and p_i^{start,t} and p_i^{end,t} are the probabilities of the i-th word being the start and end tokens of an entity that should be labeled with the t-th entity type.
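A small NumPy sketch of this pointer labeling head; the weight shapes and the 0.5 decoding threshold are our assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pointer_scores(H, W_start, b_start, W_end, b_end):
    """Per-type start/end probabilities for token representations H (n x d);
    columns index the c entity types."""
    return sigmoid(H @ W_start + b_start), sigmoid(H @ W_end + b_end)

def decode_spans(p_start, p_end, thr=0.5):
    """Pair each start with the nearest following end, per type. Because
    types are decoded independently, one token may belong to several types,
    which is how pointer labeling handles the multiple-label issue."""
    spans = []
    n, c = p_start.shape
    for t in range(c):
        for i in np.where(p_start[:, t] > thr)[0]:
            ends = np.where(p_end[i:, t] > thr)[0]
            if len(ends):
                spans.append((t, int(i), int(i + ends[0])))
    return spans
```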

Furthermore, to make full use of the features from the annotated triggers, we design an auxiliary trigger recognition module that recognizes triggers in the same NER manner as above. This auxiliary module is trained jointly with the argument recognition module in a multi-task learning manner.

During training, the trigger recognition module generates a representation for each trigger. These trigger representations are merged into the token representations used by the argument recognition module with an attention mechanism, for the argument recognition of the next iteration. During inference, our model first recognizes all triggers. These triggers' representations are then fused into a unified representation (denoted as h_trig) by a max-pooling operation. Next, an additive-attention based method is used to obtain a new token representation sequence (denoted as H̃), which is then used as input for the argument recognition module. Specifically, H̃ is computed with Eq. (2):

α_i = softmax_i(v^T tanh(W_1 h_i + W_2 h_trig)),   h̃_i = h_i + α_i h_trig   (2)

where h_i is the original token representation of the i-th word, W_1, W_2, and v are learnable parameters, and the superscript T denotes the matrix transpose operation.
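The inference-time fusion described above can be sketched as follows; the exact attention parameterization is our assumption, since the original form of Eq. (2) was lost in extraction.

```python
import numpy as np

def fuse_triggers(H, trigger_reps, W1, W2, v):
    """Sketch of the trigger fusion step: max-pool the recognized triggers'
    representations into one vector, score every token against it with
    additive attention, and add the weighted trigger vector back to each
    token representation. Parameter shapes are assumptions."""
    h_trig = trigger_reps.max(axis=0)            # unified trigger representation
    scores = np.tanh(H @ W1 + h_trig @ W2) @ v   # additive attention scores, (n,)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                  # softmax over tokens
    return H + np.outer(alpha, h_trig)           # new token sequence H~
```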

Figure 2: The architecture of our DEE model. N is the number of decoder layers.

3.3 Document-level Event Extraction

The architecture of our DEE model is shown in Fig. 2. We can see it contains three main modules, which are Encoder, Decoder, and Output.

BERT [2] and some of its variants are used as the Encoder to get a context-aware representation for each token in a document. Note that both BERT and its variants have a length restriction on the input text (usually 512 tokens), so we use a sliding window based method to split a long document into several segments. Each segment is then taken as an input to be processed by our DEE model, and the results of these segments are combined as the final results. Given a segment X = (x_1, …, x_n), the output of the Encoder is a token embedding sequence H ∈ R^{n×d}, where n is the number of tokens in the segment and d is the dimension of the token embeddings.
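The sliding-window splitting can be sketched as follows; the 50% overlap (stride) is our assumption, as the paper does not state the overlap.

```python
def sliding_segments(tokens, window=512, stride=256):
    """Split a long token sequence into overlapping segments so each one
    fits the encoder's length limit."""
    if len(tokens) <= window:
        return [list(tokens)]
    segments = []
    start = 0
    while start < len(tokens):
        segments.append(list(tokens[start:start + window]))
        if start + window >= len(tokens):
            break  # the last segment reaches the end of the document
        start += stride
    return segments
```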

In the Decoder module, some learnable query embeddings (denoted as Q in Fig. 2) along with the token embedding sequence output by the Encoder module are taken as input to predict events. Each query embedding corresponds to a specific event. Although different documents may have different numbers of events, we use a fixed number q of queries (set to 16 here), chosen according to the maximal number of events in a single document of the training dataset.

Our designed Decoder is similar to the one used in the Transformer [11], but we remove the masking and positional encoding components because there are no ordering or position relations between the event query embeddings in the DEE task. The Decoder module consists of N stacked layers, each consisting of a multi-head self-attention module, a multi-head inter-attention module, and a feed-forward network. It generates a refined embedding representation sequence (denoted as Q̃) for the input queries. Specifically, the multi-head self-attention module is first applied to highlight some queries. Then, the multi-head inter-attention operation is performed to capture the correlations between the input queries and the input text. Next, a feed-forward network generates a new embedding representation sequence for the input queries. These three steps are performed N times to obtain Q̃.
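A toy, weight-free sketch of one such decoder layer (single-head attention, no mask or positional encoding, and a stand-in ReLU feed-forward step); the real module uses multi-head attention with learned projections.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention without a mask, since event queries
    # are unordered.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def decoder_layer(queries, H):
    """One decoder layer: self-attention among event queries, then
    inter-attention over the encoder outputs H, then a feed-forward
    step, each with a residual connection."""
    q = queries + attention(queries, queries, queries)  # self-attention
    q = q + attention(q, H, H)                          # inter-attention
    return q + np.maximum(q, 0.0)                       # feed-forward stand-in
```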

The Output module is the same pointer labeling based extraction model used for SEE, together with the auxiliary trigger recognition module, to predict the event arguments. Specifically, it takes each refined query embedding in Q̃ as input and computes two kinds of probabilities, p^start and p^end, denoting the probabilities of each word being the start and end tokens of an entity that should be labeled with the corresponding event argument role.

3.4 Loss function

The cross entropy loss function is used for both the RE and SEE tasks, but we use a bipartite matching loss for DEE. The main difficulty in DEE is that the predicted arguments are not always in the same order as those in the ground truths. For example, an event may be predicted by the i-th query embedding but appear in the j-th position of the ground truths. Thus, we do not apply the cross-entropy loss function directly, because it is sensitive to the permutation of the predictions. Instead, we use the bipartite matching loss proposed in SPN to produce an optimal matching between predicted events and ground-truth events. Specifically, we concatenate the start and end probabilities generated for DEE (see above) into Ŷ, which represents the predicted events. The ground truths are denoted as Y. Computing the bipartite matching loss involves two steps: finding an optimal matching and computing the loss function. To find the optimal matching between Y and Ŷ, we search for the permutation π* of q elements with the lowest cost, as shown in Eq. (3).


π* = argmin_{π ∈ Π(q)} Σ_{i=1}^{q} C_match(Y_i, Ŷ_{π(i)})   (3)

where Π(q) is the space of all q-length permutations, Ŷ_{π(i)} and Y_i are the outputs of the π(i)-th predicted event and the i-th ground truth respectively, and C_match(Y_i, Ŷ_{π(i)}) is the pair-wise matching cost between the ground truth Y_i and the predicted event with index π(i); it is computed as follows:


C_match(Y_i, Ŷ_{π(i)}) = −Σ (Y_i ⊙ Ŷ_{π(i)})   (4)

where ⊙ is the Hadamard product. The optimal π* can be computed in polynomial time (O(q³)) via the Hungarian algorithm. Readers can find a more detailed introduction to this loss in SPN.

After π* is obtained, we reorder the events of the predicted Ŷ to be in line with Y, and the reordered result is denoted as Ŷ*. Then we use the cross entropy loss function to compute the loss between Y and Ŷ*.
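The matching step can be sketched with a brute-force permutation search; this is only a sketch for small toy sizes, whereas the Hungarian algorithm performs the same search in polynomial time.

```python
from itertools import permutations

def optimal_matching(cost):
    """Search for the permutation of predicted events that minimizes the
    total matching cost against the ground truths; cost[i][j] is the
    cost of matching ground truth i to prediction j."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost
```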

3.5 Model Enhancement Techniques

In our system, several model enhancement techniques are used to further improve the performance of each subtask.

Adversarial Training To train a robust model, we use the FGM adversarial training mechanism [9], which disturbs the input along the direction of the gradients so that the loss increases toward the maximum direction. The model is thus pushed to find more robust parameters during training, weakening the impact of such disturbances. Let X be the input text's embedding representation sequence and g the gradient of the loss with respect to X; the disturbed embedding X_adv is computed with Eq. (5):

X_adv = X + ε · g / ||g||_2   (5)

where ε is a hyperparameter that controls the disturbance degree.

During training, X is updated with the above equation after the loss of a subtask is computed; the disturbed input is then used in the training of the next iteration.
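A minimal sketch of the FGM perturbation in Eq. (5); the ε default is a placeholder.

```python
import numpy as np

def fgm_perturb(embeddings, grad, epsilon=1.0):
    """FGM-style perturbation: step the input embeddings in the gradient
    direction, scaled to L2 norm epsilon, so the loss moves toward its
    maximum."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return embeddings  # no gradient signal, nothing to perturb
    return embeddings + epsilon * grad / norm
```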

Data Augmentation We use the following two data augmentation strategies to enhance the performance of our models on different tasks. The first strategy is synonym replacement, which randomly selects some words from each input text and replaces them with synonyms drawn from a synonym dictionary. The second strategy is random deletion, under which every input word is deleted with a predefined probability; however, if a word to be deleted is an entity, it is retained.
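Both strategies can be sketched as follows; the synonym dictionary and the probability values are placeholders.

```python
import random

def synonym_replace(tokens, synonyms, k=1, rng=random):
    """Replace up to k randomly chosen tokens with a synonym from the
    dictionary `synonyms` (a hypothetical resource)."""
    tokens = list(tokens)
    candidates = [i for i, t in enumerate(tokens) if t in synonyms]
    for i in rng.sample(candidates, min(k, len(candidates))):
        tokens[i] = rng.choice(synonyms[tokens[i]])
    return tokens

def random_delete(tokens, entities, p=0.1, rng=random):
    """Delete each token with probability p, but always retain entities."""
    return [t for t in tokens if t in entities or rng.random() >= p]
```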

Model Ensemble We use the bagging strategy to ensemble multiple base models, which simply averages or votes over the weighted results of the base models. Concretely, we split all data in the training set into 10 folds, train on 9 folds, and validate on the remaining fold. In this way, we obtain ten different base models. Besides, we also replace the Encoder of different models with different pretrained language models, including BERT, RoBERTa-wwm-ext [7], and NEZHA [15]. Accordingly, other kinds of base models can be trained. Finally, all the base models of a task are ensembled into one model: we simply vote on the predicted results of all base models and select the outputs that receive more votes.
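The 10-fold splitting used to obtain the base models can be sketched as:

```python
def kfold_splits(data, k=10):
    """Yield k (train, valid) splits: each base model trains on k-1 folds
    and validates on the remaining one, giving k models for bagging."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        valid = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, valid
```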

Dataset Training Set Development Set Test Set
DuIE2.0 171,293 20,674 21,080
DuEE1.0 11,958 1,498 3,500
DuEE-fin 7,047 1,174 3,524
Table 2: Statistics of the datasets.

4 Experiments

4.1 Basic Settings

Datasets The LIC-2021 competition uses three large-scale Chinese information extraction datasets, including DuIE2.0 [5], DuEE1.0 [6] and DuEE-fin [6].

DuIE2.0 is the largest schema-based Chinese relation extraction dataset, which contains more than 430,000 triples over 210,000 real-world Chinese sentences, restricted by a pre-defined schema of 48 relation types. In this dataset, there are 43 single-O-value relations and 5 multiple-O-values relations. DuEE1.0 is a Chinese event extraction dataset, which consists of 17,000 sentences containing 20,000 events of 65 event types. DuEE-fin is a document-level event extraction dataset in the financial domain. The dataset consists of 11,700 documents across 13 event types, in which there are negative samples that do not contain any target events. All of these datasets are built by Baidu. Some basic statistics of these datasets are shown in Table 2.

Evaluation Metrics

According to the settings of LIC-2021, the scores of a participating system on DuIE2.0, DuEE1.0 and DuEE-fin are given respectively, and their macro average is used as the final score of the system. The F1 score is used as the evaluation metric for all three subtasks. For the RE subtask, a predicted relation with a multiple-O-values schema is regarded as correct only when all its slots exactly match a manually annotated golden relation. For the SEE subtask, a predicted event argument is evaluated with a token-level matching F1 score. If an event argument has multiple annotated mentions, the one with the highest matching F1 is used. For the DEE subtask, for each document, the evaluation first selects the most similar predicted event for every annotated event (event-level matching). The matched predicted event must have the same event type as the annotated event and predict the most correct arguments. Each predicted event is matched only once.

RE F1 score SEE F1 score DEE F1 score
CasRel 0.7568 Backbone 0.8254 Backbone 0.6630
TPLinker 0.7605 Backbone+TR 0.8402 Backbone+TR 0.6843
SPN 0.7324 — — Backbone+BML 0.6702
Ens 0.7894 Ens 0.8509 Ens 0.7089
Ens+AT 0.7905 Ens+AT 0.8535 Ens+AT 0.7134
Ens+AT+PLM 0.8051 Ens+AT+PLM 0.8594 Ens+AT+PLM 0.7235
Table 3: Model performance on relation extraction, sentence-level event extraction, and document-level event extraction.

Implementation Details AdamW [8] is used to train all the models. The learning rates for RE, SEE, and DEE are 3e-5, 5e-5, and 5e-5 respectively, and the numbers of training epochs for these three subtasks are 10, 40, and 30 respectively.

4.2 Results

Main Results The main experimental results are shown in Table 3, in which Backbone, TR, BML, Ens, AT, and PLM denote the backbone models, the trigger recognition module, the bipartite matching loss, the ensemble model, adversarial training, and the pretrained language models respectively. In all of our experiments, Backbone refers to the model with the transformer decoder and the bipartite matching loss removed.

From Table 3 we can see that our RE model achieves far better results than the compared state-of-the-art triple extraction models CasRel, TPLinker, and SPN. We do not compare our two EE models with other state-of-the-art models because most existing EE models cannot be directly applied here.

Ablation Results From the ablation results in Table 3 we can see that the pretrained language models are very helpful for all subtasks and consistently bring a significant performance gain. Besides, adversarial training is also helpful and consistently improves the performance of each subtask.

From the results of both SEE and DEE we can see that the designed trigger recognition module plays a very important role: it improves the F1 score by nearly 1.5 points for SEE and by more than 2.1 points for DEE. In fact, this module contributes even more than the pretrained language models. Based on these results we conclude that the triggers do contain important cues for both kinds of EE subtasks, and making full use of these cues is very helpful for improving performance.

From the results of DEE we can see that the bipartite matching loss also helps improve performance, and it brings a larger performance gain than adversarial training.

Besides, we also conduct experiments on the DEE subtask to evaluate: (i) the impact of the number of layers of the transformer-based decoder (the number N in Fig. 2), and (ii) the impact of different sizes of the sliding window. For the first one, we test layer numbers in the range of 0 to 5. Our experiments show that the model achieves the best results when the number of decoder layers is set to 3. We think this is mainly because a moderate layer number leads to a more complete integration of the input information into the event queries, while a larger or smaller number makes the model overfit or underfit. For the second one, we set the size of the sliding window to 128, 256, and 512. The corresponding F1 scores are 64.8%, 66.9%, and 69.2% respectively when the number of decoder layers is set to 3 and the bipartite matching loss is used. These results show that the performance usually increases as the size of the sliding window increases, because more context features can be taken into consideration with a larger sliding window.

5 Conclusions

This paper describes our system for the LIC-2021 multi-format IE task. We use different methods to overcome the different challenges in this competition task, including the schema disintegration method for the multiple-O-values schema issue in the RE subtask, the multi-task learning method for the SEE subtask, and the Transformer-like decoder for the DEE subtask. Experimental results show that our system is effective, and it ranks No.4 on the final test set leader-board of this competition. Its F1 scores are 79.887% on DuIE2.0, 85.179% on DuEE1.0, and 70.828% on DuEE-fin respectively.

However, there is still plenty of room for improvement, and much work remains to be explored. First, in the RE subtask, many triples are not annotated. These missing triples give the model wrong supervision, which is very harmful to performance. This missing-annotation issue is still open and should be further explored. Second, in the DEE subtask, how to process long texts is still a challenge worth further study. In addition, if two arguments of one event are far apart in the given text (either a sentence or a document), it is difficult to extract them correctly. This issue should also be studied in the future.


Acknowledgments: This work is supported by the National Key R&D Program of China (No.2018YFC0830701), the National Natural Science Foundation of China (No.61572120), the Fundamental Research Funds for the Central Universities (No.N181602013 and No.N171602003), Ten Thousand Talent Program (No.ZX20200035), and Liaoning Distinguished Professor (No.XLYC1902057).


  • [1] Y. Chen, L. Xu, K. Liu, D. Zeng, and J. Zhao (2015-07) Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China, pp. 167–176. Cited by: §2.
  • [2] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019-06) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 4171–4186. Cited by: §3.3.
  • [3] P. Gupta, H. Schütze, and B. Andrassy (2016-12) Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, pp. 2537–2547. Cited by: §2.
  • [4] T. Hang, J. Feng, Y. Wu, L. Yan, and Y. Wang (2021-03) Joint extraction of entities and overlapping relations using source–target entity labeling. Expert Systems with Applications 177, pp. 114853. Cited by: §2.
  • [5] S. Li, W. He, Y. Shi, W. Jiang, H. Liang, Y. Jiang, Y. Zhang, Y. Lyu, and Y. Zhu (2019) DuIE: a large-scale Chinese dataset for information extraction. In CCF International Conference on Natural Language Processing and Chinese Computing, pp. 791–800. Cited by: §4.1.
  • [6] X. Li, F. Li, L. Pan, Y. Chen, W. Peng, Q. Wang, Y. Lyu, and Y. Zhu (2020) DuEE: a large-scale dataset for Chinese event extraction in real-world scenarios. In CCF International Conference on Natural Language Processing and Chinese Computing, pp. 534–545. Cited by: §4.1.
  • [7] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019-07) RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints, pp. arXiv:1907.11692. External Links: 1907.11692 Cited by: §3.5.
  • [8] I. Loshchilov and F. Hutter (2017-11) Decoupled Weight Decay Regularization. arXiv e-prints, pp. arXiv:1711.05101. External Links: 1711.05101 Cited by: §4.1.
  • [9] T. Miyato, A. M. Dai, and I. J. Goodfellow (2017) Adversarial training methods for semi-supervised text classification. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §3.5.
  • [10] D. Sui, Y. Chen, K. Liu, J. Zhao, X. Zeng, and S. Liu (2020-11) Joint Entity and Relation Extraction with Set Prediction Networks. arXiv e-prints, pp. arXiv:2011.01675. External Links: 2011.01675 Cited by: §3.1.
  • [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30, pp. . Cited by: §3.3.
  • [12] J. Wang and W. Lu (2020-11) Two are better than one: joint entity and relation extraction with table-sequence encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 1706–1721. Cited by: §2.
  • [13] X. Wang, X. Han, Z. Liu, M. Sun, and P. Li (2019-06) Adversarial training for weakly supervised event detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, pp. 998–1008. Cited by: §2.
  • [14] Y. Wang, B. Yu, Y. Zhang, T. Liu, H. Zhu, and L. Sun (2020-12) TPLinker: single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), pp. 1572–1582. Cited by: §3.1.
  • [15] J. Wei, X. Ren, X. Li, W. Huang, Y. Liao, Y. Wang, J. Lin, X. Jiang, X. Chen, and Q. Liu (2019-08) NEZHA: Neural Contextualized Representation for Chinese Language Understanding. arXiv e-prints, pp. arXiv:1909.00204. External Links: 1909.00204 Cited by: §3.5.
  • [16] Z. Wei, J. Su, Y. Wang, Y. Tian, and Y. Chang (2020-07) A novel cascade binary tagging framework for relational triple extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, pp. 1476–1488. Cited by: §2, §3.1.
  • [17] H. Yang, Y. Chen, K. Liu, Y. Xiao, and J. Zhao (2018-01) DCFEE: a document-level Chinese financial event extraction system based on automatically labeled training data. pp. 50–55. Cited by: §2.
  • [18] D. Zeng, H. Zhang, and Q. Liu (2020) CopyMTL: copy mechanism for joint extraction of entities and relations with multi-task learning. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020. Cited by: §2.
  • [19] X. Zeng, D. Zeng, S. He, K. Liu, and J. Zhao (2018-07) Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 506–514. Cited by: §2.
  • [20] S. Zheng, W. Cao, W. Xu, and J. Bian (2019-11) Doc2EDAG: an end-to-end document-level framework for Chinese financial event extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 337–346. Cited by: §2.