Open Information Extraction (OpenIE) aims to extract structured facts in the form of triples from sentences Etzioni et al. (2008). For example, given a sentence "Tokyo, the capital of Japan, is also the most populous city in Japan.", an OpenIE system is expected to extract the following triples: (Tokyo; capital of; Japan) and (Tokyo; most populous city in; Japan). The different parts of the triple are also referred to as subject, relation and object, respectively. OpenIE extractions are human understandable intermediate representations of facts in source texts Mausam (2016), which are useful in a variety of information extraction end tasks such as summarization Christensen et al. (2013), question answering Khot et al. (2017) and automated schema extraction Nimishakavi et al. (2016).
OpenIE systems largely come in two flavors: (1) unsupervised systems that use fine-grained rules based on dependency parse trees Del Corro and Gemulla (2013); Gashteovski et al. (2017); Lauscher et al. (2019), and (2) supervised neural systems trained end-to-end with large training datasets Stanovsky et al. (2018); Ro et al. (2020); Kolluru et al. (2020a). The majority of studies indicate that neural OpenIE systems outperform rule-based systems on English text Bhardwaj et al. (2019); Ro et al. (2020). However, a recent study, Gashteovski et al. (2021), challenges these results and demonstrates that traditional dependency-parse-based methods perform better. This motivates our work of unifying the two paradigms. Additionally, supervised neural systems require large amounts of training data, which is not available for specialized domains such as law, finance and biomedicine. Such domains also demand high-precision extractions, which are typically produced by rule-based systems precisely because the lack of training data makes training a supervised system infeasible. The problem is even more acute for languages other than English, due to the scarcity of resources for training neural models and the need for expert linguists to write sophisticated triple-extraction rules. These are the problems we address with a novel iterative multilingual open information extraction system that combines the best of both worlds, the rule-based and the neural paradigm. We term this system MiLIE.
We propose MiLIE, MultiLingual (Iterative) Information Extraction, a system that seamlessly integrates supervised neural end-to-end prediction with linguistic rules for extracting OpenIE triples in multiple languages.
Although MiLIE can be used as a purely neural OpenIE system, it can also be supplemented with linguistic rules. These rule-based systems need be neither exhaustive nor complete, i.e., they need not predict the entire triple but only part of it, such as the predicate, subject or object. MiLIE can take such incomplete extractions and predict the remaining parts of the triple. MiLIE can therefore be a boon to applications wishing to transition from a rule-based to a neural information extraction system, because it allows them to do so without throwing away their rule-based components.
We demonstrate that our iterative extraction system performs considerably better than current systems in the multilingual setting, which includes languages as diverse as Chinese and Galician. Additionally, we show that MiLIE meaningfully combines diverse extraction pathways to obtain a richer set of triples. Further analysis uncovers useful insights into how different languages make it easier or harder for OpenIE systems to extract the individual elements of a triple. We evaluate MiLIE on both the triple extraction and the n-ary extraction task Ro et al. (2020), which involves extracting additional arguments connected with the object.
Our contributions are summarized as follows:
We propose MiLIE, a multilingual iterative OpenIE system that can seamlessly integrate knowledge from any other neural or rule based information extraction systems for improving performance.
We perform an analysis based on ablation studies and uncover interesting insights about the nature of information extraction in different languages.
Extensive experiments on a variety of languages including English demonstrate that MiLIE outperforms recent SOTA systems by a wide margin, especially on languages other than English.
2 Related Work
Since the introduction of the first OpenIE system, TextRunner Etzioni et al. (2008), the area has seen a flurry of work, from feature-based pattern learners Fader et al. (2011); Etzioni et al. (2011); Christensen et al. (2011); Mausam et al. (2012); Saha et al. (2017), to unsupervised rule-based systems using dependency trees Del Corro and Gemulla (2013); Angeli et al. (2015); Gashteovski et al. (2017); Saha and Mausam (2018), to the recent supervised neural Stanovsky et al. (2018); Roy et al. (2019); Zhan and Zhao (2020) and transformer-based systems Ro et al. (2020); Kolluru et al. (2020b, a).
The recent trend of end-to-end supervised neural OpenIE systems is possible thanks to the availability of large-scale English training data, obtained either by filtering the outputs of unsupervised OpenIE systems Roy et al. (2019) or by automatically transforming the outputs of QA-SRL systems into OpenIE triples Stanovsky et al. (2018); Zhan and Zhao (2020); Ro et al. (2020). Stanovsky et al. (2018) train a Recurrent Neural Network (RNN) based architecture using the QA-SRL data. Roy et al. (2019) use an ensemble model to obtain training data from several unsupervised rule-based systems and train an RNN model on it. Such models cast OpenIE as a sequence tagging task where each token is tagged as subject, predicate or object using a BIO-like tagging scheme. In contrast, SpanOIE Zhan and Zhao (2020) does not tag tokens: it first extracts the predicate by predicting its token span and then extracts the arguments, again as spans, for each extracted predicate. Another approach is to generate the triples. Sun et al. (2018); Kolluru et al. (2020b) use a sequence-to-sequence model with a copy mechanism for generating triples. Kolluru et al. (2020b) (IMOJIE) use a BERT Devlin et al. (2019) encoder with an LSTM decoder that iteratively extracts triples: each complete extracted triple is appended to the sentence using [SEP] before extracting the next one, which prevents redundant extractions. This decoding differs from MiLIE, which iteratively extracts the individual elements of a triple and does not append entire triples to the sentence to avoid redundancy. Additionally, unlike IMOJIE, MiLIE does not generate extractions but uses a tagging scheme, which eliminates the need for a learnable copy mechanism.
Recently, Kolluru et al. (2020a) proposed OpenIE6, a supervised OpenIE model with a novel iterative grid labelling scheme using a BERT model trained with linguistic constraints. Similar to IMOJIE, OpenIE6 extracts a complete triple and then iterates via a self-attention mechanism, adding the extracted label embeddings to the sentence embeddings to obtain additional triples. It also uses linguistic constraints along with CALM Saha and Mausam (2018), a coordination analyzer, to improve performance. Such linguistic constraints cannot be readily ported to other languages; consequently OpenIE6 is evaluated only on English. To address multilinguality, Ro et al. (2020) propose Multi2OIE, a neural model that leverages a pretrained multilingual BERT model for transfer learning. Multi2OIE uses BIO tagging to first extract predicates and then, for each predicate, extracts the connected arguments using the predicate embedding and a self-attention mechanism. Like MiLIE, Multi2OIE only uses a pretrained BERT model for transfer learning; however, unlike MiLIE, it can neither integrate simple rule-based output for further improving multilingual extraction nor extract the triple via different decoding pathways.
The iterative nature of our OpenIE system was motivated by a simple question: is it easier to extract some elements of a triple, say predicates, than others, like subjects, for certain types of sentences? And does this vary across languages? For example, consider the sentence "Barack Obama became the US President in the year 2008", which contains two triples: (Barack Obama; became; US President) and (Barack Obama; became US President in; 2008). Extracting the predicate "became US President in" for the second triple is tricky, because the object of the first triple (US President) overlaps with the predicate of the second. But if the extraction system were provided with the object, (2008), and asked to extract a triple conditioned on this object, the predicate extraction would be easier. Likewise, there are other sentence constructions where first extracting objects or subjects might be beneficial. We speculate that iteratively extracting a triple by conditioning on different elements could lead to richer extractions, and indeed this is our hypothesis: combining triples extracted via different extraction patterns from a single sentence improves recall, especially for zero-shot multilingual tasks. Motivated by this, we propose MiLIE.
We formally describe the task in the following section and the training procedure in Section 3.2.
3.1 Iterative Prediction
|Likelihood function||Input Sentence||Target|
|log p(pred | X)||The Taj Mahal was built by Shah Jahan in 1643||built by|
|log p(subj | pred, X)||The Taj Mahal was <P>built by<P> Shah Jahan in 1643||Taj Mahal|
|log p(obj | subj, pred, X)||The <S>Taj Mahal<S> was <P>built by<P> Shah Jahan in 1643||Shah Jahan|
|log p(arg | subj, pred, obj, X)||The <S>Taj Mahal<S> was <P>built by<P> <O>Shah Jahan<O> in 1643||in 1643|
|log p(pred | subj, obj, X)||The <S>Taj Mahal<S> was built by <O>Shah Jahan<O> in 1643.||built by|
|log p(subj | obj, X)||The Taj Mahal was built by <O>Shah Jahan<O> in 1643.||Taj Mahal|
|log p(obj | X)||The Taj Mahal was built by Shah Jahan in 1643.||Shah Jahan|
The transformer model expects a sentence as a sequence of word tokens along with either their dependency tags or part-of-speech tags. Let w_1, ..., w_N be the tokens and t_1, ..., t_N the corresponding tags provided as input to the transformer model, where N is the maximum number of tokens. We use a language-specific dependency tagger to obtain the tags. Although we target low-resource languages, such languages are low-resource only for the OpenIE task; they can be high-resource for tasks such as PoS tagging or dependency parsing, especially thanks to the introduction of Universal Dependencies Nivre et al. (2016). The task is to extract the subsets of tokens that belong to a triple. To do so, we use the BIO tagging scheme. MiLIE consists of four output heads, each in charge of predicting the subject, predicate, object or n-ary argument, respectively. Each output head h ∈ {subj, pred, obj, arg} outputs a label l_i ∈ {B, I, O} for each token w_i. Fig. 1 illustrates the multi-output-head architecture. The output heads use the final transformer hidden states to predict the label sequences.
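To make the tagging scheme concrete, the following minimal Python sketch (the function name and list-based representation are ours, not MiLIE's actual implementation) decodes one head's BIO labels back into text spans:

```python
def bio_to_spans(tokens, labels):
    """Collect token spans from BIO labels: a span starts at a 'B' tag
    and extends over the immediately following 'I' tags."""
    spans, current = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "B":
            if current:                      # close any open span
                spans.append(" ".join(current))
            current = [tok]
        elif lab == "I" and current:
            current.append(tok)
        else:                                # 'O' (or stray 'I') ends the span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans
```

For the running example, the predicate head's labels O O O O B I O O O O over "The Taj Mahal was built by Shah Jahan in 1643" decode to the single span "built by".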
The order in which the different triple parts are extracted can be varied. This allows us to investigate how extracting triple elements in a specific order affects different languages. Additionally, different pathways aid different kinds of extractions, and combining them results in a richer set of extractions. Choosing a particular order defines a decoding pathway as a sequence of output heads (h_1, h_2, h_3), where h_i ∈ {subj, pred, obj}. For example, the decoding pathway (pred, subj, obj) denotes the sequence of output heads that first predicts the predicate, then the subject conditioned on the predicate, and finally the object conditioned on both. Crucially, each output head is conditioned on the input and on the output labels extracted by the previous heads. This feature allows MiLIE to seamlessly integrate rule-based systems with neural systems, since the conditioning can also be done on extractions obtained from rule-based systems. The output labels of the previous head are added to the input tokens as shown in Fig. 2.
During training we minimize different log-likelihood functions that ensure the model learns to predict conditioned on previous extractions. The log-likelihood function minimized depends on the training instance, so each training instance may optimize a different log-likelihood function. We list the log-likelihood functions along with a few examples of training instances in Table 1. During training we use random sampling, described in Section 3.2, to ensure that each likelihood function is minimized with enough training instances.
During prediction, along with the input sentence, the model also expects the extractions predicted in previous iterations. To provide this information we add special symbols to the sentence that explicitly mark the previous extractions: we surround the predicate with the symbols <P>, the subject with <S> and the object with <O>. For example, to predict the object given the predicate extracted in the previous iteration, the extracted predicate is marked in the sentence using the <P> symbol and the sentence is then passed through the transformer to predict the object using the object head.
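The marking step can be sketched as a simple token-level operation (illustrative only; the real model operates on subword tokens, and the function name is ours, but the <P>/<S>/<O> markers follow the paper's convention):

```python
def mark_span(tokens, start, end, symbol):
    """Surround tokens[start:end] with a marker symbol (e.g. '<P>') so that
    the next decoding step is conditioned on this previous extraction."""
    return (tokens[:start] + [symbol] + tokens[start:end]
            + [symbol] + tokens[end:])
```

For instance, marking the predicate span of the running example yields "The Taj Mahal was <P> built by <P> Shah Jahan in 1643", which is then fed back into the transformer for the next head's prediction.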
We always extract the n-ary arguments in the last iteration and therefore do not mark them in the sentence. There are two reasons for this: first, the added computational cost of considering all possible permutations of the argument order; second, our preliminary experiments suggested that arguments are best predicted after the rest of the triple has been predicted.
For effectively extracting different elements of the triple conditioned on other elements, the model needs to see such combinations during training. However, enumerating all possible combinations exhaustively is prohibitively expensive. We propose a sampling technique that ensures the model sees all possible combinations of a target and prior extractions. This is done by creating a training set that simulates a prior extraction and forces the model to predict the next extraction. To ensure that the number of training data points does not explode, we randomly sample one order for each training instance. Some elements of the triple are marked in the sentence while others are used as target labels. Note that we allow for multiple instances of the target labels, but only one instance of the marked element; for example, given one subject, the target could be multiple predicates. This procedure trains the model to predict an appropriate label conditioned on a variety of previous predictions.
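The per-instance sampling can be sketched as follows (a hypothetical helper, not the paper's code: we sample one random ordering of the triple elements and a random prefix of it to act as the simulated prior extractions, with the next element as the prediction target):

```python
import random

def sample_conditioning(rng=random.Random(0)):
    """Sample one conditioning setup for a training instance: a random
    subset of triple elements is treated as already extracted (to be
    marked in the sentence) and the next element becomes the target."""
    order = rng.sample(["subject", "predicate", "object"], k=3)
    k = rng.randrange(3)            # number of simulated prior extractions
    return order[:k], order[k]      # (prior elements, prediction target)
```

Sampling only one such order per instance keeps the training set linear in the number of gold triples while still covering all conditioning combinations in expectation.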
We train and evaluate MiLIE
on an NVIDIA Titan RTX with 24 GB RAM. The training is done for a maximum of two epochs and each epoch takes about 9 hours to complete. The maximum sentence length using the English train and validation dataset is found to be about 100. Due to the addition of extracted triple element markers we allow a slack of 20 tokens, thus fixing the maximum sentence length to 120. We use a maximum possible batch size that fits inside the GPU, which results in batch size of 192. We use an ADAM SGD optimizer with linear warmup and tune the learning rate and warmup percentage.
|English||Ro et al. (2020) Translation||Error Explanation|
|The stock pot should be chilled and the solid lump of dripping which settles when chilled should be scraped clean and re-chilled for future use.||La olla de caldo debe ser enfriado y la masa sólida de goteo que se asienta cuando [se] enfriada se debe raspar limpio y re-enfriada para uso futuro.||<ser enfriado>: gender agreement error ("olla" is feminine, requiring "enfriada"); <[se] enfriada> is ungrammatical.|
|However, StatesWest isn’t abandoning its pursuit of the much-larger Mesa.||Sin embargo, StatesWest no abandona su búsqueda de la tan - Mesa grande.||<tan - Mesa grande>: syntactically and semantically incorrect.|
|The rest of the group reach a small shop, where Brady attempts to phone the Sheriff, but the crocodile breaks through a wall and devours Annabelle.||El resto del grupo llega a una pequeña tienda, donde Brady intentos de teléfono del Sheriff, pero los saltos de cocodrilo a través de una pared, y devora a Annabelle.||<Brady intentos de teléfono>: the verb phrase "attempts to phone" is translated as a noun phrase; <los saltos de cocodrilo>: "the crocodile breaks through" is mistranslated as a noun phrase ("the crocodile's jumps").|
3.3 Negative Sampling
Iterative prediction is prone to error amplification: if an error is made in the first iteration, it propagates and affects subsequent extractions. Anticipating this, we train MiLIE to recognize extraction errors made in the previous iteration. To this end we augment the training data with corrupted data points containing incorrectly marked extractions. For each incorrect extraction, the model is trained to predict a blank extraction, i.e., the outside label for all tokens. We use a sampling procedure similar to the one described in the previous section. For a fixed number of training data points, we create two negative samples per point: one simulating conditioning on one incorrect extraction, and the other on two incorrectly extracted elements of the triple. For each negative sample we corrupt the extraction using three techniques: (1) corrupting the predicate by replacing it with randomly chosen tokens from the sentence, (2) corrupting the subject and object by exchanging them, and (3) mismatching the subject-object pairs of different triples.
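The three corruption techniques can be sketched as small helpers (names and signatures are ours; a minimal illustration, not the training code):

```python
import random

def corrupt_predicate(sentence_tokens, triple, rng):
    """(1) Replace the predicate with randomly chosen sentence tokens."""
    subj, _, obj = triple
    fake_pred = " ".join(rng.sample(sentence_tokens, 2))
    return subj, fake_pred, obj

def swap_subject_object(triple):
    """(2) Exchange the subject and the object."""
    subj, pred, obj = triple
    return obj, pred, subj

def mismatch_pairs(triple_a, triple_b):
    """(3) Mismatch the subject/object pairs of two different triples."""
    (sa, pa, oa), (sb, pb, ob) = triple_a, triple_b
    return (sa, pa, ob), (sb, pb, oa)
```

Each corrupted triple is then marked in the sentence as if it were a prior extraction, and the model is trained to emit all-'O' labels for it.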
3.4 Aggregating Decoding Pathways
Fixing the n-ary argument extraction as the final iteration, we obtain six decoding pathways: the six permutations of (subj, pred, obj), each followed by the argument head. For a given decoding pathway, say (pred, subj, obj), all the predicates are extracted first; then for each predicate, subjects are extracted; then for each predicate-subject pair, objects are extracted; and finally, for every extracted (predicate, subject, object) tuple, all the n-ary arguments are extracted. This extraction procedure preserves the relationships between the extracted elements, resulting in correct extraction of multiple triples. Fig. 2 illustrates this procedure.
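The six decoding pathways can be enumerated mechanically (a sketch; the variable name is ours):

```python
from itertools import permutations

# Every ordering of (subject, predicate, object), with n-ary argument
# extraction always fixed as the final step of the pathway.
PATHWAYS = [perm + ("argument",)
            for perm in permutations(("subject", "predicate", "object"))]
```

Each pathway is then run as a separate sequence of conditioned head predictions over the same sentence.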
We hypothesize that some triples are easier to predict if the predicate is extracted first, while others may be more easily obtained with subject-first extraction, and that this varies across languages. It also means that some decoding pathways are more error-prone than others. Therefore, aggregating triples from different decoding pathways may improve recall.
We propose a simple algorithm, which we term Water Filling (WF), for aggregating the extractions. It is inspired by the power allocation problem in the communication engineering literature Kumar et al. (2008). Imagine a thirsty person with access to different pots of water of varying purity, with the caveat that the amount of water in a pot is inversely proportional to its purity. The natural solution is to drink the high-purity water first and move through the pots in decreasing order of purity until the thirst is quenched. We use the same idea. Treating each decoding pathway as an expert, we assume that triples extracted by all 6 pathways are more accurate than those extracted by only 5 pathways, 4 pathways, and so on. This can be thought of as triples receiving votes from experts. Starting with an empty set, for each sentence we add triples to the set in order of decreasing number of received votes. The normalized vote count of a triple is used as its confidence value.
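Under these assumptions, the aggregation reduces to vote counting and sorting. The following minimal sketch (function name ours) takes one set of extracted triples per pathway and returns the triples ranked by votes, with normalized votes as confidence:

```python
from collections import Counter

def water_filling(pathway_extractions):
    """Aggregate triples from several decoding pathways: a triple
    extracted by more pathways receives more 'votes' and is ranked
    higher; its normalized vote count serves as its confidence.
    `pathway_extractions` is a list of triple collections, one per pathway."""
    votes = Counter(t for triples in pathway_extractions for t in set(triples))
    n = len(pathway_extractions)
    ranked = sorted(votes.items(), key=lambda kv: -kv[1])
    return [(triple, count / n) for triple, count in ranked]
```

A triple extracted by all pathways thus receives confidence 1.0, one extracted by a single pathway 1/n.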
Although the procedure is described sequentially, it can be parallelized by running all 6 decoding pathways in parallel with replicated model weights. After obtaining the triples from each decoding pathway in parallel, they are combined sequentially.
3.5 Binarizing n-ary Extractions
We train MiLIE using RE-OIE2016, which is an n-ary OpenIE dataset. In addition, we would also like to evaluate MiLIE on binary triple extraction, i.e., with only (subject; predicate; object) extractions. One simple way to convert n-ary extractions to binary ones is to ignore the n-ary arguments. However, this decreases recall, because the n-ary arguments may not appear in any other extracted triple. Another method is to treat each extracted n-ary argument as an additional object of the same subject-predicate pair. This ensures that the extracted arguments are not dropped, but may reduce precision, since an n-ary argument may not attach to the same predicate. For example, consider the extraction (Barack Obama; became; US President; in the year 2008). Treating n-ary arguments as objects results in (Barack Obama; became; US President) and (Barack Obama; became; in the year 2008).
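The two naive conversion strategies discussed above can be sketched in a few lines (function name ours; illustrative only):

```python
def binarize_naive(extraction, keep_args_as_objects=True):
    """Naive n-ary-to-binary conversion: either drop the n-ary arguments
    (hurting recall) or reuse the same subject/predicate for every
    argument (hurting precision, as in the Obama example)."""
    subj, pred, obj, *args = extraction
    triples = [(subj, pred, obj)]
    if keep_args_as_objects:
        triples += [(subj, pred, arg) for arg in args]
    return triples
```

MiLIE's iterative re-extraction, described next, avoids both failure modes by predicting a fresh predicate for each argument.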
The iterative nature of MiLIE allows us to elegantly address the problem of converting n-ary extractions into binary format. We treat each extracted n-ary argument as a hypothesized object and provide the extracted subject together with this hypothesized object to the model, which then extracts a new predicate conditioned on both, i.e., p(pred | subj, obj, X). This creates the possibility of extracting the correct predicate, something that is not possible with existing n-ary OpenIE systems.
3.6 Integrating Linguistic Rule based systems
The iterative nature of prediction allows us to predict parts of a triple conditioned on any other part of the triple. For example, if a linguistic rule based system works well for extracting objects, MiLIE can complete the missing parts of the triple conditioned on the objects. We simulate this by using ClausIE to extract objects and then use MiLIE for extracting the corresponding subject and predicate conditioned on the ClausIE objects.
We treat the output of the rule based system as potential objects paired with subjects and extract the predicate connecting them. If the rule based extraction is incorrect, then MiLIE can detect the error and extract nothing. This results in more accurate extractions compared to simply post-processing the extracted tokens using linguistic rules.
|System||F1||Prec.||Rec.||F1||Prec.||Rec.|
|MiLIE - DEP||19.25||19.82||18.71||8.42||11.28||6.72|
|MiLIE - NS||17.32||19.64||15.49||10.28||14.33||8.01|
|MiLIE - Bin||20.04||22.0||18.41||8.98||13.54||6.72|
4.1 Datasets and Evaluation
We use the RE-OIE2016 training dataset used in Ro et al. (2020) and introduced by Zhan and Zhao (2020). This training dataset contains n-ary extractions allowing MiLIE to be evaluated on both n-ary as well as binary extraction benchmarks. We use the CaRB benchmark introduced in Bhardwaj et al. (2019) for evaluating English OpenIE n-ary extraction. However, the CaRB benchmark also suffers from serious shortcomings.
Gashteovski et al. (2021) discovered that a simple OpenIE system that breaks the sentence into a triple at verb boundaries achieves surprisingly high recall and precision. This is a problem because it indicates that simply adding extraneous words to an extraction improves recall. For this reason we also evaluate on BenchIE, an exhaustive fact-based multilingual OpenIE benchmark proposed by Gashteovski et al. (2021). The BenchIE benchmark evaluates explicit binary extractions in English, Chinese and German. The authors also provide an annotation tool, AnnIE Friedrich et al. (2021), for extending the benchmark to additional languages. We used AnnIE to create BenchIE-style benchmarks for Japanese and Galician with the help of native Japanese and Galician speakers.
Additionally, we evaluate MiLIE on the Spanish and Portuguese multilingual datasets introduced in Ro et al. (2020), using the lexical match evaluation strategy. This strategy was introduced in Stanovsky et al. (2018), and its numerous shortcomings are discussed in Bhardwaj et al. (2019). Although problematic, we still use the lexical match procedure for fair comparison with the Multi2OIE baseline. Ro et al. (2020) translated sentences and n-ary extractions from the CaRB test set to Spanish and Portuguese using the Google Translate API. We had 100 random samples from these test sets evaluated by native Spanish and Portuguese speakers. To our surprise, we discovered that around 70 percent of the sentence or extraction translations were inaccurate. Table 2 shows a few examples of the incorrect translations.
For an accurate and clean comparison with Multi2OIE, we also cleaned up part of the Spanish test set by re-translating about 100 sentences and their extractions into Spanish. These translations were done by native Spanish speakers. In summary, we evaluate MiLIE on (1) the CaRB and BenchIE benchmarks for English, (2) the noisy Spanish and Portuguese datasets from Ro et al. (2020) along with the Spanish dataset cleaned by us, and (3) the BenchIE datasets in German and Chinese introduced by Gashteovski et al. (2021), in addition to the BenchIE-style datasets in Japanese and Galician introduced in this paper. In total we evaluate MiLIE on English and six additional languages.
For hyperparameter tuning we use the CaRB English validation set and compare models with different hyperparameters using the F1 scores obtained with the CaRB evaluation procedure.
We compare MiLIE with both unsupervised and supervised baselines. Specifically, we compare MiLIE with ClausIE, MinIE, Stanford-OIE, RNN-OIE, OpenIE6 and Multi2OIE on English. Of these systems, only Multi2OIE is capable of extracting triples in multiple languages; we did not find any other neural multilingual OpenIE system in the literature capable of extracting triples in the languages studied in this paper. Therefore we compare only with Multi2OIE on languages other than English. Evaluation on languages other than English is always zero-shot, i.e., the model is trained using only the English RE-OIE2016 dataset and validated on the English CaRB dataset.
The MiLIE model is trained with negative sampling and includes the dependency tag information. For BenchIE, MiLIE uses the binarization procedure described in Section 3.5; for CaRB and lexical match it does not, because those benchmarks evaluate n-ary extraction. On the CaRB English benchmark we use the baseline results reported in Ro et al. (2020) and Kolluru et al. (2020a). For BenchIE, we run all the baselines on the BenchIE English evaluation benchmark ourselves. For multilingual BenchIE, we train Multi2OIE using the code and hyperparameters supplied in the paper.
In Table 3, we compare MiLIE with Multi2OIE (M2OIE) on the multilingual BenchIE evaluation benchmark. MiLIE performs significantly better than Multi2OIE on all languages in the zero-shot setting. For German, both Multi2OIE and MiLIE perform much worse than on other languages, even though German is closer to English than Chinese or Japanese. The reason is the presence of separable prefixes in German verbs, which cannot be captured by BIO tags: the BIO tagging scheme assumes phrase continuity, which most German verbs appearing in predicates lack, resulting in extremely low recall. Ablation results also indicate the usefulness of adding dependency tags and of negative sampling.
Table 4 shows that MiLIE performs better than Multi2OIE on the noisy Spanish and Portuguese test sets as well as on the cleaned Spanish test set. Note that the numbers are higher due to the more lenient lexical match evaluation procedure.
In Table 5, we compare MiLIE with several unsupervised and supervised baselines on English using the CaRB and BenchIE datasets and evaluation algorithms. MiLIE outperforms the other neural baselines by a larger margin on BenchIE than on CaRB. The reason is that CaRB punishes compact extractions while rewarding overly long ones Gashteovski et al. (2021).
4.2.3 Hybrid OpenIE
MiLIE can easily integrate any rule-based system that extracts even part of a triple. To evaluate this, we first simulate a system that only extracts the object and use MiLIE to extract the other parts of the triple. We do this in two ways: (1) we run ClausIE on the BenchIE English data and keep only the object, discarding the rest of each triple, and (2) we take gold objects from the BenchIE English test set and use MiLIE to complete the rest of the triple. The first setting evaluates how well MiLIE completes a triple when combined with a good but error-prone object extraction system; the second evaluates how well it completes a triple given an accurate extraction.
We use Multi2OIE as a baseline: we extract triples with Multi2OIE and keep only those whose objects match either the ClausIE-extracted objects or the gold objects from BenchIE, respectively. The reason for seeding with object extractions from ClausIE or BenchIE gold is that neural systems are not good at extracting objects Kolluru et al. (2020a); this is also seen in the additional experiments detailed in Section 5. By combining objects extracted by a rule-based system with the other elements extracted by a neural system, we investigate whether it is possible to obtain the best of both worlds.
Table 6 indeed confirms that combining rule-based object extraction with neural MiLIE improves performance. Note that we used ClausIE merely as a method to extract objects for studying hybrid systems; this is not an attempt to fuse ClausIE with MiLIE.
|MiLIE + Object||0.2971||0.3235||0.2748|
We hypothesized that it is MiLIE's ability to extract triples using different extraction patterns that improves performance on multilingual data. We test this hypothesis with a simple experiment: we compare MiLIE with water-filling aggregation against MiLIE using individual decoding pathways. Additionally, we compare with a dynamic decoding scheme in which MiLIE chooses a decoding pathway based on the sentence. To do this, we split off part of the English training set and, for each sentence in the split, record the extraction pathway that yields the best F1 score as per the CaRB evaluation. We then use this as training data for another mBERT model that classifies each sentence into one of six classes, each representing an extraction pathway. While creating the training split we found that for most sentences a predicate-first extraction pathway yields the best F1 score, which unsurprisingly also holds at test time.
Table 7 details the performance of the different decoding schemes. All schemes except WF use only one decoding pathway, while WF combines multiple pathways. As can be seen, WF performs much better on all languages, even better than dynamic decoding. This shows that combining triples extracted via multiple pathways is better than dynamically choosing a single pathway, and confirms our hypothesis that one can extract a richer set of triples by extracting repeatedly from the same sentence using multiple extraction pathways.
Table 7 also provides an interesting insight: predicate-first extraction seems to be best, followed by subject-first and then object-first extraction. This probably happens because predicates are easier to extract, so fewer errors propagate along the chain. In general, we can conclude that predicates are easier to extract than subjects, and subjects are easier to extract than objects. We suspect that this could be due to differences in linguistic variability among predicates, subjects and objects. To test this hypothesis, we measured the entropy of the distribution of dependency and part-of-speech tags in the predicate, subject and object elements of the BenchIE English and multilingual test sets. Although entropy is not a measure of linguistic complexity, the results in Table 8 do suggest that the linguistic complexity of objects is higher than that of predicates and subjects. However, additional analyses are required to understand why objects are more difficult to extract than predicates and subjects.
In this paper, we introduced MiLIE, a multilingual OpenIE system that extracts triples by extracting their parts iteratively. MiLIE allowed us to investigate how different sentence constructions and languages affect the difficulty of extracting the different elements of a triple. We found that certain triples are indeed easier to extract if the predicate is extracted first, while others are more easily extracted subject-first. We exploited such variations by aggregating extractions from multiple extraction strategies, resulting in improved performance. We also demonstrated how MiLIE can be seamlessly combined with rule-based systems for improving performance, especially in the multilingual setting. Although our experiments focused on the OpenIE task, we believe that the insights gained translate to other information extraction tasks where extractions are coupled to each other. We plan to explore such connections in future work.
- Angeli et al. (2015) Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging Linguistic Structure For Open Domain Information Extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 344–354.
- Bhardwaj et al. (2019) Sangnie Bhardwaj, Samarth Aggarwal, and Mausam. 2019. CaRB: A Crowdsourced Benchmark for Open IE. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6263–6268.
- Christensen et al. (2013) Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multi-document summarization. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163–1173, Atlanta, Georgia. Association for Computational Linguistics.
- Del Corro and Gemulla (2013) Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-Based Open Information Extraction. In Proceedings of the International World Wide Web Conferences (WWW), pages 355–366.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171–4186.
- Etzioni et al. (2008) Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68–74.
- Etzioni et al. (2011) Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pages 3–10. AAAI Press.
- Fader et al. (2011) Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of the International Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1535–1545.
- Friedrich et al. (2021) Niklas Friedrich, Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Mathias Niepert, and Goran Glavaš. 2021. AnnIE: An annotation platform for constructing complete open information extraction benchmark. arXiv preprint arXiv:2109.07464.
- Gashteovski et al. (2017) Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. MinIE: Minimizing Facts in Open Information Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2630–2640.
- Gashteovski et al. (2021) Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Goran Glavaš, and Mathias Niepert. 2021. BenchIE: Open information extraction evaluation based on facts, not tokens. arXiv preprint arXiv:2109.06850.
- Khot et al. (2017) Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open information extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 311–316, Vancouver, Canada. Association for Computational Linguistics.
- Kolluru et al. (2020a) Keshav Kolluru, Vaibhav Adlakha, Samarth Aggarwal, and Soumen Chakrabarti. 2020a. OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3748–3761.
- Kolluru et al. (2020b) Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam, and Soumen Chakrabarti. 2020b. IMoJIE: Iterative Memory-Based Joint Open Information Extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 5871–5886.
- Kumar et al. (2008) Anurag Kumar, D Manjunath, and Joy Kuri. 2008. Wireless networking. Elsevier.
- Lauscher et al. (2019) Anne Lauscher, Yide Song, and Kiril Gashteovski. 2019. MinScIE: Citation-centered Open Information Extraction. In 2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 386–387. IEEE.
- Mausam et al. (2012) Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534, Jeju Island, Korea. Association for Computational Linguistics.
- Mausam (2016) Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI’16, pages 4074–4077. AAAI Press.
- Nimishakavi et al. (2016) Madhav Nimishakavi, Uday Singh Saini, and Partha Talukdar. 2016. Relation schema induction using tensor factorization with side information. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 414–423, Austin, Texas. Association for Computational Linguistics.
- Nivre et al. (2016) Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 1659–1666.
- Ro et al. (2020) Youngbin Ro, Yukyung Lee, and Pilsung Kang. 2020. Multi^2OIE: Multilingual open information extraction based on multi-head attention with BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1107–1117, Online. Association for Computational Linguistics.
- Roy et al. (2019) Arpita Roy, Youngja Park, Taesung Lee, and Shimei Pan. 2019. Supervising unsupervised open information extraction models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 728–737, Hong Kong, China. Association for Computational Linguistics.
- Saha and Mausam (2018) Swarnadeep Saha and Mausam. 2018. Open information extraction from conjunctive sentences. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2288–2299, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Saha et al. (2017) Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical open IE. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 317–323, Vancouver, Canada. Association for Computational Linguistics.
- Stanovsky et al. (2018) Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised Open Information Extraction. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 885–895.
- Sun et al. (2018) Mingming Sun, Xu Li, Xin Wang, M. Fan, Y. Feng, and P. Li. 2018. Logician: A unified end-to-end neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM).
- Zhan and Zhao (2020) Junlang Zhan and Hai Zhao. 2020. Span Model for Open Information Extraction on Accurate Corpus. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 9523–9530.