Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text, e.g., “Fed chair Powell indicates rate hike” yields (Powell, indicates, rate hike). The modifier "open" contrasts with IE research in which the relation belongs to a fixed set. OpenIE has proven useful for several downstream applications, such as knowledge base construction Wities et al. (2017), textual entailment Berant et al. (2011), and other natural language understanding tasks Stanovsky et al. (2015). In our previous example, an extraction is missing: (Powell, works for, Fed). Implicit extractions are our term for this type of tuple, where the relation (“works for” in this example) is not contained in the input sentence. In both colloquial and formal language, many relations are evident without being explicitly stated. Yet despite their pervasiveness, there has been no prior work targeting implicit predicates in the general case. Extractors have been developed for specific implicit relations, such as noun-mediated relations and numerical relations Pal and Mausam (2016); Saha et al. (2017); Saha and Mausam (2018). While such specific extractors are important, implicit relation types are so numerous that categorizing them and designing an extractor for each would be intractable.
Past general OpenIE systems have been plagued by low recall on implicit relations Stanovsky et al. (2018). In OpenIE’s original application – web-scale knowledge base construction – this low recall is tolerable because facts are often restated in many ways Banko et al. (2007). However, in downstream NLU applications an implied relationship may be significant and only stated once Stanovsky et al. (2015).
The contribution of this work is twofold. In Section 4, we introduce our parse-based conversion tool and convert two large reading comprehension datasets into implicit OpenIE datasets. In Section 5 and 6, we train a simple neural model on this data and compare to previous systems on precision-recall curves using a new gold test set for implicit tuples.
2 Problem Statement
We suggest that OpenIE research focus on producing implicit relations where the predicate is not contained in the input span. Formally, we define implicit tuples as (subject, relation, object) tuples that:
Have a subject and an object, each a word or phrase contained in the input sentence.
Have relation token(s) entailed by the sentence but not contained in it.
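The definition above can be sketched as a simple membership check. This is a simplification: it treats containment as substring matching rather than span or entailment matching, and all names are illustrative.

```python
def is_implicit(tup, sentence):
    """True iff (subject, relation, object) is implicit for the sentence:
    subject and object appear in the sentence, the relation does not."""
    subj, rel, obj = (part.lower() for part in tup)
    sent = sentence.lower()
    if subj not in sent or obj not in sent:
        return False          # subject/object must come from the sentence
    return rel not in sent    # relation must NOT appear in the sentence

print(is_implicit(("Powell", "works for", "Fed"),
                  "Fed chair Powell indicates rate hike"))  # True (implicit)
print(is_implicit(("Powell", "indicates", "rate hike"),
                  "Fed chair Powell indicates rate hike"))  # False (explicit)
```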
These “implicit” or “common sense” tuples make the relation explicit, which may be important for downstream NLU applications using OpenIE as an intermediate schema. For example, in Figure 1, the input sentence tells us that the Norsemen swore fealty to Charles III under “their leader Rollo”. From this our model outputs (The Norse leader, was, Rollo), despite the relation never appearing in the input sentence. Our definition of implicit tuples corresponds to the “frequently occurring recall errors” identified in previous OpenIE systems Stanovsky et al. (2018): noun-mediated, sentence-level inference, long sentence, nominalization, noisy informal, and PP-attachment. We use the term implicit tuple to collectively refer to all of these situations where the predicate is absent or heavily obfuscated.
3 Related Work
3.1 Traditional Methods
Due to space constraints, see Niklaus et al. Niklaus et al. (2018) for a survey of non-neural methods. Of these, several works have focused on pattern-based implicit information extractors for noun-mediated relations, numerical relations, and others Pal and Mausam (2016); Saha et al. (2017); Saha and Mausam (2018). In this work we compare to OpenIE-4 (https://github.com/knowitall/openie), ClausIE Corro and Gemulla (2013), ReVerb Fader et al. (2011), OLLIE Mausam et al. (2012), Stanford OpenIE Angeli et al. (2015), and PropS Stanovsky et al. (2016).
3.2 Neural Network Methods
Stanovsky et al. Stanovsky et al. (2018) frame OpenIE as a BIO-tagging problem and train an LSTM to tag an input sentence. Tuples can then be derived from the tagger output, the input sentence, and a BIO CFG parser. This method outperforms traditional systems, though the tagging scheme inherently constrains relations to be part of the input sentence, prohibiting implicit relation extraction. Cui et al. Cui et al. (2018) bootstrap (sentence, tuple) pairs from OpenIE-4 and train a standard seq2seq-with-attention model using OpenNMT-py Klein et al. (2017). The system is inhibited by its synthetic training data, which is bootstrapped from a rule-based system.
3.3 Dataset Conversion Methods
Due to the lack of large datasets for OpenIE, previous works have focused on generating datasets from other tasks. These have included QA-SRL datasets Stanovsky and Dagan (2016) and QAMR datasets Stanovsky et al. (2018). These methods are limited by the size of the source training data, which is an order of magnitude smaller than existing reading comprehension datasets.
4 Dataset Conversion Method
Span-based question-answer datasets are a type of reading comprehension dataset in which each entry consists of a short passage, a question about the passage, and an answer contained in the passage. The datasets used in this work are the Stanford Question Answering Dataset (SQuAD v1.1) Rajpurkar et al. (2016) and NewsQA Trischler et al. (2017). These QA datasets were built to require reasoning beyond simple pattern recognition, which is exactly what we desire for implicit OpenIE. Our goal is to convert the QA schema to OpenIE, as was successfully done for NLI Demszky et al. (2018). The repository of software and converted datasets is available at http://toAppear.
4.1 QA Pairs to OpenIE Tuples
We started by examining SQuAD and noticing that each answer corresponds to either the subject, relation, or object of an implicit extraction. The corresponding question contains the other two parts, i.e. either the (1) subject and relation, (2) subject and object, or (3) relation and object. Which two pieces the question contains depends on the question type. For example, “who was… factoid” questions contain the relation (“was”) and the object (the factoid), which means the answer is the subject. In Figure 1, “Who was Rollo” is recognized as a who was question and caught by the whoParse() parser. Similarly, a question of the form “When did person do action” expresses a subject and a relation, with the answer supplying the object. For example, “When did Einstein emigrate to the US” with answer 1933 would convert to (Einstein, when did emigrate to the US, 1933). In cases like these the relation may not be grammatically ideal, but it nevertheless captures the meaning of the input sentence.
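The slot-filling logic above can be illustrated with a toy converter. This is a hypothetical sketch using plain string matching; the actual tool uses dependency-parse rules such as whoParse().

```python
def convert(question, answer):
    """Toy QA-to-tuple conversion for two question shapes.

    'Who was <X>?'           -> question gives relation + object; answer = subject
    'When did <S> <action>?' -> question gives subject + relation; answer = object
    """
    q = question.rstrip("?").strip()
    if q.lower().startswith("who was "):
        obj = q[len("Who was "):]
        return (answer, "was", obj)                      # answer fills the subject slot
    if q.lower().startswith("when did "):
        subject, action = q[len("When did "):].split(" ", 1)
        return (subject, "when did " + action, answer)   # answer fills the object slot
    return None

print(convert("Who was Rollo?", "The Norse leader"))
# -> ('The Norse leader', 'was', 'Rollo')
print(convert("When did Einstein emigrate to the US?", "1933"))
# -> ('Einstein', 'when did emigrate to the US', '1933')
```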
In order to identify generic patterns, we build our parse-based tool on top of a dependency parser Honnibal and Johnson (2015). It uses fifteen rules, with the proper rule being identified and run based on the question type. The rule then uses its pre-specified pattern to parse the input QA pair and output a tuple. These fifteen rules are certainly not exhaustive, but they cover around eighty percent of the inputs. The tool ignores questions longer than 60 characters, as well as complex questions it cannot parse, leaving a dataset smaller than the original (see Table 1).
Each rule averages forty lines of code that traverse a dependency parse tree according to a pre-specified pattern, extracting the matching spans at each step. A master function parse() determines which rule to apply based on the question type, categorized by nsubj presence and question word (who/what/etc.). Most questions contain an nsubj, which simplifies parsing since it will also be the subject of the tuple. We allow the master parse() method to try multiple rules: it first tries very specific rules (e.g. a parser for how questions where no subject is identified), then falls back to more generic rules. If no output is returned after all the rules are tried, we discard the QA pair. Otherwise, we find the appropriate sentence in the passage based on the answer index.
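The dispatch-and-fallback behavior of the master parse() function might be sketched as follows. The rule bodies are hypothetical stand-ins for the fifteen dependency-parse rules; only the control flow, the 60-character filter, and the specific-before-generic ordering come from the text.

```python
MAX_QUESTION_LEN = 60  # questions longer than this are ignored

def who_was_rule(question, answer):
    """Specific rule: 'Who was <X>?' -> answer is the subject."""
    q = question.rstrip("?")
    if q.lower().startswith("who was "):
        return (answer, "was", q[len("Who was "):])
    return None

def generic_what_rule(question, answer):
    """More generic fallback: 'What is <X>?' -> answer is the object."""
    q = question.rstrip("?")
    if q.lower().startswith("what is "):
        return (q[len("What is "):], "is", answer)
    return None

RULES = [who_was_rule, generic_what_rule]  # specific rules before generic ones

def parse(question, answer):
    """Try rules in order; return the first tuple, else None (pair discarded)."""
    if len(question) > MAX_QUESTION_LEN:
        return None
    for rule in RULES:
        tup = rule(question, answer)
        if tup is not None:
            return tup
    return None

print(parse("Who was Rollo?", "The Norse leader"))
# -> ('The Norse leader', 'was', 'Rollo')
print(parse("Why is the sky blue?", "scattering"))  # no rule matches -> None
```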
4.2 Sentence Alignment
Following QA-to-tuple conversion, the tuple must be aligned with a sentence in the input passage. We segment the passage into sentences using periods as delimiters, and take the sentence containing the answer as the input sentence for the tuple. Output sentences predominantly align with their tuples, but some exhibit partial misalignment on multi-sentence reasoning questions. Since 13.6% of questions require multi-sentence reasoning, this is an upper bound on the number of partially misaligned tuples/sentences Rajpurkar et al. (2016). While heuristics could be used to check alignment, we did not find a significant number of misalignments and so left them in the corpus. Figure 1 demonstrates the conversion process.
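The period-delimited alignment step can be sketched as a running-offset scan. This is a simplification (splitting on periods mis-handles abbreviations, as any period-based segmenter does), and the function names are ours.

```python
def align(passage, answer_start):
    """Return the period-delimited sentence containing the answer offset."""
    start = 0
    for chunk in passage.split("."):
        end = start + len(chunk) + 1   # +1 for the consumed period
        if answer_start < end:
            return chunk.strip()
        start = end
    return None

passage = "Rollo was a Viking. He became the first ruler of Normandy."
print(align(passage, passage.index("Normandy")))
# -> 'He became the first ruler of Normandy'
```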
4.3 Tuple Examination
Examining a random subset of one hundred generated tuples from the combined dataset, we find 12 noun-mediated, 33 sentence-level inference, 11 long sentence, 7 nominalization, 0 noisy informal, 3 PP-attachment, 24 explicit, and 10 partially misaligned. With 66% implicit relations, this dataset shows promise for improving OpenIE’s recall on implicit relations.
5 Our model
Our implicit OpenIE extractor is implemented as a sequence-to-sequence model with attention Bahdanau et al. (2014). We use a 2-layer LSTM Hochreiter and Schmidhuber (1997) encoder/decoder with 500 parameters, general attention, an SGD optimizer with an adaptive learning rate, and 0.33 dropout. The training objective is to maximize the likelihood of the output tuple given the input sentence. If a sentence has multiple extractions, it appears in the dataset once per output tuple. At test time, beam search is used for decoding to produce the top-10 outputs, each with an associated log-likelihood value (used to generate the precision-recall curves in Section 7).
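The one-training-row-per-extraction setup can be sketched as follows. The linearization markers `<s>`/`<r>`/`<o>` are our assumption; the text does not specify the target format.

```python
def make_training_rows(sentence, tuples):
    """One (source, target) row per extraction; a sentence with k tuples
    appears k times, once per linearized target tuple."""
    rows = []
    for subj, rel, obj in tuples:
        target = f"<s> {subj} <r> {rel} <o> {obj}"  # assumed linearization
        rows.append((sentence, target))
    return rows

rows = make_training_rows(
    "Fed chair Powell indicates rate hike",
    [("Powell", "indicates", "rate hike"),
     ("Powell", "works for", "Fed")])
print(len(rows))      # 2: the sentence repeats once per tuple
print(rows[1][1])     # <s> Powell <r> works for <o> Fed
```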
Table 1: Source data, with counts of sentences, train tuples, and validation tuples.
6 Evaluation

We make use of the evaluation tool developed by Stanovsky and Dagan Stanovsky and Dagan (2016) to test the precision and recall of our model against previous methods. We make two changes to the tool, described below.
6.1 Creating a Gold Dataset
The test corpus contained no implicit data, so we re-annotate 300 tuples from the CoNLL-2009 English training data to use as gold data. Each author annotated a different set of sentences and then pruned the other's set to ensure only implicit relations remained. Note that this is a different dataset from our training data and so should be a good test of generalizability: the training data consists of Wikipedia and news articles, while the test data resembles corporate press-release headlines.
6.2 Matching function for implicit tuples
We implement a new matching function (i.e. the function that decides whether a generated tuple matches a gold tuple). The included matching functions use BoW overlap or BLEU, which are not appropriate for implicit relations; our goal is to assess whether the meaning of the predicted tuple matches the gold, not only its tokens. For example, if the gold relation is “is employed by”, we want to accept “works for”. We therefore compute the cosine similarity of the subject, relation, and object embeddings against the gold tuple; all three must exceed a threshold for the pair to count as a match. The sequence embeddings are computed by averaging the GloVe embeddings of each word (i.e. a BoW embedding) Pennington et al. (2014).
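The matching function can be sketched with toy vectors standing in for GloVe. The 0.7 threshold and the 2-dimensional vectors are illustrative assumptions; real GloVe embeddings are 50 to 300 dimensional.

```python
import numpy as np

# Toy 2-d vectors standing in for GloVe embeddings.
VECS = {"powell": np.array([1.0, 0.0]),
        "works":  np.array([0.0, 1.0]),
        "for":    np.array([1.0, 1.0]),
        "fed":    np.array([0.9, 0.1])}

def bow_embed(phrase, vectors):
    """Average the word vectors (BoW embedding); unknown words are skipped."""
    vecs = [vectors[w] for w in phrase.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def tuples_match(pred, gold, vectors, threshold=0.7):
    """All three parts (subject, relation, object) must clear the threshold."""
    return all(cosine(bow_embed(p, vectors), bow_embed(g, vectors)) >= threshold
               for p, g in zip(pred, gold))

print(tuples_match(("Powell", "works for", "Fed"),
                   ("Powell", "works for", "Fed"), VECS))  # True
```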
7 Results

The results on our implicit corpus are shown in Figure 2 (our method in blue). For continuity with prior work, we also compare our model on the original corpus, using our new matching function, in Figure 3.
Our model outperforms at every point on the implicit-tuples PR curve, accomplishing our goal of increasing recall on implicit relations. Our system performs poorly on explicit tuples, as we would expect given our training data. We tried creating a multi-task model, but found that it learned to produce either implicit or explicit tuples, not both. A single multi-task network would be ideal, though it is sufficient for production systems to use both systems in tandem.
8 Conclusion

We created a large training corpus for implicit OpenIE extractors based on SQuAD and NewsQA, trained a baseline on this dataset, and presented promising results on implicit extraction. We see this as part of a larger body of work on text-representation schemes that aim to represent meaning in a more structured form than free text. Implicit information extraction goes further than traditional OpenIE to elicit relations not contained in the original free text. This allows maximally-shortened tuples in which common sense relations are made explicit. Our model should improve further as more QA datasets are released and converted to OpenIE data using our conversion tool.
- Angeli et al. (2015) Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
- Banko et al. (2007) Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew G Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI.
- Berant et al. (2011) Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In ACL.
- Corro and Gemulla (2013) Luciano Del Corro and Rainer Gemulla. 2013. Clausie: clause-based open information extraction. In WWW.
- Cui et al. (2018) Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In ACL.
- Demszky et al. (2018) Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. CoRR, abs/1809.02922.
- Fader et al. (2011) Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In EMNLP.
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735–1780.
- Honnibal and Johnson (2015) Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In EMNLP.
- Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Josep Maria Crego, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In ACL.
- Mausam et al. (2012) Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL).
- Niklaus et al. (2018) Christina Niklaus, Matthias Cetto, André Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In COLING.
- Pal and Mausam (2016) Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal open ie. In AKBC@NAACL-HLT.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
- Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
- Saha and Mausam (2018) Swarnadeep Saha and Mausam. 2018. Open information extraction from conjunctive sentences. In COLING.
- Saha et al. (2017) Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical open ie. In ACL.
- Stanovsky and Dagan (2016) Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In EMNLP.
- Stanovsky et al. (2015) Gabriel Stanovsky, Ido Dagan, and Mausam. 2015. Open ie as an intermediate structure for semantic tasks. In ACL.
- Stanovsky et al. (2016) Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with props. CoRR, abs/1603.01648.
- Stanovsky et al. (2018) Gabriel Stanovsky, Julian Michael, Luke S. Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In NAACL-HLT.
- Trischler et al. (2017) Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In Rep4NLP@ACL.
- Wities et al. (2017) Rachel Wities, Vered Shwartz, Gabriel Stanovsky, Meni Adler, Ori Shapira, Shyam Upadhyay, Dan Roth, Eugenio Martinez Camara, Iryna Gurevych, and Ido Dagan. 2017. A consolidated open knowledge representation for multiple texts. In LSDSem.