Learning Open Information Extraction of Implicit Relations from Reading Comprehension Datasets
The relationship between two entities in a sentence is often implied by word order and common sense rather than by an explicit predicate. For example, "Fed chair Powell indicates rate hike" clearly implies (Powell, is a, Fed chair) and (Powell, works for, Fed). These tuples are just as significant as the explicit-predicate tuple (Powell, indicates, rate hike), yet traditional Open Information Extraction (OpenIE) systems recall them far less often. We use the term implicit tuples for extractions whose relation is not present in the input sentence. Very little OpenIE training data exists relative to other NLP tasks, and none of it focuses on implicit relations. We develop an open-source, parse-based tool for converting large reading comprehension datasets to OpenIE datasets and release a dataset 35x larger, by sentence count, than any previously available. A baseline neural model trained on this data outperforms previous methods on the implicit extraction task.
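To make the conversion idea concrete, here is a minimal sketch of how a reading-comprehension QA pair could be turned into an OpenIE-style tuple with a dependency parse. This is purely illustrative and assumes spaCy with the `en_core_web_sm` model; the heuristics, function name `qa_to_tuple`, and example inputs are assumptions, not the authors' released tool.

```python
# Illustrative sketch only: the paper's parse-based converter is described, not
# shown, in the abstract, so the heuristics below are assumptions.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def qa_to_tuple(question: str, answer: str):
    """Convert a reading-comprehension QA pair into a rough OpenIE-style tuple.

    Heuristic: the question's root verb (plus attached particles/prepositions)
    becomes the relation, the first non-wh noun chunk becomes one argument,
    and the answer span becomes the other argument.
    """
    doc = nlp(question)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    # Relation = root verb plus any particle or preposition attached to it.
    relation = " ".join([root.text] + [c.text for c in root.children
                                       if c.dep_ in ("prt", "prep")])
    # Argument = the first noun chunk that is not the wh-word itself.
    arg = next((nc.text for nc in doc.noun_chunks
                if nc.root.tag_ not in ("WP", "WDT")), "")
    return (answer, relation, arg)

# Example (approximate output): ("Powell", "works for", "the Fed")
print(qa_to_tuple("Who works for the Fed?", "Powell"))
```

Because the relation comes from the question rather than the source sentence, tuples produced this way can carry relations that never appear in the original text, which is exactly the implicit-relation case the paper targets.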