Logician: A Unified End-to-End Neural Approach for Open-Domain Information Extraction

04/29/2019
by   Mingming Sun, et al.
Baidu, Inc.

In this paper, we consider the problem of open information extraction (OIE): extracting entity- and relation-level intermediate structures from open-domain sentences. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set containing more than forty thousand sentences and the corresponding facts in the SAOKE format, labeled by crowdsourcing. To our knowledge, this is the largest publicly available human-labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model under the sequence-to-sequence paradigm, called Logician, to transform sentences into facts. Unlike existing algorithms, which generally extract each single fact without considering other possible facts, Logician performs a global optimization over all facts involved in a sentence, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open-domain relation extraction tasks reveals the consistent superiority of Logician over state-of-the-art algorithms. The experiments verify the reasonableness of the SAOKE format, the value of the SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of applying the end-to-end learning paradigm on supervised data sets to the challenging task of open information extraction.


1. Introduction

Semantic applications typically work on the basis of intermediate structures derived from sentences. Traditional word-level intermediate structures, such as POS tags, dependency trees and semantic role labels, have been widely applied. Recently, entity- and relation-level intermediate structures have attracted increasing attention.

In general, knowledge-based applications require entity- and relation-level information. For instance, in (Riedel2013), the lexicalized dependency path between two entity mentions was taken as the surface pattern of facts. In distant supervision (Mintz2009), the word sequence and dependency path between two entity mentions were taken as evidence of certain relations. In Probase (Wu2012), candidate taxonomies were extracted by Hearst patterns (Hearst1992). The surface patterns of relations extracted by Open Information Extraction (OIE) systems (Banko2007a; Etzioni2011; Schmitz2012; Fu2010; Fader2011) served as the source of question answering systems (Khot2017; Fader2014). In addition, entity- and relation-level intermediate structures have proven effective in many other tasks, such as text summarization (Christensen2013; Christensen2014; Mausam2016), text comprehension, word similarity, and word analogy (Stanovsky2015).

The task of entity/relation-level intermediate structure extraction studies how facts about entities and relations are expressed by natural language in sentences, and then expresses these facts in an intermediate (and convenient) format. Although entity/relation-level intermediate structures have been utilized in many applications, the study of learning these structures is still at an early stage.

Firstly, the problem of extracting different types of entity/relation-level intermediate structures has not been treated in a unified fashion. Applications generally construct their own handcrafted heuristics to extract the required entity/relation-level intermediate structures, rather than consulting a commonly available NLP component, as they do for word-level intermediate structures. The Open IE-v4 system (http://knowitall.github.io/openie/) attempted to build such a component by developing two sub-systems, each extracting one type of intermediate structure: SRLIE (Christensen2011) for verb-based relations, and ReNoun (Pal2016; Yahya2014) for nominal attributes. However, important information such as descriptive tags for entities and concept-instance relations between entities was not considered.

Secondly, existing solutions to the task either use pattern matching techniques (Wu2012; Banko2007a; Schmitz2012; Fu2010), or are trained in a self-supervised manner on data sets automatically generated by heuristic patterns or info-box matching (Fu2010; Banko2007a; Fader2011). It is well understood that pattern matching typically does not generalize well, and that automatically generated samples may contain a large amount of noise.

This paper aims at tackling some of the well-known challenges of OIE systems in a supervised, end-to-end deep learning paradigm. Our contribution can be summarized as three major components: the SAOKE format, the SAOKE data set, and Logician.

Symbol Aided Open Knowledge Expression (SAOKE) is a knowledge expression form with several desirable properties: (i) SAOKE is literally honest and open-domain. Following the philosophy of OIE systems, SAOKE uses words in the original sentence to express knowledge. (ii) SAOKE provides a unified view over four common types of knowledge: relation, attribute, description and concept. (iii) SAOKE is an accurate expression. With the aid of a symbolic system, SAOKE is able to accurately express facts with separated relation phrases, missing information, hidden information, etc.

SAOKE Data Set is a human-annotated data set containing 48,248 Chinese sentences and the corresponding facts in the SAOKE form. We publish the data set for research purposes. To the best of our knowledge, this is the largest publicly available human-annotated data set for open-domain information extraction tasks.

Logician is a supervised end-to-end neural learning algorithm which transforms natural language sentences into facts in the SAOKE form. Logician is trained under the attention-based sequence-to-sequence paradigm, with three mechanisms: a restricted copy mechanism to ensure literal honesty, a coverage mechanism to alleviate under-extraction and over-extraction, and a gated dependency attention mechanism to incorporate dependency information. Experimental results on four types of open information extraction tasks reveal the superiority of the Logician algorithm.

Our work will demonstrate that the SAOKE format is suitable for expressing various types of knowledge and is friendly to end-to-end learning algorithms. In particular, we will focus on showing that supervised end-to-end learning is promising for OIE tasks of extracting entity- and relation-level intermediate structures.

The rest of this paper is organized as follows. Section 2 presents the details of SAOKE. Section 3 describes the human labeled SAOKE data set. Section 4 describes the Logician algorithm and Section 5 evaluates the Logician algorithm and compares its performance with the state-of-the-art algorithms on four OIE tasks. Section 6 discusses the related work and Section 7 concludes the paper.

2. SAOKE Format:  Symbol Aided Open Knowledge Expression

Sentence (Chinese): 李白(701年-762年), 深受庄子思想影响,爽朗大方, 爱饮酒作诗,喜交友, 代表作有《望庐山瀑布》等著名诗歌。
Sentence (English): Li Bai (701 - 762), with masterpieces of famous poems such as "Watching the Lushan Waterfall", was deeply influenced by Zhuangzi's thought, hearty and generous, loved to drink and write poetry, and liked to make friends.
Relation: (李白, 深受X影响, 庄子思想) (李白, 爱, [饮酒|作诗]) (李白, 喜, 交友)
  English: (Li Bai, deeply influenced by, Zhuangzi's thought) (Li Bai, loved to, [drink|write poetry]) (Li Bai, liked to, make friends)
Attribute: (李白, BIRTH, 701年) (李白, DEATH, 762年) (李白, 代表作, 《望庐山瀑布》)
  English: (Li Bai, BIRTH, 701) (Li Bai, DEATH, 762) (Li Bai, masterpiece, "Watching the Lushan Waterfall")
Description: (李白, DESC, 爽朗大方)
  English: (Li Bai, DESC, hearty and generous)
Concept: (《望庐山瀑布》, ISA, 著名诗歌)
  English: ("Watching the Lushan Waterfall", ISA, famous poetry)
Table 1. Expected facts of an example sentence.

When reading a sentence in natural language, humans are able to recognize the facts involved and accurately express them. In this paper, Symbol Aided Open Knowledge Expression (SAOKE) is proposed as a form for honestly recording these facts. SAOKE expresses the primary information of sentences in n-ary tuples, and (in this paper) neglects some auxiliary information. In the design of SAOKE, we take four requirements into consideration: completeness, accurateness, atomicity and compactness.

2.1. Completeness

After analyzing a large number of sentences, we observe that the majority of facts can be classified into the following classes:

  1. Relation: verb/preposition-based n-ary relations between entity mentions (Christensen2011; Schmitz2012);

  2. Attribute: nominal attributes of entity mentions (Pal2016; Yahya2014);

  3. Description: descriptive phrases of entity mentions (Chakrabarti2011);

  4. Concept: hyponymy and synonymy relations among concepts and instances (VeredShwartz2016a).

SAOKE is designed to express all four types of facts. Table 1 presents an example sentence and the involved facts of these four classes in the SAOKE form. We should mention that the English sentences and facts are directly translated from the corresponding Chinese ones; the English facts may not be the desired outputs of OIE algorithms for those English sentences, due to the differences between the Chinese and English languages.

2.2. Accurateness

SAOKE adopts the principle of being "literally honest": as much as possible, it uses words from the original sentence to express the facts. SAOKE follows the philosophy of OIE systems and expresses various relations without relying on any predefined schema system. There are, however, exceptional situations that are beyond the expressive ability of this basic format. Extra symbols are introduced to handle these situations, as explained below.

Separated relation phrase: In some languages such as Chinese, a relation phrase may be divided into several parts residing at discontinuous locations of the sentence. To accurately express such relation phrases, we add placeholders (X, Y, etc.) to build continuous and complete expressions. "深受X影响" ("deeply influenced by X" in English) in the example of Table 1 is an instance of a relation phrase after such processing.

Abbreviated expression: We explicitly express the information in abbreviated expressions by introducing symbolic predicates. For example, the expression of “Person (birth date - death date)” is transformed into facts: (Person, BIRTH, birth date) (Person, DEATH, death date), and the synonym fact involved in “NBA (National Basketball Association)” is expressed in the form of (NBA, = , National Basketball Association) .
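As an illustration, the lifespan pattern above can be normalized mechanically. The following sketch is our own hypothetical helper, not part of any SAOKE tooling, and its regex is an assumption covering only the parenthesized "birth-death" style shown in Table 1:

```python
import re

# Matches "Entity(birth-death)"; ent is lazy so it stops at the first "(".
LIFESPAN = re.compile(r"^(?P<ent>.+?)\((?P<birth>[^-()]+)-(?P<death>[^-()]+)\)$")

def expand_lifespan(expr: str):
    """Expand e.g. '李白(701年-762年)' into symbolic BIRTH/DEATH facts."""
    m = LIFESPAN.match(expr)
    if not m:
        return []
    ent = m.group("ent")
    return [(ent, "BIRTH", m.group("birth").strip()),
            (ent, "DEATH", m.group("death").strip())]

print(expand_lifespan("李白(701年-762年)"))
# → [('李白', 'BIRTH', '701年'), ('李白', 'DEATH', '762年')]
```

The synonym pattern "NBA (National Basketball Association)" could be handled analogously with a second rule emitting an "=" fact.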

Hidden information: Descriptions of an entity and hyponymy relations between entities are generally expressed implicitly in sentences; SAOKE expresses them with the symbolic predicates "DESC" and "ISA" respectively, as in Table 1. Another source of hidden information is the address expression. For example, "法国巴黎" ("Paris, France" in English) implies the fact (巴黎, LOC, 法国) ((Paris, LOC, France) in English), where the symbol "LOC" means "location".

Missing information: A sentence may not tell us the exact relation between two entities, or the exact subjects/objects of a relation, which must be inferred from the context. We use placeholders to denote the missing subjects/objects, and the symbol "P" to denote the missing predicates.

2.3. Atomicity

Atomicity is introduced to eliminate the ambiguity of knowledge expressions. In SAOKE format, each fact is required to be atomic, which means that: (i) it is self-contained for an accurate expression; (ii) it cannot be decomposed into multiple valid facts. We provide examples in Table 2 to help understand these two criteria.

Note that the second criterion implies that any logical connections (including nested expressions) between facts are neglected (e.g., the third case in Table 2). The problem of expressing relations between facts will be considered in a future version of SAOKE.

Example 1
  Sentence: 山东的GDP高于所有西部的省份。 (Shandong's GDP is higher than that of all western provinces.)
  Wrong facts: (山东的GDP, 高于, 所有省份) (所有省份, DESC, 西部的) / (Shandong's GDP, is higher than, all provinces) (all provinces, DESC, western)
  Correct facts: (山东的GDP, 高于, 西部的所有省份) / (Shandong's GDP, is higher than, all western provinces)
Example 2
  Sentence: 李白游览了雄奇灵秀的泰山。 (Li Bai visited the magnificent Mount Tai.)
  Wrong facts: (李白, 游览, 雄奇灵秀的泰山) / (Li Bai, visited, the magnificent Mount Tai)
  Correct facts: (李白, 游览, 泰山) (泰山, DESC, 雄奇灵秀) / (Li Bai, visited, Mount Tai) (Mount Tai, DESC, magnificent)
Example 3
  Sentence: 在美国的帮助下,英国抵挡住了德国的进攻。 (With the help of the US, the British resisted the attack from Germany.)
  Wrong facts: (英国, 在X的帮助下抵挡住了Y的进攻, 美国, 德国) / (the British, resisted the attack from X with the help of Y, Germany, the US)
  Correct facts: (美国, 帮助, 英国) (英国, 抵挡住了X的进攻, 德国) / (the US, helped, the British) (the British, resisted the attack from X, Germany)
Table 2. Example sentences and corresponding wrong/correct facts.

2.4. Compactness

Natural language may express several facts in a compact form. For example, in the sentence "李白爱饮酒作诗" ("Li Bai loved to drink and write poetry" in English), according to atomicity, two facts should be extracted: (李白, 爱, 饮酒)(李白, 爱, 作诗) ((Li Bai, loved to, drink)(Li Bai, loved to, write poetry) in English). In this situation, SAOKE adopts a compact expression that merges the two facts into one: (李白, 爱, [饮酒|作诗]) ((Li Bai, loved to, [drink|write poetry]) in English).

The compactness of expressions is introduced to fulfill, not to violate, the rule of being "literally honest": SAOKE does not allow merging facts that are not expressed compactly in the original sentence. By this means, the differences between sentences and their corresponding knowledge expressions are reduced, which may help reduce the complexity of learning from data in SAOKE form.
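A compact expression can be expanded back into its atomic facts mechanically. The sketch below is a hypothetical helper of ours, assuming alternatives are written with the bracket-and-bar syntax shown above:

```python
import re
from itertools import product

def expand_compact(fact):
    """Expand a compact SAOKE fact such as ('李白', '爱', '[饮酒|作诗]')
    into its atomic facts by taking the cross product of alternatives."""
    slots = []
    for elem in fact:
        m = re.fullmatch(r"\[(.+)\]", elem)
        # A bracketed element contributes several choices; others just one.
        slots.append([s.strip() for s in m.group(1).split("|")] if m else [elem])
    return [tuple(choice) for choice in product(*slots)]

print(expand_compact(("李白", "爱", "[饮酒|作诗]")))
# → [('李白', '爱', '饮酒'), ('李白', '爱', '作诗')]
```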

With the above designs, SAOKE is able to express various kinds of facts, each historically considered by a different open information extraction algorithm: for example, verb-based relations in SRLIE (Christensen2011), nominal attributes in ReNoun (Pal2016; Yahya2014), descriptive phrases for entities in EntityTagger (Chakrabarti2011), and hypernyms in HypeNet (VeredShwartz2016a). SAOKE introduces atomicity to eliminate the ambiguity of knowledge expressions, and achieves better accuracy and compactness with the aid of symbolic expressions.

3. SAOKE Data Set

We randomly collect sentences from Baidu Baike (http://baike.baidu.com) and send them to a crowdsourcing company to label the involved facts. The workers are trained with labeling examples and tested with exams; workers with high exam scores are then asked to read and understand the facts in the sentences, and to express the facts in the SAOKE format. During the procedure, each sentence is labeled by only one worker. Finally, more than forty thousand sentences with about one hundred thousand facts are returned to us. Manual evaluation on 100 randomly selected sentences shows that the fact-level precision and recall are 89.5% and 92.2% respectively. Table 3 shows the proportions of the four types of facts (described in Section 2.1) contained in the data set. Note that facts with missing predicates, represented by "P", are classified as "Unknown". We publish the data set at https://ai.baidu.com/broad/subordinate?dataset=saoke.

Type    Relation  Attribute  Description  Concept  Unknown
Ratio   76.02%    7.25%      9.89%        3.64%    3.20%
Table 3. Ratios of facts of each type in the SAOKE data set.

Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in (Stanovsky2016) to evaluate OIE algorithms; it was described (Stanovsky2016) as "13 times larger than the previous largest annotated Open IE corpus". The SAOKE data set is 16 times larger than the data set in (Stanovsky2016). To the best of our knowledge, the SAOKE data set is the largest publicly available human-labeled data set for OIE tasks. Furthermore, the data set released in (Stanovsky2016) was generated from a QA-SRL data set (He2015a), which indicates that it only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms and is thus biased, whereas the SAOKE data set is not biased toward any algorithm. Finally, the SAOKE data set contains sentences and facts from a large number of domains.

4. Logician

Given a sentence S and the set of expected facts (covering all possible types) in SAOKE form, we join all the facts, in the order the annotators wrote them, into a character sequence F as the expected output. We build Logician under the attention-based sequence-to-sequence learning paradigm to transform S into F, together with a restricted copy mechanism, a coverage mechanism and a gated dependency attention mechanism.

4.1. Attention based Sequence-to-sequence Learning

Attention-based sequence-to-sequence learning (DzmitryBahdana2014) has been successfully applied to the task of generating text and patterns. Given an input sentence S = [w^S_1, ..., w^S_{N_S}], a target sequence F = [w^F_1, ..., w^F_{N_F}] and a vocabulary V (including the symbols introduced in Section 2 and the OOV (out of vocabulary) tag) of size |V|, the words w^S_i and w^F_j can be represented as one-hot vectors of dimension |V|, and transformed into N_e-dimensional distributed representation vectors x^S_i and x^F_j by an embedding transform. The sequence of x^S_i is then transformed into a sequence of N_h-dimensional hidden states H^S = [h^S_1, ..., h^S_{N_S}] using a bi-directional GRU (Gated Recurrent Units) network (Cho2014), and the sequence of x^F_j into a sequence of N_h-dimensional hidden states H^F using a GRU network.

For each position t in the target sequence, the decoder learns a dynamic context vector c_t to focus attention on specific locations in the input hidden states H^S, then computes the probability of generated words by p(w^F_t | {w^F_1, ..., w^F_{t-1}}, c_t) = g(w^F_{t-1}, s_t, c_t), where s_t is the hidden state of the GRU decoder and g is the word selection model (details can be found in (DzmitryBahdana2014)). The context vector is computed as c_t = sum_i alpha_{ti} h^S_i, where alpha_{ti} = exp(e_{ti}) / sum_k exp(e_{tk}) and e_{ti} = a(s_{t-1}, h^S_i) = v_a^T tanh(W_a s_{t-1} + U_a h^S_i) is the alignment model measuring the strength of focus on the i-th location. W_a, U_a and v_a are weight matrices.
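The attention step can be sketched numerically. The function below is an illustrative sketch in standard additive-attention notation, not the authors' implementation; all shapes are our assumptions:

```python
import numpy as np

def additive_attention(s_prev, H, Wa, Ua, va):
    """One decoding step of additive (Bahdanau-style) attention:
    scores e_i = va . tanh(Wa s_prev + Ua h_i), weights alpha = softmax(e),
    context c = sum_i alpha_i h_i. Shapes: s_prev (d,), H (n, d)."""
    e = np.tanh(s_prev @ Wa + H @ Ua) @ va   # (n,) alignment scores
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()                      # softmax attention weights
    c = alpha @ H                             # (d,) context vector
    return alpha, c

# Toy usage with random parameters (d = hidden size, n = source length).
rng = np.random.default_rng(0)
d, n = 4, 3
alpha, c = additive_attention(rng.normal(size=d), rng.normal(size=(n, d)),
                              rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                              rng.normal(size=d))
```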

4.2. Restricted Copy Mechanism

The word selection model employed in (DzmitryBahdana2014) selects words from the whole vocabulary V, which evidently violates the "literally honest" requirement of SAOKE. We propose a restricted version of the copy mechanism (Gu2016) as the word selection model for Logician.

We collect the symbols introduced in Section 2 into a keyword set K: the tuple delimiters "(" and ")", the separator of elements of fact tuples, the symbolic predicates (such as "DESC", "ISA", "BIRTH", "DEATH", "=", "LOC" and "P"), and the placeholders. When the decoder is considering generating a word w^F_t, it can choose w^F_t from either S or K:

p(w^F_t | s_t, c_t) = p_X(w^F_t | s_t, c_t) + p_K(w^F_t | s_t, c_t),   (1)

where p_X is the probability of copying from S and p_K is the probability of selecting from K. Since S and K share no words and there are no unknown words in this problem setting, we compute p_X and p_K in a simpler way than that in (Gu2016), as a single softmax over the copy scores of the source words and the selection scores of the keywords, where the two scores share one normalization term Z, the keyword score is computed from the one-hot indicator vector of each keyword, the copy score is computed from the source hidden states h^S_i and the decoder state s_t through weight matrices, and a nonlinear activation function is applied to the copy scores.
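The key point of the computation is that copy scores and keyword scores share one normalization term, i.e., a single softmax over their concatenation. A small sketch of ours, with the score computation itself abstracted away:

```python
import numpy as np

def restricted_copy_probs(copy_scores, keyword_scores):
    """Normalize copy scores (one per source word) and keyword scores
    (one per keyword) with a single shared softmax, so that copying
    and keyword selection compete for the same probability mass."""
    scores = np.concatenate([copy_scores, keyword_scores])
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p[:len(copy_scores)], p[len(copy_scores):]

# Two source-word scores and one keyword score.
p_copy, p_kw = restricted_copy_probs(np.array([1.0, 0.5]), np.array([0.2]))
```

Because the output distribution covers only source words and keywords, every generated token is either copied from the sentence or a SAOKE symbol, which is what "literally honest" requires.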

4.3. Coverage Mechanism

In practice, Logician may forget to extract some facts (under-extraction) or extract the same fact many times (over-extraction). We incorporate the coverage mechanism (Tu2016) into Logician to alleviate these problems. Formally, when the decoder considers generating a word w^F_t, a coverage vector m^t_i is introduced for each source word w^S_i, and updated as follows:

m^t_i = mu(m^{t-1}_i, alpha_{ti}, h^S_i, s_{t-1}) = (1 - z_i) ∘ m^{t-1}_i + z_i ∘ m̃^t_i,
m̃^t_i = tanh(W h^S_i + u alpha_{ti} + U [r_i ∘ m^{t-1}_i] + V s_{t-1}),

where ∘ is the element-wise multiplication operator. The update gate z_i and the reset gate r_i are defined as, respectively,

z_i = sigma(W^z h^S_i + u^z alpha_{ti} + U^z m^{t-1}_i + V^z s_{t-1}),
r_i = sigma(W^r h^S_i + u^r alpha_{ti} + U^r m^{t-1}_i + V^r s_{t-1}),

where sigma is a logistic sigmoid function. The coverage vector m^t_i contains the information about the historical attention focused on w^S_i, and is helpful for deciding whether w^S_i should be extracted or not. The alignment model is updated as follows (Tu2016):

e_{ti} = a(s_{t-1}, h^S_i, m^{t-1}_i) = v_a^T tanh(W_a s_{t-1} + U_a h^S_i + V_a m^{t-1}_i),

where V_a is a weight matrix.
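Under the shapes we assume here (coverage, hidden and decoder vectors of dimension d, scalar attention weight), one coverage update step can be sketched as follows; the parameter dict P holds illustrative weight matrices and vectors, not trained values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coverage_update(m_prev, alpha, h, s_prev, P):
    """One GRU-style coverage update for a single source word: gates z and r
    decide how much of the previous coverage m_prev to keep versus overwrite
    with the candidate m_tilde."""
    z = sigmoid(P["Wz"] @ h + P["uz"] * alpha + P["Uz"] @ m_prev + P["Vz"] @ s_prev)
    r = sigmoid(P["Wr"] @ h + P["ur"] * alpha + P["Ur"] @ m_prev + P["Vr"] @ s_prev)
    m_tilde = np.tanh(P["W"] @ h + P["u"] * alpha + P["U"] @ (r * m_prev) + P["V"] @ s_prev)
    return (1.0 - z) * m_prev + z * m_tilde

# Toy usage: random parameters, zero initial coverage.
rng = np.random.default_rng(1)
d = 4
P = {k: rng.normal(size=(d, d)) for k in ["Wz", "Uz", "Vz", "Wr", "Ur", "Vr", "W", "U", "V"]}
P.update({k: rng.normal(size=d) for k in ["uz", "ur", "u"]})
m = coverage_update(np.zeros(d), 0.3, rng.normal(size=d), rng.normal(size=d), P)
```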

4.4. Gated Dependency Attention

The semantic relationship between candidate words and the previously decoded word is valuable to guide the decoder to select the correct word. We introduce the gated dependency attention mechanism to utilize such guidance.

For a sentence S, we extract the dependency tree using NLP tools such as CoreNLP (Manning2014) for English and LTP (Che2010) for Chinese, and convert the tree into a graph by adding, for each edge, a reversed edge with a revised label. Then for each pair of words (w_i, w_j), the shortest labeled path in the graph is computed and mapped into a sequence of N_e-dimensional distributed representation vectors by the embedding operation. One could employ an RNN to convert this sequence of vectors into a feature vector, but the RNN operation is time-consuming. We instead simply concatenate the vectors of short paths (of length at most 3) into a 3N_e-dimensional vector and feed it into a two-layer feed-forward neural network to generate an N_h-dimensional feature vector r_{ij}. For longer paths, r_{ij} is set to a zero vector. We define the dependency attention vector d_t as the expectation of the dependency feature vectors between the candidate source words and the previously decoded word w^F_{t-1}, weighted by the sharpened copy probability defined in Equation (1). If w^F_{t-1} is copied from the sentence, d_t represents the semantic relationship between the candidate words and w^F_{t-1}; if w^F_{t-1} is a keyword, d_t is close to zero. To correctly guide the decoder, we need to gate d_t, so as to remember the previous dependency attention vector sometimes (for example, when a separator is selected), and to forget it sometimes (for example, when a new fact is started). Finally, we define g_t = GRU(g_{t-1}, d_t) as the gated dependency attention vector, where GRU is the gated function, and update the alignment model as follows:

e_{ti} = a(s_{t-1}, h^S_i, m^{t-1}_i, g_t) = v_a^T tanh(W_a s_{t-1} + U_a h^S_i + V_a m^{t-1}_i + D_a g_t),

where D_a is a weight matrix.

4.5. Post processing

For each sequence generated by Logician, we parse it into a set of facts, removing tuples with illegal format as well as duplicated tuples. The resulting set is taken as the output of Logician.
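A minimal sketch of such post-processing (our own illustrative format checks, assuming facts are serialized as parenthesized, comma-separated tuples; the paper's exact checks may differ):

```python
import re

def parse_facts(seq: str):
    """Split a generated char sequence into '(...)' tuples, drop malformed
    ones, and de-duplicate while keeping the original order."""
    facts, seen = [], set()
    for m in re.finditer(r"\(([^()]*)\)", seq):
        parts = tuple(p.strip() for p in m.group(1).split(","))
        if len(parts) < 3 or any(not p for p in parts):
            continue  # illegal format: fewer than 3 elements or an empty slot
        if parts not in seen:
            seen.add(parts)
            facts.append(parts)
    return facts

print(parse_facts("(李白, 喜, 交友)(李白, 喜, 交友)(坏的,)"))
# → [('李白', '喜', '交友')]
```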

5. Empirical Evaluation

5.1. Experimental Design

We first measure the utility of the various components of Logician to select the optimal model, and then compare this model with state-of-the-art methods on four types of information extraction tasks: verb/preposition-based relations, nominal attributes, descriptive phrases and hyponymy relations. The SAOKE data set is split into training, validation and testing sets with ratios of 80%, 10% and 10%, respectively. For all algorithms involved in the experiments, the training set is used to train the model, the validation set to select the optimal model, and the testing set to evaluate performance.

5.1.1. Evaluation Metrics

For each instance pair (S, F) in the test set, where S is the input sentence and F is the formatted string of the ground-truth facts, we parse F into a set of tuples {f_j}. Given an open information extraction algorithm, it reads S and produces a set of tuples {g_i}. To evaluate how well {g_i} approximates {f_j}, we need to match each g_i to a ground-truth fact f_j and check whether g_i tells the same fact as f_j. To conduct the match, we compute the similarity between each predicted fact and each ground-truth fact, then find the optimal matching that maximizes the sum of matched similarities by solving a linear assignment problem (assignment_problem). In this procedure, the similarity between two facts f and g is defined as

Sim(f, g) = (sum_l gamma(f^(l), g^(l))) / max(n(f), n(g)),

where f^(l) and g^(l) denote the l-th elements of the tuples f and g respectively, gamma denotes the gestalt pattern matching (Ratcliff:1988:PMG) measure for two strings, and n returns the length of a tuple.

Given a matched pair f and g, we propose an automatic approach to judge whether they tell the same fact. They are judged as telling the same fact if one of the following two conditions is satisfied:

  • n(f) = n(g), and the gestalt similarity gamma(f^(l), g^(l)) between every pair of corresponding elements exceeds a threshold;

  • the gestalt similarity gamma(Str(f), Str(g)) between the two formatted strings exceeds a threshold;

where Str is a function formatting a fact into a string by filling the arguments into the placeholders of the predicate.
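The gestalt pattern matching measure is what Python's difflib implements, so the similarity and matching steps can be sketched directly. Brute-force assignment below is our simplification, workable for the handful of facts per sentence; the paper solves a linear assignment problem:

```python
from difflib import SequenceMatcher
from itertools import permutations

def g(a: str, b: str) -> float:
    """Gestalt pattern matching, as implemented by difflib's ratio."""
    return SequenceMatcher(None, a, b).ratio()

def sim(f1, f2) -> float:
    """Tuple similarity: element-wise gestalt similarity, normalized
    by the longer tuple length."""
    return sum(g(a, b) for a, b in zip(f1, f2)) / max(len(f1), len(f2))

def best_matching(preds, golds):
    """Brute-force assignment of predicted facts to ground-truth facts,
    maximizing the sum of matched similarities."""
    k = min(len(preds), len(golds))
    best, best_pairs = -1.0, []
    for perm in permutations(range(len(golds)), k):
        pairs = list(zip(range(k), perm))
        score = sum(sim(preds[i], golds[j]) for i, j in pairs)
        if score > best:
            best, best_pairs = score, pairs
    return best_pairs  # list of (pred index, gold index)
```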

With the automatic judgment, the precision (P), recall (R) and F1-score over a test set can be computed. By defining a confidence measure and ordering the facts by their confidences, a precision-recall curve can be drawn to illustrate the overall performance of the algorithm. For Logician, the confidence of a fact is computed as the average of the log probabilities over all words in that fact.

Beyond the automatic judgment, human evaluation is also employed. Given an algorithm and the corresponding fact confidence measure, we find a threshold that produces approximately 10% recall (measured by automatic judgment) on the validation set of the SAOKE data set. A number of sentences (200 for the verb/preposition-based relation extraction task, and 1,000 for the other three tasks) are randomly chosen from the testing set, and the facts extracted from these sentences are filtered with that threshold. Three volunteers are then invited to manually refine the labeled set of facts for each sentence and to vote on whether each filtered fact is correctly involved in the sentence. The standard precision, recall and F1-score are reported as the human evaluation results.

5.1.2. Training the Logician Model

For each instance pair (S, F) in the training set of the SAOKE data set, we split S and F into words using the LTP toolset (Che2010), and words appearing in more than 2 sentences are added to the vocabulary. By adding the OOV (out of vocabulary) tag, we finally obtain the vocabulary V. The dimension of all embedding vectors is set to N_e, and the dimension of hidden states to N_h. We use a three-layer bi-directional GRU with dimension 128 to encode the words of S into the hidden states H^S, and a two-layer GRU with hidden dimension 256 to encode the target sequence into the hidden states H^F. The Logician network is then constructed as stated in Section 4, and trained using stochastic gradient descent (SGD) with the RMSPROP (Hinton2012x) strategy for 20 epochs with batch size 10 on the training set of the SAOKE data set. The model with the best F1-score by automatic judgment on the validation set is selected as the trained model. Once the model is trained, given a sentence, we employ a greedy search procedure to produce the fact sequences.

5.2. Evaluating Components’ Utilities

In this section, we analyze the effects of the components involved in Logician: restricted copy, coverage, and gated dependency. Since the restricted copy mechanism is an essential requirement of Logician to achieve the goal of literal honesty, we take the Logician model with only the restricted copy mechanism as the baseline, and analyze the effectiveness of adding the coverage mechanism, the gated dependency mechanism, and both. Furthermore, there is another option of whether or not to incorporate shallow semantic information, such as POS tags and NER tags, into the model. For models involving such information, the POS tag and NER tag of each word in the sentence S are annotated using LTP. For each word in F that is not a keyword in K, the POS tag and NER tag are copied from the corresponding original word in S; each keyword in K is assigned a unique POS tag and a unique NER tag. Finally, for each word in S or F, the POS tag and NER tag are mapped into distributed representation vectors and concatenated into the corresponding word embedding during training.

All models are trained using the same settings described in the above section, and the default output facts (without any confidence filtering) are evaluated by the automatic judgment. The results are reported in Table 4. From the results, we can see that the model involving all the components and the shallow tag information achieves the best performance. We use that model in the comparisons with existing approaches.

Model                            With Shallow Tag       No Shallow Tag
                                 P      R      F1       P      R      F1
Restricted copy (baseline)       0.375  0.412  0.393    0.378  0.431  0.403
+ coverage                       0.390  0.390  0.390    0.307  0.417  0.353
+ gated dependency               0.427  0.371  0.397    0.335  0.419  0.372
+ coverage + gated dependency    0.443  0.419  0.431    0.373  0.435  0.402
Table 4. Analysis of Components involved in Logician.

5.3. Comparison with Existing Approaches

5.3.1. Verb/preposition-Based Relation Extraction

In the task of extracting verb/preposition based facts, we compare our Logician with the following state-of-the-art Chinese OIE algorithms:

SRLIE: our implementation of SRLIE (Christensen2011) for the Chinese language, which first uses the LTP toolset to extract semantic role labels, and then converts the results into fact tuples using heuristic rules. The confidence of each fact is computed as the ratio of the number of words in the fact to the number of words in the shortest fragment of the source sentence that contains all words in the fact.

ZORE: the Chinese Open Relation Extraction system (Qiu2014), which builds a set of patterns by bootstrapping based on dependency parsing results, and uses the patterns to extract relations. We used the program provided by the authors of the ZORE system (Qiu2014) to generate extraction results in XML format, and developed an algorithm to transform the facts into n-ary tuples, removing the auxiliary information extracted by ZORE. The confidence measure for ZORE is the same as that for SRLIE.

SRL: our implementation of the state-of-the-art SRL algorithm proposed in (He2017), with modifications to fit OIE tasks. It extracts facts in two steps: (i) Predicate head word detection: detect the head word of the predicate of each possible fact, where the head word of a predicate is the last word in the predicate that depends on words outside the predicate in the dependency tree. (ii) Element phrase detection: for each detected head word, detect the subject phrase, predicate phrase and object phrases by tagging the sentence with an extended BIOE tagging scheme, which tags the word neighboring the separation point of a phrase with "M" to cope with separated phrases. We modified the code provided by the authors of (He2017) to implement the above strategy, and then trained a model with the same parameter settings as in (He2017) on the training set of the SAOKE data set. The confidence measure for SRL is computed as the average of the log probabilities over all tags of words in a fact. Note that SRL can extract both verb/preposition-based relations and nominal attributes, but in this section we only evaluate the former type of facts.

The precision-recall curves of Logician and the above three comparison algorithms are shown in Figure 1(a), and the human evaluation results are shown in the first section of Table 5.

Figure 1. Performance comparison on four types of information extraction tasks: (a) verb/preposition-based relation; (b) nominal attribute; (c) descriptive phrase; (d) hyponymy.

5.3.2. Nominal Attribute Extraction

The state-of-the-art nominal attribute extraction method is ReNoun (Pal2016, ; Yahya2014, ). However, it relies on a pre-constructed English attribute schema system (Gupta2014, ) which is not available for Chinese, so it cannot serve as a baseline here. Since SRL can extract nominal attributes, we compare Logician with SRL on this task. The precision-recall curves of Logician and SRL on the nominal attribute extraction task are shown in Figure 1(b), and the human evaluation results are shown in the second section of Table 5.

5.3.3. Descriptive Phrase Extraction

Descriptive phrase extraction has been considered in (Chakrabarti2011, ), where domain names are required to develop patterns for extracting candidate descriptive phrases, so that method is not applicable to open-domain tasks. We developed a baseline algorithm (called Semantic Dependency Description Extractor, SDDE) to extract descriptive phrases. It extracts semantic dependency relations between words using the LTP toolset, and for each noun n which is the parent of some semantic “Desc” relations, identifies a noun phrase NP with n as its head word, assembles a descriptive phrase D containing all words with a “Desc” relation to n, and finally outputs the fact “(NP, DESC, D)”. The confidence of a fact in SDDE is computed as the ratio of the number of adverbs and adjectives in D to the number of words in D. The precision-recall curves of Logician and SDDE on the descriptive phrase extraction task are shown in Figure 1(c), and the human evaluation results are shown in the third section of Table 5.
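The SDDE procedure can be sketched over a generic semantic-dependency parse. This is an illustration, not the actual implementation: the LTP API is replaced by plain lists, coarse illustrative POS labels are used, and the noun itself stands in for the full noun phrase NP:

```python
# A parse is given as a list of (dependent_index, head_index, relation)
# arcs; "Desc" arcs attach descriptive words to a noun.

def sdde_extract(words, pos_tags, arcs):
    facts = []
    for head in range(len(words)):
        # children attached to this word by a "Desc" relation
        desc_children = [d for (d, h, rel) in arcs
                         if h == head and rel == "Desc"]
        if pos_tags[head] == "noun" and desc_children:
            d_words = [words[i] for i in sorted(desc_children)]
            # confidence: fraction of adverbs/adjectives in phrase D
            n_adj_adv = sum(pos_tags[i] in ("adj", "adv")
                            for i in desc_children)
            conf = n_adj_adv / len(desc_children)
            facts.append((words[head], "DESC", d_words, conf))
    return facts

# Toy sentence: both modifiers describe the noun
words = ["calcium-sulfate", "stable", "odorless"]
pos   = ["noun", "adj", "adj"]
arcs  = [(1, 0, "Desc"), (2, 0, "Desc")]
facts = sdde_extract(words, pos, arcs)
# facts == [("calcium-sulfate", "DESC", ["stable", "odorless"], 1.0)]
```

Because the confidence is the fraction of adjectives and adverbs in D, a descriptive phrase made purely of modifiers scores 1.0, while phrases diluted with other word classes score lower.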

5.3.4. Hyponymy Extraction

HypeNet (VeredShwartz2016a, ) is the state-of-the-art algorithm recommended for hyponymy extraction (Wang2017a, ); it judges whether a hyponymy relation exists between two given words. To make it capable of judging the hyponymy relation between two phrases, we replace the word-embedding component of HypeNet with an LSTM network. Two modified HypeNet models are built using different training data sets: (i) HypeNet-Phrase: trained on the pairs of phrases with the ISA relation in the training set of the SAOKE data set (9,407 pairs after compact expression expansion); (ii) HypeNet-Phrase-Extra: trained on the data for HypeNet-Phrase plus two Chinese hyponymy data sets (1.4 million pairs of words in hyponymy relation in total): Tongyici Cilin (Extended) (CilinE for short) (Che2010, ) and cleaned Wikipedia Category data (Li2015, ). In both cases, sentences from Chinese Wikipedia pages and from the training set of the SAOKE data set are taken as the background corpus for the HypeNet algorithm. In the testing phase, the trained models predict whether the hyponymy relation exists for each pair of noun phrases/words in the sentences of the SAOKE test set. The confidence of a judgment is the predicted probability of the existence of the hyponymy relation. The precision-recall curves of Logician and the two HypeNet models are shown in Figure 1(d), and the human evaluation results in the fourth section of Table 5.

Task            Method                 Precision  Recall  F1
1. Relation     SRLIE                  0.166      0.119   0.139
                ZORE                   0.300      0.136   0.187
                SRL                    0.610      0.130   0.214
                Logician               0.883      0.147   0.252
2. Attribute    SRL                    0.362      0.102   0.159
                Logician               0.603      0.126   0.208
3. Description  SDDE                   0.135      0.146   0.140
                Logician               0.392      0.109   0.170
4. Hyponymy     HypeNet-Phrase         0.007      0.016   0.010
                HypeNet-Phrase-Extra   0.442      0.090   0.149
                Logician               0.317      0.129   0.183
Table 5. Human evaluation results (precision, recall, F1) on four types of information extraction tasks.
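Reading each row of Table 5 as a precision/recall/F1 triple, the last number is the harmonic mean of the first two; a quick sanity check on two rows:

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Row "1. Relation / Logician": precision 0.883, recall 0.147
assert round(f1(0.883, 0.147), 3) == 0.252
# Row "1. Relation / SRLIE": precision 0.166, recall 0.119
assert round(f1(0.166, 0.119), 3) == 0.139
```

The same check holds for the remaining rows, confirming the column ordering.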

5.4. Results Analysis

The experimental results reveal that Logician outperforms the comparison methods by a large margin on the first three tasks. For the hyponymy detection task, Logician overwhelms the HypeNet model trained on the same data, and produces comparable results to the HypeNet model trained with the extra hyponymy data, despite using much less training data. Table 6 exhibits several example sentences and the facts extracted by these algorithms.

The poor performance of the pattern-based methods is plausibly due to the noise in the SAOKE data set. The sentences in the SAOKE data set are randomly selected from a web encyclopedia, with a free and casual writing style, and are thus noisier than the training data of the NLP toolsets used by these methods. In this situation, the NLP toolsets may produce poor results, and so do the pattern-based methods built on top of them.

Models learned from the SAOKE data set achieve much better performance. Nevertheless, SRL extracts each fact without knowing whether a candidate word has been used in other facts, which results in the misleading overlap of the word “学” (“Learn” in English) between two facts in the first case of Table 6. Similarly, the HypeNet models focus on the semantic vectors of pairs of phrases and their dependency paths in the background corpus. They extract each fact independently of other facts and hence do not know whether any other relations have already been extracted about these two phrases. In other words, for those comparison methods, an important source of information is neglected, and a global optimization over all facts involved in a sentence is absent.

On the contrary, Logician performs global optimization over the facts involved in each sentence via the sequence-to-sequence learning paradigm with the help of the coverage mechanism, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. Valuable information is shared between these multiple tasks, which makes Logician consistently superior to the other algorithms.
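The competition effect of the coverage mechanism (Tu et al., 2016) can be pictured with a toy attention step in which accumulated attention suppresses reuse of a word. This sketch is illustrative only: the fixed `penalty` scalar and hand-set relevance scores stand in for quantities that Logician's actual model learns:

```python
import math

def attend_with_coverage(scores, coverage, penalty=1.0):
    """One decoding step of coverage-aware attention: source positions
    that earlier steps already attended to are down-weighted, pushing
    later facts toward so-far-unused words."""
    logits = [s - penalty * c for s, c in zip(scores, coverage)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]   # numerically stable softmax
    z = sum(exps)
    attn = [e / z for e in exps]
    new_cov = [c + a for c, a in zip(coverage, attn)]  # accumulate coverage
    return attn, new_cov

scores = [2.0, 1.0, 0.5]          # fixed word-relevance scores
coverage = [0.0, 0.0, 0.0]
a1, coverage = attend_with_coverage(scores, coverage)  # first fact
a2, coverage = attend_with_coverage(scores, coverage)  # second fact
# The word that dominated the first step attracts less attention next:
# a2[0] < a1[0]
```

Words already claimed by one fact thus become less attractive to later facts, which is the "competition" described above; since coverage only discounts rather than forbids, facts can still share a word when its relevance score is high enough.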

Furthermore, the SRL and HypeNet methods suffer from the out-of-vocabulary (OOV) problem when facing unfamiliar words/phrases, such as the person name and school name in the last case of Table 6, and may fail to produce reasonable results. Logician copes with unfamiliar words/phrases by exploiting context information using a deep RNN with the help of the copy mechanism.

1. Relation
Sentence: 学道访仙,晚年修道于石笋山 (Learn the Tao, visit the immortal, practice Taoism in the Stalagmite Hill in old age)
  SRLIE:    (X, 学, 道) (X, 访, 仙)
            (X, learn, the Tao) (X, visit, the immortal)
  ZORE:     (学道, 访, 仙) (学道, 修道于, 石笋山)
            (Learn the Tao, visit, the immortal) (Learn the Tao, practice Taoism in, the Stalagmite Hill)  ✗✗
  SRL:      (X, 学, 道访) (学, 修道于, 石笋山)
            (X, learn, Tao visit) (Learn, practice Taoism in, the Stalagmite Hill)
  Logician: (X, 学, 道) (X, 访, 仙) (X, 修道于, 石笋山)
            (X, learn, the Tao) (X, visit, the immortal) (X, practice Taoism in, the Stalagmite Hill)  ✓✓

2. Attribute
Sentence: 全村辖区总面积约10平方公里,其中耕地132公顷。 (The total area of the village is about 10 square kilometers, of which 132 hectares are arable land.)
  SRL:      (全村辖区, 总面积, 约10平方公里)
            (The village, total area, about 10 square kilometers)
  Logician: (全村辖区, 总面积, 约10平方公里) (全村, 耕地, 132公顷)
            (The village, total area, about 10 square kilometers) (The village, arable land, 132 hectares)  ✓✓

3. Description
Sentence: 硫酸钙较为常用,其性质稳定,无嗅无味,微溶于水 (Calcium sulfate is commonly used; its properties are stable, it is odorless and tasteless, and slightly soluble in water)
  SDDE:     No recall
  Logician: (硫酸钙, DESC, [性质稳定|无嗅无味]) (硫酸钙, DESC, 稳定)
            (Calcium sulfate, DESC, [properties stable | odorless and tasteless]) (Calcium sulfate, DESC, stable)  ✓✗

4. Hyponymy
Sentence: 蔡竞,男,汉族,四川射洪人,西南财经大学经济学院毕业,经济学博士。 (Cai Jing, male, Han Chinese, a native of Shehong, Sichuan, graduated from the economics school of Southwestern University of Finance and Economics (SUFE), and a Ph.D. in economics.)
  HypeNet-Phrase:       (经济学博士, ISA, 蔡)
                        (Ph.D. in economics, ISA, Cai)
  HypeNet-Phrase-Extra: (西南财经大学经济学院, ISA, 经济学博士) (西南财经大学经济学院, ISA, 四川) (经济学博士, ISA, 西南财经大学经济学院)
                        (the economics school of SUFE, ISA, Ph.D. in economics) (the economics school of SUFE, ISA, Sichuan) (Ph.D. in economics, ISA, the economics school of SUFE)  ✗✗✗
  Logician:             (蔡竞, ISA, [男|汉族|四川射洪人])
                        (Cai Jing, ISA, [male | Han Chinese | native of Shehong, Sichuan])

Table 6. Examples of extraction from one sentence for each task (✓/✗: human judgment of each extracted fact).

5.5. Extraction Error Analysis of Logician

We performed a preliminary analysis of the results produced by the Logician model. The most notable problem is that it fails to recall some facts for long or complex sentences. The last case in Table 6 exhibits this situation, where the fact (蔡竞, ISA, 经济学博士) ((Cai Jing, ISA, Ph.D. in economics) in English) is not recalled. This phenomenon indicates that the coverage mechanism may lose effectiveness in such situations. The second class of error is incomplete extraction, as exhibited in the third case in Table 6. Due to incomplete extraction, the leftover parts may interfere with the generation of other facts and lead to nonsensical results, which constitutes the third class of error. We believe it would be helpful to introduce extra rewards into the learning procedure of Logician to overcome these problems. For example, the reward could be the amount of information remaining after fact extraction, or the completeness of the extracted facts. Developing such rewards, and reinforcement learning algorithms that use them to refine Logician, is part of our future work.

6. Related Works

6.1. Knowledge Expressions

Tuples are the most common knowledge expression format for OIE systems, expressing n-ary relations between a subject and objects. Beyond such information, ClausIE (Corro2013, ) extracts extra information into the tuples, namely a complement and one or more adverbials, and OLLIE (Schmitz2012, ) extracts additional context information. SAOKE is able to express n-ary relations and can easily be extended to support the knowledge extracted by ClausIE, but would need to be redesigned to support context information, which we leave for future work.

However, there is a fundamental difference between SAOKE and the tuples of traditional OIE systems. In traditional OIE systems, the knowledge expression is generally not directly related to the extraction algorithm; it is a tool to reorganize the extracted knowledge into a form convenient for reading, storing, and computing. SAOKE, by contrast, is proposed to act as the direct learning target of the end-to-end Logician model. In such an end-to-end framework, the knowledge representation is the core of the system: it decides what information is extracted and how complex the learning algorithm must be. To our knowledge, SAOKE is the first attempt to design a knowledge expression friendly to end-to-end learning algorithms for OIE tasks. Efforts are still needed to make SAOKE more powerful so that it can express more complex knowledge such as events.

6.2. Relation Extraction

Relation extraction is the task of identifying semantic connections between entities. Major existing relation extraction algorithms fall into two classes: closed-domain and open-domain. Closed-domain algorithms learn to identify a fixed and finite set of relations, using supervised methods (Kambhatla2004, ; Zelenko2003, ; Miwa2016, ; Zheng2017, ) or weakly supervised methods (Mintz2009, ; Lin2016, ), while open-domain algorithms, represented by the aforementioned OIE systems, discover open-domain relations without a predefined schema. Beyond these two classes, methods like universal schema (Riedel2013a, ) can learn both from data with a fixed and finite set of relations, such as relations in Freebase, and from data with open-domain surface relations produced by heuristic patterns or OIE systems.

Logician can be used as an OIE system to extract open-domain relations between entities, and can act as a sub-system for knowledge base construction/completion with the help of schema mapping (Soderland2010, ). Compared with existing OIE systems, which are pattern-based or self-supervised by labeling samples using patterns (Mausam2016, ), Logician is, to our knowledge, the first model trained in a supervised end-to-end manner for the OIE task, and it has exhibited powerful abilities in our experiments. Some neural end-to-end systems (Miwa2016, ; Zheng2017, ; Lin2016, ) have been proposed for relation extraction, but they all aim to solve the closed-domain problem.

However, Logician is not limited to the relation extraction task. First, Logician extracts more information than relations. Second, Logician focuses on examining how natural language expresses facts (Etzioni2011, ) and on producing helpful intermediate structures for high-level tasks.

6.3. Language to Logic

Efforts have been made to map natural language sentences into logical forms. Some approaches, such as (Kate2005, ; Zettlemoyer2005, ; Yin2015, ; Dong2016a, ), learn the mapping under the supervision of manually labeled logical forms, while others (Cai2013b, ; Kwiatkowski2013, ) are indirectly supervised by distant information, system rewards, etc. However, all previous works rely on a pre-defined, domain-specific logical system, which limits their ability to learn facts outside that pre-defined logical system.

Logician can be viewed as a system that maps language into a natural logic, in which the majority of the information is expressed by natural phrases. Unlike the systems mentioned above, which aim at executing the logical form, Logician focuses on understanding how facts and logic are expressed in natural language. Further mappings to domain-specific logical systems, or even to executors, can be built on top of Logician's output, and we believe that with the help of Logician this work would be easier and the overall performance of such systems may improve.

6.4. Facts to Language

The problem of generating sentences from a set of facts has attracted much attention (Wiseman2017, ; Chisholm2017, ; Agarwal2017, ; Vougiouklis2017, ). These models focus on facts with a predefined schema from a specific problem domain, such as people's biographies or basketball game records, and cannot work in the open domain. The SAOKE data set provides an opportunity to extend these models to the open domain.

6.5. Duality between Knowledge and Language

As mentioned in the sections above, the SAOKE data set provides examples of the dual mapping between facts and sentences. Duality has been verified to be useful for improving agents' performance in many NLP tasks, such as back-and-forth translation (Xia2016a, ) and question answering (tang2017question, ). Exploiting the duality between knowledge and language is a promising approach to improving the performance of Logician.

7. Conclusion

In this paper, we consider the open information extraction (OIE) problem for a variety of fact types in a unified view. Our solution consists of three components: the SAOKE format, the SAOKE data set, and Logician. The SAOKE format is designed to express different types of facts in a unified manner. We publicly release the largest manually labeled data set for OIE tasks, in SAOKE form. Using the labeled SAOKE data set, we train an end-to-end neural sequence-to-sequence model, called Logician, to transform natural language sentences into facts. The experiments reveal the superiority of Logician over state-of-the-art algorithms in various open-domain information extraction tasks.

Regarding future work, there are at least three promising directions. First, one can investigate knowledge expression methods to extend SAOKE to express more complex knowledge, for tasks such as event extraction. Second, one can develop novel learning strategies to improve the performance of Logician and adapt the algorithm to extended future versions of SAOKE. Third, one can extend the SAOKE format and the Logician algorithm to other languages.

References

  • [1] Shubham Agarwal and Marc Dymetman. A Surprisingly Effective Out-of-the-Box Char2char Model on the E2E NLG Challenge Dataset. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 158–163, 2017.
  • [2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR, 2015.
  • [3] Michele Banko, Michael J Cafarella, and Stephen Soderland. Open information extraction for the web. In IJCAI, pages 2670–2676, 2007.
  • [4] Qingqing Cai and Alexander Yates. Large-scale Semantic Parsing via Schema Matching and Lexicon Extension. In Proceedings of the 51st Annual Meeting of ACL, pages 423–433, 2013.
  • [5] Kaushik Chakrabarti, Surajit Chaudhuri, Tao Cheng, and Dong Xin. Entitytagger: automatically tagging entities with descriptive phrases. In Proceedings of the 20th International Conference Companion on WWW, pages 19–20, 2011.
  • [6] Wanxiang Che, Zhenghua Li, and Ting Liu. LTP: A Chinese Language Technology Platform. In Proceedings of COLING, pages 13–16, 2010.
  • [7] Andrew Chisholm, Will Radford, and Ben Hachey. Learning to Generate One-sentence Biographies from Wikidata. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, volume 1, pages 633–642, 2017.
  • [8] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on EMNLP, pages 1724–1734, 2014.
  • [9] Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. An analysis of open information extraction based on semantic role labeling. In Proceedings of the sixth International Conference on Knowledge Capture, pages 113–120, 2011.
  • [10] Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. Towards Coherent Multi-Document Summarization. In Proceedings of the 2013 Conference of NAACL: HLT, pages 1163–1173, 2013.
  • [11] Janara Christensen, Stephen Soderland, and Gagan Bansal. Hierarchical Summarization: Scaling Up Multi-Document Summarization. In Proceedings of the 52nd Annual Meeting of ACL, pages 902–912, 2014.
  • [12] Luciano Del Corro and Rainer Gemulla. ClausIE: Clause-Based Open Information Extraction. In Proceedings of the 22nd International Conference on WWW, pages 355–366, 2013.
  • [13] Li Dong and Mirella Lapata. Language to Logical Form with Neural Attention. In Proceedings of the Annual Meeting of ACL, pages 33–43, 2016.
  • [14] Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. Open information extraction: The second generation. In Proceedings of IJCAI, pages 3–10, 2011.
  • [15] Anthony Fader, Stephen Soderland, and Oren Etzioni. Identifying Relations for Open Information Extraction. In Proceedings of the Conference on EMNLP, pages 1535–1545, 2011.
  • [16] Anthony Fader, Luke S Zettlemoyer, and Oren Etzioni. Open Question Answering Over Curated and Extracted Knowledge Bases. In Proceedings of the 20th ACM SIGKDD, pages 1156–1165, 2014.
  • [17] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In Proceedings of the 54th Annual Meeting of ACL, pages 1631–1640, 2016.
  • [18] Rahul Gupta and A Halevy. Biperpedia: An Ontology for Search Applications. In Proceedings of the VLDB Endowment, pages 505–516, 2014.
  • [19] Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. Deep Semantic Role Labeling: What Works and What’s Next. In Proceedings of the 55th Annual Meeting of the ACL, pages 473–483, 2017.
  • [20] Luheng He, Mike Lewis, and Luke Zettlemoyer. Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language. In Proceedings of the 2015 Conference on EMNLP, pages 643–653, 2015.
  • [21] Marti A. Hearst. Automatic Acquisition of Hyponyms from Large Text Corpora. In Proceedings of the 14th Conference on Computational Linguistics, pages 23–28, 1992.
  • [22] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Overview of mini-batch gradient descent. Technical report, 2012.
  • [23] Nanda Kambhatla. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL: Interactive Poster and Demonstration Sessions, 2004.
  • [24] Rohit J Kate, Yuk Wah Wong, and Raymond J Mooney. Learning to Transform Natural to Formal Languages. In Proceedings of the 20th AAAI, pages 1062–1068, 2005.
  • [25] Tushar Khot, Ashish Sabharwal, and Peter Clark. Answering Complex Questions Using Open Information Extraction. In Proceedings of the 55th Annual Meeting of the ACL, pages 311–316, 2017.
  • [26] Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling Semantic Parsers with On-the-fly Ontology Matching. In Proceedings of the 2013 Conference on EMNLP, pages 1545–1556, 2013.
  • [27] Jinyang Li, Chengyu Wang, Xiaofeng He, Rong Zhang, and Ming Gao. User Generated Content Oriented Chinese Taxonomy Construction. In Lecture Notes in Computer Science, volume 9313, pages 623–634. 2015.
  • [28] Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. Neural Relation Extraction with Selective Attention over Instances. In Proceedings of the 54th Annual Meeting of ACL, pages 2124–2133, 2016.
  • [29] Christopher D Manning, John Bauer, Jenny Finkel, Steven J Bethard, Mihai Surdeanu, and David McClosky. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of ACL: System Demonstrations, pages 55–60, 2014.
  • [30] Mausam. Open Information Extraction Systems and Downstream Applications. In Proceedings of the 25th IJCAI, pages 4074–4077, 2016.
  • [31] Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th IJCNLP, volume 2, page 1003, 2009.
  • [32] Makoto Miwa and Mohit Bansal. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. In Proceedings of the 54th Annual Meeting of ACL, pages 1105–1116, 2016.
  • [33] Harinder Pal and Mausam. Demonyms and Compound Relational Nouns in Nominal Open IE. In Proceedings of the 5th Workshop on AKBC, pages 35–39, 2016.
  • [34] Likun Qiu and Yue Zhang. ZORE: A Syntax-based System for Chinese Open Relation Extraction. In Proceedings of the 2014 Conference on EMNLP, pages 1870–1880, 2014.
  • [35] John W Ratcliff and David E Metzener. Pattern Matching: The Gestalt Approach. Dr Dobb’s, 13(7), 1988.
  • [36] Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. Relation Extraction with Matrix Factorization and Universal Schemas. In Proceedings of the 2013 Conference of NAACL: HLT, pages 74–84, 2013.
  • [38] Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on EMNLP and CoNLL, pages 523–534, 2012.
  • [39] Stephen Soderland, Brendan Roof, Bo Qin, and Shi Xu. Adapting Open Information Extraction to Domain-Specific Relations. AI Magazine, 31(3):93–102, 2010.
  • [40] Gabriel Stanovsky and Ido Dagan. Creating a Large Benchmark for Open Information Extraction. In Proceedings of the 2016 Conference on EMNLP, pages 2300–2305, 2016.
  • [41] Gabriel Stanovsky, Ido Dagan, and Mausam. Open IE as an Intermediate Structure for Semantic Tasks. In Proceedings of the 53rd Annual Meeting of ACL and the 7th IJCNLP, pages 303–308, 2015.
  • [42] Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027, 2017.
  • [43] Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. Modeling Coverage for Neural Machine Translation. In Proceedings of the Annual Meeting of ACL, pages 76–85, 2016.
  • [44] Vered Shwartz, Yoav Goldberg, and Ido Dagan. Improving Hypernymy Detection with an Integrated Path-based and Distributional Method. In Proceedings of the 54th Annual Meeting of ACL, pages 2389–2398, 2016.
  • [45] Pavlos Vougiouklis, Hady Elsahar, Lucie-Aimée Kaffee, Christoph Gravier, Frederique Laforest, Jonathon Hare, and Elena Simperl. Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 2017.
  • [46] Chengyu Wang and Xiaofeng He. A Short Survey on Taxonomy Learning from Text Corpora: Issues, Resources and Recent Advances. In Proceedings of the Conference on EMNLP, 2017.
  • [47] Wikipedia. Assignment problem— Wikipedia, The Free Encyclopedia, 2017.
  • [48] Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. Challenges in Data-to-Document Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2243–2253, 2017.
  • [49] Fei Wu and Daniel S Weld. Open Information Extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of ACL, pages 118–127, 2010.
  • [50] Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q Zhu. Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD, pages 481–492, 2012.
  • [51] Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. Dual Learning for Machine Translation. In Advances in Neural Information Processing Systems 29, pages 1–9, 2016.
  • [52] Mohamed Yahya, Steven Euijong Whang, Rahul Gupta, and Alon Halevy. ReNoun: Fact Extraction for Nominal Attributes. In Proceedings of the 2014 Conference on EMNLP, Doha, Qatar, pages 325–335, 2014.
  • [53] Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. Neural Enquirer: Learning to Query Tables. In Proceedings of the Annual Meeting of ACL, pages 29–35, 2016.
  • [54] Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3:1083–1106, 2003.
  • [55] Luke S Zettlemoyer and Michael Collins. Learning to Map Sentences to Logical Form : Structured Classification with Probabilistic Categorial Grammars. In Proceedings of the 21st Conference on UAI, pages 658–666, 2005.
  • [56] Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme. Proceedings of the 55th Annual Meeting of the ACL, pages 1227–1236, 2017.