Question Answering over Knowledge Graphs via Structural Query Patterns

10/22/2019 · by Weiguo Zheng, et al.

Natural language question answering over knowledge graphs is an important and interesting task as it enables common users to gain accurate answers in an easy and intuitive manner. However, it remains a challenge to bridge the gap between unstructured questions and structured knowledge graphs. To address the problem, a natural discipline is building a structured query to represent the input question. Searching the structured query over the knowledge graph can produce answers to the question. Distinct from the existing methods that are based on semantic parsing or templates, we propose an effective approach powered by a novel notion, structural query pattern, in this paper. Given an input question, we first generate its query sketch that is compatible with the underlying structure of the knowledge graph. Then, we complete the query graph by labeling the nodes and edges under the guidance of the structural query pattern. Finally, answers can be retrieved by executing the constructed query graph over the knowledge graph. Evaluations on three question answering benchmarks show that our proposed approach outperforms state-of-the-art methods significantly.

1 Introduction

Querying knowledge graphs like DBpedia, Freebase, and Yago through natural language questions has received increasing attention in recent years. In order to bridge the gap between unstructured questions and the structured knowledge graph, a widely used discipline is building a structured query graph to represent the input question such that it can be executed on the knowledge graph to retrieve answers to the question [4, 29, 10]. To this end, there are two streams of research, i.e., semantic parsing based methods and template based methods, both of which suffer from several problems as discussed next.

Semantic parsing based methods. The aim of semantic parsing is to translate natural language utterances into machine-executable logical forms or programs [9]. For example, the phrase “director of Philadelphia” may be parsed into a logical form whose predicates and entities (e.g., Director and DirectedBy) are grounded in the specific knowledge graph. Traditional semantic parsers [27, 24, 13] require a lot of annotated training examples in the form of syntactic structures or logical forms, which are especially expensive to collect for large-scale knowledge graphs. Another problem is the mismatch between the generated logic forms and the structures (including entities and predicates) that are specified in knowledge graphs [12, 5, 16]. Several efforts have been devoted to lifting these limitations [26, 3]. They leverage the knowledge graph at an early stage by applying deep convolutional neural network models to match questions and predicate sequences. They are required to identify the topic entity and a core inferential chain, i.e., a directed path from the topic entity to the answer. The final executable query is then iteratively constructed based on the detected chain. However, it is hard to pick out the correct inferential chains (35% of the errors are caused by incorrect inferential chains in STAGG [26]). Moreover, it is unreasonable to restrain the chain to be a directed path, since in many cases it may be a general path regardless of direction. For instance, Figure 1 presents the query graphs for two questions targeting DBpedia, Q1: “Who is starring in Spanish movies produced by Benicio del Toro?” and Q2: “Which artists were born on the same date as Rachel Stevens?”, neither of which contains a directed path. In addition, the search space is uncertain, and it is difficult to determine when to terminate.

(a) Query graph for Q1
(b) Query graph for Q2
Figure 1: Query graphs for questions Q1 and Q2

Instead of training semantic parsers, several methods built upon existing dependency parsers have been proposed [30, 17]. They try to generate query graphs according to the dependency parsing results and pre-defined rules. Clearly, it is extremely difficult to enumerate all the rules and to eliminate conflicts among them.

Template-based methods. A number of studies focus on using templates to construct query graphs [20, 8, 1, 28], where a template consists of two parts: a natural language pattern and a SPARQL query pattern. The two patterns are linked through mappings between their slots. In the offline phase, the templates are constructed manually or automatically. In the online phase, the system tries to retrieve the template that matches the input question. The template is then instantiated by filling its slots with entities identified from the question. The generated query graph is likely to be correct if the truly matched template is picked out. Nevertheless, the coverage of the templates may be limited due to the variability of natural language and the large number of triples in a knowledge graph, so many questions cannot be answered correctly. Furthermore, automatically constructing and managing large-scale, high-quality templates for complex questions remain open problems.

Our Approach and Contributions. As discussed above, the semantic parsing based algorithms show good scalability in answering more questions, while the template-based methods exhibit an advantage in precision. Hence, it is desirable to design an effective approach that integrates both strengths. To this end, there are at least two challenges to be addressed.

Challenge 1. Devising an appropriate representation that captures the query intention and is easy to ground to the underlying knowledge graph. The representation is required to intuitively match or reconstruct the query intention of the input question. Meanwhile, it should be natural to ground to the knowledge graph, which is beneficial for improving the precision of the system.

Challenge 2. The completeness of representations should be as high as possible. Although the template-based methods perform well in terms of precision, they suffer from template deficiency in real scenarios. Guaranteeing the completeness of representations is crucial to enhancing the processing capacity. Moreover, in order to reduce the cost of building such a question answering system, the representations should be easy to construct.

Rather than using semantic parsing or templates, in this paper we propose a novel framework based on structural query patterns to build a query graph for the input question. It comprises three stages, i.e., structural query pattern (abbreviated as SQP) generation, SQP-guided query construction, and constraint augmentation. In principle, instead of parsing the question into a logic form that is equipped with specific semantic arguments (including entities and predicates), we just need to identify the shape or sketch of the question's query graph in the first stage. This brings two benefits: (1) The number of structural patterns for most questions is limited, so they can be enumerated in advance. For instance, there are 4 structural patterns for the questions in LC-QuAD [19], a benchmark for complex question answering over DBpedia. (2) It is easier to produce a structural pattern with high precision than to generate complicated logic forms. In the second stage, we build the query graph by extending one entity that is identified from the question. The construction proceeds under the guidance of the structural pattern. Hence, the search space can be reduced rather than examining all the predicates adjacent to an entity. Furthermore, it is straightforward to determine when the extension procedure can terminate. Finally, the constraints specified in the question are detected to produce the complete structured query. Note that the procedure involves multiple steps, i.e., SQP generation, entity linking, and relation selection. In summary, we make the following contributions in this paper:

  • We propose a novel framework based on structural query patterns to answer questions over knowledge graphs;

  • We present an approach that generates a query graph for the input question by applying structural query patterns;

  • Experimental results on two benchmarks show that our approach outperforms state-of-the-art algorithms.

2 Preliminaries

Figure 2: Structural query patterns.

We aim to build a query graph for the input question, where the query graph is a graph representation of the question and can be executed on the knowledge graph to retrieve answers. Formally, it is defined in Definition 2.1.

Definition 2.1.

(Query graph). A query graph for the input question satisfies the following conditions: (1) each node in the query graph corresponds to an entity, a value, or a variable; (2) each edge in the query graph corresponds to a predicate or property in the knowledge graph; (3) the query graph is subgraph isomorphic to the knowledge graph by letting each variable node match any node in the knowledge graph.
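To make the definition concrete, the following minimal Python sketch represents a query graph as a set of labeled triples and serializes it into an executable SPARQL query for Q2 in Figure 1. The DBpedia predicate and resource names (dbo:birthDate, dbr:Rachel_Stevens) are illustrative assumptions rather than the exact ones in the gold query.

# A query graph is a set of triples whose nodes are entities, values, or
# variables (strings starting with '?'), and whose edges are KG predicates.
# Illustrative sketch for Q2 "Which artists were born on the same date as
# Rachel Stevens?" -- the predicate names below are assumptions.
query_graph = [
    ("dbr:Rachel_Stevens", "dbo:birthDate", "?date"),
    ("?artist",            "dbo:birthDate", "?date"),
]

def to_sparql(triples, answer_var):
    """Serialize a query graph into a SPARQL SELECT query."""
    patterns = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    return (
        "PREFIX dbr: <http://dbpedia.org/resource/>\n"
        "PREFIX dbo: <http://dbpedia.org/ontology/>\n"
        f"SELECT DISTINCT {answer_var} WHERE {{\n  {patterns} .\n}}"
    )

print(to_sparql(query_graph, "?artist"))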

The existing semantic parsing based methods map a natural language question to logical forms that contain phrases from or entities/predicates in the knowledge graph. The template-based algorithms bridge the gap between unstructured and structured by using templates. Different from them, we propose a novel approach by applying structural query patterns.

Definition 2.2.

(Structural query pattern, abbreviated as SQP). The structural query pattern of a question is its structural representation, that is, the structure that remains after removing all node and edge labels from the query graph of the question.

Note that the type node and its adjacent edges are removed as well. A structural query pattern can also be called the query sketch or the shape of the query graph.

Since most questions involve just a few entities in the knowledge graph, the number of structural query patterns is very limited in real scenarios. Therefore, we can enumerate all these patterns in advance. We observe that the structural query pattern of every question in LC-QuAD [19], QALD-7 [23], QALD-8 [22], and QALD-9 [21] contains at most 4 nodes. Note that LC-QuAD consists of 5,000 questions, 78% of which are complex questions with multiple relations and entities. The structural query patterns consisting of at most 4 nodes are listed in Figure 2. There are 12 structural query patterns in total, where the first pattern contains only one node since type nodes are removed from the query graph as well. For instance, the structural query pattern for the question “give me all the automobiles” is this single-node pattern. Each benchmark (LC-QuAD, QALD-7, QALD-8, and QALD-9) covers only a subset of the 12 patterns. In comparison, capturing these questions precisely would require thousands of templates. Clearly, the structural query patterns exhibit a strong ability to capture the structural representations of natural language questions.
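Since an SQP is nothing more than an unlabeled shape, it can be checked mechanically. The toy sketch below represents a few of the patterns in Figure 2 as anonymous graphs (using networkx; the pattern names are ours, not the paper's) and tests whether the label-stripped shape of a query graph matches one of them.

import networkx as nx

# A structural query pattern (SQP) is just an unlabeled shape.  A few of the
# small patterns from Figure 2, written as edge lists over anonymous nodes.
SQPS = {
    "single_node": nx.Graph(),                          # one node, no edge
    "one_edge":    nx.Graph([(0, 1)]),                  # entity -- variable
    "path_3":      nx.Graph([(0, 1), (1, 2)]),          # chain of three nodes
    "star_3":      nx.Graph([(0, 1), (0, 2), (0, 3)]),  # one center, three leaves
}
SQPS["single_node"].add_node(0)

def shape_of(query_graph_triples):
    """Strip all labels from a query graph, keeping only its shape."""
    g = nx.Graph()
    for s, _, o in query_graph_triples:
        g.add_edge(s, o)
    return g

def matching_sqp(query_graph_triples):
    shape = shape_of(query_graph_triples)
    for name, pattern in SQPS.items():
        if nx.is_isomorphic(shape, pattern):
            return name
    return None

# The query graph of Q2 (two triples sharing a variable) has the path_3 shape.
q2 = [("dbr:Rachel_Stevens", "dbo:birthDate", "?date"),
      ("?artist", "dbo:birthDate", "?date")]
print(matching_sqp(q2))   # -> "path_3"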

3 Structural Query Pattern Recognition

Given a question in the online phase, we need to produce the structural query pattern (SQP) for it. Since the structural query patterns have been listed in advance, we can cast SQP recognition as a classification task, where each class corresponds to a structural query pattern.

Data preparation. First of all, we prepare the training data carefully to enhance overall performance. Notice that our structural query patterns do not contain any specific labels such as phrases, entities, predicates, or properties. However, a natural language question consists of a sequence of words carrying semantic meaning. To make it fit the classification model well, we use the syntax tree of each question rather than the question itself. By utilizing existing syntactic parsing tools, e.g., the Stanford parser [7], we can obtain the syntactic structure of each question. To avoid the effect of specific words, we remove them from the syntactic parsing results and only retain the syntactic tags and their relations.

According to the structure of the SPARQL query, each question is assigned a category label that corresponds to one of the 12 structural query patterns listed in Figure 2. Several question answering datasets, such as LC-QuAD and QALD, provide both questions and the corresponding SPARQL queries. Thus it is easy to collect pairs of syntactic structures and the corresponding category labels as training data.
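A minimal sketch of this data preparation step, using spaCy as a stand-in for the Stanford parser: each question is reduced to its syntactic skeleton (tags and relations only) and paired with the SQP label derived from its gold SPARQL query. The label index used here is a made-up example.

import spacy

# Stand-in for the Stanford parser used in the paper: a dependency parse from
# which we keep only syntactic tags and relations, discarding the words.
nlp = spacy.load("en_core_web_sm")

def delexicalize(question):
    """Return the syntactic skeleton of a question: (POS, dep, head-POS) per token."""
    doc = nlp(question)
    return [(tok.pos_, tok.dep_, tok.head.pos_) for tok in doc]

# Each training instance pairs a syntactic skeleton with the index of the
# gold SQP derived from the gold SPARQL query (one of the 12 Figure 2 classes).
question = "Which artists were born on the same date as Rachel Stevens?"
gold_sqp_label = 2   # hypothetical index of the matching pattern in Figure 2
training_example = (delexicalize(question), gold_sqp_label)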

Model training. In this phase, we train the classification model to predict the category label for an input question. In this paper, we choose two models, the Text CNN model [11] and the RNN Attention model [25], and train them on the data collected in the previous subsection. The Text CNN model performs well in short text classification. Its basic principle is learning task-specific vectors through fine-tuning to offer further gains in performance. The RNN Attention model uses attention to capture the dependencies in long text and can retain key information from the text. However, these two models can only output a single label.

The output layers of the two models above return only the label with the largest confidence score. In order to increase the chance of delivering the correct label, we modify the two models so that they assign each label a confidence score that represents the probability of it being the correct label. In the online phase, we use the top-k labels with the highest scores.
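A minimal PyTorch sketch of the modified output layer: raw classifier logits over the 12 pattern classes are turned into softmax confidence scores, and the top-k labels are kept. The logit values shown are made up for illustration.

import torch
import torch.nn.functional as F

def top_k_patterns(logits, k=2):
    """Turn raw classifier logits over the 12 SQP classes into the k most
    probable patterns, each with a confidence score (softmax probability)."""
    probs = F.softmax(logits, dim=-1)
    scores, labels = torch.topk(probs, k)
    return list(zip(labels.tolist(), scores.tolist()))

# Example: logits produced by either the Text CNN or the RNN Attention model
# for one question (values are made up for illustration).
logits = torch.tensor([0.1, 2.3, 0.4, 1.9, 0.2, 0.0, 0.3, 0.1, 0.5, 0.2, 0.1, 0.0])
print(top_k_patterns(logits, k=2))   # e.g. [(1, 0.44...), (3, 0.29...)]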

Model Ensemble. Benefiting from its capability of capturing useful information in long text, the RNN Attention model performs better on complex and long questions which contain more than one entity. In contrast, we find that the Text CNN model is better than the RNN Attention model at dealing with short and simple questions.

Example 1.

Let us consider the question “Name the city whose province is Metropolitan City of Venice and has leader as Luigi Brugnaro?”. It is a complex, long question as multiple relations and entities are involved. The Text CNN model predicts an incorrect pattern in Figure 2, whereas the RNN Attention model delivers the correct one.

Since the two models exhibit different advantages when predicting the category label, they can be assembled to make a prediction. We use a simple neural network to ensemble the two models. A softmax output layer is included to compute the top-k structural query patterns with the largest scores. The ranked lists of SQPs returned by the RNN Attention model and the Text CNN model for each question are integrated as the training data.
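A toy sketch of such an ensemble, assuming the two base models each emit a 12-dimensional vector of per-pattern confidences; the single linear layer and the way the scores are concatenated are our assumptions, not the exact architecture.

import torch
import torch.nn as nn

class SQPEnsemble(nn.Module):
    """Toy ensemble: concatenate the 12-dim score vectors produced by the
    Text CNN and the RNN Attention model and re-score them with one linear
    layer followed by softmax (architecture details are assumptions)."""
    def __init__(self, num_patterns=12):
        super().__init__()
        self.combine = nn.Linear(2 * num_patterns, num_patterns)

    def forward(self, cnn_scores, rnn_scores):
        x = torch.cat([cnn_scores, rnn_scores], dim=-1)
        return torch.softmax(self.combine(x), dim=-1)

model = SQPEnsemble()
cnn_scores = torch.rand(1, 12)   # per-pattern confidences from Text CNN
rnn_scores = torch.rand(1, 12)   # per-pattern confidences from RNN Attention
ensemble_scores = model(cnn_scores, rnn_scores)
top2_scores, top2_labels = torch.topk(ensemble_scores, k=2)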

4 Query Graph Generation

Generally, a query graph contains at least one entity, which can help reduce the search space. Hence, we need to identify an entity from the knowledge graph that corresponds to a phrase (named entity) in the question. This is actually the task of entity linking. Then the query graph is constructed by extending the entity under the guidance of the SQP.

4.1 Entity Linking

We perform entity linking to find, in the underlying knowledge graph, the entity corresponding to a phrase in the question. Conducting entity linking involves two steps, i.e., identifying named entities in the question and then finding their matching entities in the target knowledge graph.

In the first step, we use a named entity recognition (abbreviated as NER) model [14] to recognize entity keywords in the given question, where the model is built on bidirectional LSTMs and conditional random fields. The identified entity keywords are called entity phrases. In the second step, we link each entity phrase to an entity in the knowledge graph by computing the similarity between the entity phrase and candidate entities. Note that it is not necessary to identify all the entities in the question, as we just need one entity to locate the candidate subgraphs in the knowledge graph.

We observe that there are two problems to be addressed, i.e., phrase truncation and multiple mapping entities. Algorithm 1 outlines the procedure.

(1) Phrase truncation. A phrase may be truncated, which will lead to a false entity phrase and a wrong or missing mapping entity. For instance, in the question “Rashid Behbudov State Song Theatre and Baku Puppet Theatre can be found in which country?”, the NER model [14] identifies the truncated phrase “Song Theatre”, which has no mapping entity in DBpedia. In contrast, the correct entity phrase should be “Rashid Behbudov State Song Theatre”, which maps to the entity http://dbpedia.org/resource/Rashid_Behbudov_State_Song_Theatre. To solve the problem, we extend the entity phrase identified by the NER model. As shown in Algorithm 1, each time we add to the set of extended phrases every phrase of the question that contains the identified phrase and whose number of words does not exceed the maximum length of possible phrases. The original phrase and its extended phrases constitute a group of phrases. Since the phrases in the group are extended from the identical phrase, at most one of them can be correct. Instead of directly determining which phrase in the group is correct, we resort to their mapping entities in the knowledge graph as discussed next.

(2) Multiple mapping entities. Finding the candidate mappings for each entity phrase by using DBpedia Lookup may return multiple entities. Generally, only one entity in the knowledge graph matches an entity phrase in the question. Hence, it is desired to select the correct one. To this end, we compute a matching score for each candidate entity, where the matching score between an entity phrase p and a candidate entity e with respect to the question q is computed as shown in Equation (1).

score(p, e) = α · imp(e) + β · sim(p, e) + γ · rel(q, e)    (1)

Input: Input question q and knowledge graph G
Output: Entity e matching a phrase p* in q

1:  P ← identify all the entity phrases in q using the NER tool
2:  maxScore ← 0
3:  for each phrase p in P do
4:     EP ← the phrases of q containing p whose length is not larger than the maximum phrase length
5:     EP ← EP ∪ {p}
6:     C ← ∅
7:     for each phrase p' in EP do
8:        if p' can match at least one entity in G then
9:           C ← C ∪ the candidate entities matching p'
10:    for each candidate entity e' in C do
11:       compute the matching score between e' and its phrase
12:    e* ← the entity with the largest matching score in C
13:    if score(e*) > maxScore then
14:       maxScore ← score(e*)
15:       e ← e*, p* ← the phrase matched by e*
16: return e and p*
Algorithm 1 Entity Linking
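The following is a minimal Python rendering of Algorithm 1. The helpers ner_phrases, lookup_candidates, and matching_score are hypothetical stand-ins for the NER model [14], DBpedia Lookup, and Equation (1), respectively.

def link_entity(question, max_len=6,
                ner_phrases=None, lookup_candidates=None, matching_score=None):
    """Sketch of Algorithm 1.  The three helpers are hypothetical stand-ins:
    ner_phrases(q)           -> entity phrases found by the NER model,
    lookup_candidates(p)     -> candidate KG entities for phrase p (e.g. DBpedia Lookup),
    matching_score(p, e, q)  -> the score of Equation (1)."""
    best_entity, best_phrase, best_score = None, None, float("-inf")
    words = question.rstrip("?").split()
    for phrase in ner_phrases(question):
        # (1) Phrase truncation: also try longer phrases that contain the
        #     NER phrase, up to max_len words.
        group = {phrase}
        for i in range(len(words)):
            for j in range(i + 1, min(i + max_len, len(words)) + 1):
                span = " ".join(words[i:j])
                if phrase in span:
                    group.add(span)
        # (2) Multiple mapping entities: score every candidate of every
        #     extended phrase and keep the best one.
        for p in group:
            for entity in lookup_candidates(p):
                score = matching_score(p, entity, question)
                if score > best_score:
                    best_entity, best_phrase, best_score = entity, p, score
    return best_entity, best_phrase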

As defined above, the matching score consists of three components, i.e., the importance of the entity (denoted as imp(e)), the similarity between the phrase p and the entity e (denoted as sim(p, e)), and the relevance between the question q and the evidence text of e (denoted as rel(q, e)). They are formally defined in Equations (2)-(4). The parameters α, β, and γ are the weights of the three components, respectively.

imp(e) = 1 / rank(e)    (2)

where rank(e) denotes the rank of e among all the candidate mappings of the phrase in terms of their term frequencies in a text corpus, e.g., Wikipedia documents. The principle is that an entity is more important if it appears in more documents.

sim(p, e) = 1 − Lev(p, e) / max(|p|, |e|)    (3)

where Lev(p, e) is computed with the widely used Levenshtein distance [15] for measuring the difference between two strings.

The third component is the relevance between the question q, which contains the entity phrase, and the evidence text of e; e.g., the corresponding Wikipedia page of entity e can be taken as its evidence text D_e. We compute the similarity between the question and each sentence s in D_e as shown in Equation (4), where v_q and v_s denote the vector representations of q and s, respectively.

rel(q, e) = max_{s ∈ D_e} cos(v_q, v_s)    (4)

Finally, the entity with the largest matching score is returned.
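A minimal Python sketch of the matching score, assuming a weighted-sum combination of a 1/rank importance term, a normalized Levenshtein similarity, and a cosine-based evidence relevance; the entity fields (label, rank, evidence_vecs) and the default weights are illustrative assumptions.

import math

def levenshtein(a, b):
    """Plain dynamic-programming Levenshtein edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu, nv = math.sqrt(sum(x * x for x in u)), math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def matching_score(phrase, entity, question_vec, alpha=0.4, beta=0.4, gamma=0.2):
    """Weighted sum of importance, string similarity, and evidence relevance
    (the concrete functional forms and weights are assumptions).
    `entity` is a dict with hypothetical fields: label, rank, evidence_vecs."""
    importance = 1.0 / entity["rank"]                                   # Eq. (2), assumed form
    sim = 1.0 - levenshtein(phrase.lower(), entity["label"].lower()) \
              / max(len(phrase), len(entity["label"]))                  # Eq. (3), assumed form
    relevance = max((cosine(question_vec, s) for s in entity["evidence_vecs"]),
                    default=0.0)                                        # Eq. (4), assumed form
    return alpha * importance + beta * sim + gamma * relevance          # Eq. (1)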

4.2 SQP-guided Query Graph Construction

With the predicted structural query pattern and one identified entity, we are ready to construct the query graph. The basic idea is to instantiate the pattern through a data-driven search under its guidance. Specifically, the search starts from the entity node and retrieves a subgraph that contains the entity and is structurally isomorphic to the structural query pattern when all node/edge labels are ignored. In order to construct the query graph, two tasks should be completed, i.e., locating the position of the entity in the pattern and extending the query graph.

Task 1: Locate the position of the entity node in the pattern. Although both the pattern and the entity can be obtained as discussed above, the position of the entity node in the pattern is unknown. To locate it, we introduce an important observation based on the “non-redundancy assumption”: if the question has only one return variable, the words in the question are all helpful in depicting the query intent.

Lemma 4.1.

The entity identified for the question is not an intermediate node in the structural query pattern .

Proof.

The underlying rationale is that the question would contain useless words if the entity e were an intermediate node in the pattern. The proof proceeds by contradiction. Assume that e is an intermediate node. Then there is at least one triple ⟨e, r, x⟩ (or ⟨x, r, e⟩) in which x is an entity or a literal string and r is the incident relation. Clearly, this triple contributes nothing to restraining the variables in the other triples since both of its endpoints are constant nodes. It indicates that the node x and the relation r are useless for specifying the answers, which contradicts the non-redundancy assumption aforementioned. ∎

Lemma 4.1 works under the premise that the query graph of a given question does not contain any cycles. Note that all the query graphs in the two benchmarks used in this paper are trees. Furthermore, the answers can still be retrieved even if the corresponding query graph is not a tree, since a tree imposes fewer constraints than a general graph. We can then refine the answers according to the information in the question that is not covered by the tree pattern.

Example 2.

Assume that the SQP is the third pattern in Figure 2, i.e., a chain of three nodes, and that the identified entity e is the intermediate node. Then one of the two leaf nodes represents the return variable, and the other node is an entity e' or a literal string l. This implies that there is a triple ⟨e, r, e'⟩, ⟨e', r, e⟩, or ⟨e, r, l⟩ (a literal node cannot be the starting node of an edge in knowledge graphs), where r is the relation adjacent to e' or l. As both e and e' (resp. l) are specified entities (resp. a literal string), these triples contribute nothing to restraining the variable in the other triple ⟨e, r', ?x⟩ or ⟨?x, r', e⟩, where r' is the relation incident to the variable node ?x. Hence, we can conclude that e is not an intermediate node in the pattern. The analysis holds for the other patterns as well.

As supporting evidence from real data analysis, we find that none of the entities are intermediate nodes in the benchmarks LC-QuAD and QALD.
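In practice, Lemma 4.1 makes locating the entity straightforward for tree-shaped SQPs: the candidate positions are simply the non-intermediate (degree-1) nodes. A small sketch, reusing the networkx representation of patterns from Section 2:

import networkx as nx

def candidate_entity_positions(sqp):
    """Per Lemma 4.1 the identified entity cannot sit at an intermediate node,
    so for a tree-shaped SQP the candidate positions are its leaves
    (degree-1 nodes); the single-node pattern is its own candidate."""
    if sqp.number_of_nodes() == 1:
        return list(sqp.nodes)
    return [n for n in sqp.nodes if sqp.degree(n) == 1]

path_3 = nx.Graph([(0, 1), (1, 2)])          # the three-node chain pattern
print(candidate_entity_positions(path_3))    # -> [0, 2]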

Task 2: Query graph extension. With the entity and its position in the pattern, we build the query graph in this task. The main principle is to extend the query graph (initially just the entity node e) gradually by including relations, entities, or variables, with the help of the structure of the pattern. The expanding procedure is depicted in Algorithm 2. Note that we select the pattern with the largest confidence score for simplicity.

Input: Entity e, input question q, structural query pattern P, and knowledge graph G
Output: Query graph Q for q

1:  Q ← {e},  N ← {e}
2:  C ← the non-intermediate nodes in P
3:  while P has unlabeled edges or nodes do
4:     R ← ∅
5:     if the nodes in C have outgoing neighbors then
6:        R ← R ∪ the outgoing relations of N in G
7:     if the nodes in C have incoming neighbors then
8:        R ← R ∪ the incoming relations of N in G
9:     r ← the relation in R that is the most relevant to q
10:    if the node adjacent to r corresponds to an entity phrase in q then
11:       assemble the entity and r into Q
12:    else
13:       assemble a variable and r into Q
14:    C ← the unexplored node adjacent to the explored structure in P
15:    N ← the entities corresponding to C that are adjacent to Q in G
16: return Q
Algorithm 2 Query graph extension

Since a structural query pattern may contain multiple non-intermediate nodes, it is not trivial to determine the correct one. We propose an extension procedure that proceeds in a data-driven manner. Initially, we make a copy of the structural query pattern. If the non-intermediate nodes in the pattern have both incoming edges and outgoing edges, e.g., SQPs 1, 2, 6, 7, 9 and 10 in Figure 2, both the incoming relations and the outgoing relations of entity e are collected. Otherwise, we just need to consider the incoming or the outgoing relations (lines 5-8 in Algorithm 2). Then we compute the relevance between each candidate relation and the question. The relation with the largest relevance is selected. As a relation may be composed of multiple words, e.g., dateOfBirth, it is split into a sequence of words. The relevance between the question q and each candidate relation r is calculated by Equation (5), where ω is a weight ranging from 0 to 1, and w_i and w_j represent the i-th word of q and the j-th word of r, respectively.

rel(q, r) = (1 / |r|) · Σ_{w_j ∈ r} max_{w_i ∈ q} [ ω · cos(v_{w_i}, v_{w_j}) + (1 − ω) · (1 − Lev(w_i, w_j) / max(|w_i|, |w_j|)) ]    (5)
Method          LC-QuAD                          QALD-8                           QALD-9
                Precision  Recall  F1-Measure    Precision  Recall  F1-Measure    Precision  Recall  F1-Measure
Frankenstein    0.480      0.490   0.485         -          -       -             -          -       -
qaSearch        0.357      0.336   0.344         0.243      0.243   0.243         0.198      0.191   0.193
qaSQP           0.748      0.704   0.718         0.439      0.439   0.439         0.401      0.413   0.405
qaSQP-CE        0.835      0.813   0.827         0.558      0.663   0.620         0.522      0.625   0.568
Table 1: The performance when only the query graphs are generated.

As shown in Equation (5), we use two metrics to measure the relevance between two words w_i and w_j. The first one is the cosine score between the vectors of w_i and w_j, which are obtained by training the GloVe data with the word2vec model. Two words are semantically closer to each other if their cosine score is larger. The other metric is the Levenshtein distance, which calculates the edit cost between the two words. After obtaining the relation that is the most relevant to the question (line 9 of Algorithm 2), the specific position of the relation can be determined according to its direction. We then include the relation and its adjacent node in the query graph. Taking the nodes and entities adjacent to the subgraph (that has been labeled with entities, variables, and relations so far) as starting nodes, the extension procedure proceeds iteratively until all the nodes and edges have been labeled. Finally, the query graph is returned.
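A minimal sketch of this relation selection step: a candidate predicate such as dateOfBirth is split into words and scored against the question words by mixing embedding cosine similarity with Levenshtein-based string similarity, following the assumed form of Equation (5). It reuses the cosine and levenshtein helpers from the entity-linking sketch above; the aggregation over word pairs is our assumption.

import re

def split_relation(relation):
    """Split a camelCase predicate such as 'dateOfBirth' into lowercase words."""
    return [w.lower() for w in re.findall(r"[A-Za-z][a-z]*", relation)]

def word_relevance(w1, w2, embeddings, omega=0.5):
    """Mix of embedding cosine similarity and Levenshtein-based string
    similarity, weighted by omega as in Equation (5)."""
    cos = cosine(embeddings[w1], embeddings[w2]) if w1 in embeddings and w2 in embeddings else 0.0
    lev = 1.0 - levenshtein(w1, w2) / max(len(w1), len(w2))
    return omega * cos + (1.0 - omega) * lev

def relation_relevance(question_words, relation, embeddings, omega=0.5):
    """Score a candidate relation against the question: for each relation
    word take its best-matching question word and average the scores."""
    rel_words = split_relation(relation)
    return sum(max(word_relevance(q, r, embeddings, omega) for q in question_words)
               for r in rel_words) / len(rel_words)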

4.3 Constraint Augmentation

Actually, executing the query graph above returns a list of answers which contains the correct one. However, it may also produce undesired entities or values, since a question may put additional constraints on the query graph. For instance, the question “What is the highest mountain in Italy?” specifies the ordinal constraint “highest” on mountains.

We divide the constraints into 4 categories as follows:

  • answer-type constraint, e.g., “which actor”;

  • ordinal constraint, e.g., “highest”;

  • aggregation constraint, e.g., “how many”;

  • comparative constraint, e.g., “larger than”.

Similar to the approaches [26, 3], we employ simple rules to detect these constraints and augment the query graph with them.
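A toy sketch of such rule-based constraint detection; the keyword lists are illustrative assumptions, not the paper's actual rules.

import re

# Toy rule table in the spirit of the paper's constraint detection; the
# specific keyword lists are illustrative assumptions.
CONSTRAINT_RULES = {
    "answer_type": re.compile(r"\bwhich\s+(\w+)", re.I),                       # e.g. "which actor"
    "ordinal":     re.compile(r"\b(first|highest|largest|tallest|most)\b", re.I),
    "aggregation": re.compile(r"\b(how many|count of|number of)\b", re.I),
    "comparative": re.compile(r"\b(larger than|more than|less than|older than)\b", re.I),
}

def detect_constraints(question):
    """Return the constraint categories (and matched cue words) detected in a question."""
    found = {}
    for category, pattern in CONSTRAINT_RULES.items():
        match = pattern.search(question)
        if match:
            found[category] = match.group(0)
    return found

print(detect_constraints("What is the highest mountain in Italy?"))
# -> {'ordinal': 'highest'}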

5 Experiments

In this section, we evaluate the proposed method systematically and compare it with the existing algorithms.

5.1 Datasets and Experimental Settings

We use DBpedia [2] as the target knowledge graph. DBpedia is an open-domain knowledge graph that consists of 6 million entities and 1.3 billion triples, as reported in the statistics of DBpedia 2016-10.

Two question answering benchmarks built over DBpedia, LC-QuAD [19] and QALD [23], are used to evaluate our proposed approach.

  • LC-QuAD is a gold standard question answering dataset that contains 5,000 pairs of natural language questions and SPARQL queries, 728 of which are simple questions with a single relation and a single entity.

  • QALD-8 [22] and QALD-9 [21]. QALD is a long-running question-answering evaluation campaign. It provides a set of natural language questions, the corresponding SPARQL queries and answers. QALD-8 contains 219 training questions and 42 test questions. QALD-9 contains 408 training questions and 150 test questions.

We randomly select 500 questions from LC-QuAD as the test data. Our models are trained for 100 epochs, with early stopping enabled based on validation accuracy. We use an 80-20 split for training and validation data.

In the RNN Attention model, we set the dimensionality of the character embedding to 128 and the dropout keep probability to 0.5. The number of hidden units is 128, the number of attention units is 128, and the number of hidden layers is 1. In the Text CNN model, we set the dimensionality of the character embedding to 128, the number of filters per filter size to 128, the dropout keep probability to 0.5, and the L2 regularization lambda to 0.001.

Following the conventions in gAnswer2 [10], macro precision, recall, and F1-measure are used to evaluate the performance. We compare our method, denoted by qaSQP, with Frankenstein [18], QAKIS [6], QASystem [21], TeBaQA [21], WDAqua [23], gAnswer2 [10], and qaSearch, where qaSearch constructs the query graph following a data-driven search rather than using the SQP as guidance.

5.2 Experimental Results

Comparing with the previous methods. Table 1 presents the performance in terms of generated query graphs on the three datasets, where qaSQP-CE represents the proposed method when it is fed one correctly identified entity initially. As can be seen from the table, our proposed method outperforms the existing approach by a large margin (a 17.4% absolute gain on LC-QuAD). The performance improves further if our system is given one correctly identified entity in DBpedia. The performance on QALD-8 and QALD-9 is worse than that on LC-QuAD. There are two main reasons: (1) QALD-8 and QALD-9 provide less training data; (2) QALD-8 and QALD-9 are more challenging as several questions are outside the scope of the system. Further analysis is provided in the next subsection.

Table 2 and Table 3 report the question answering results on QALD-8 and QALD-9, respectively (the results of the other methods are taken from the result reports [22] and [21]). It is clear that the proposed method qaSQP outperforms the state-of-the-art competitors greatly. Basically, it benefits from the novel framework for answering questions. Specifically, the query graph is easy to construct because the search space is reduced under the guidance of the recognized query sketch and one entity identified from the question. In contrast, the competitors are unaware of the query sketch, which increases the difficulty of constructing the correct query graphs and retrieving the answers. For instance, the method qaSearch performs much worse than qaSQP, which confirms the superiority of the SQP-based framework.

Method    Precision    Recall F1-Measure
QAKIS 0.061 0.053 0.056
WDAqua-core0 0.391 0.407 0.387
gAnswer2 0.386 0.390 0.388
qaSearch 0.244 0.244 0.244
qaSQP 0.459 0.463 0.461
Table 2: Question answering results on QALD-8
Method    Precision    Recall F1-Measure
Elon 0.049 0.053 0.050
QASystem 0.097 0.116 0.098
TeBaQA 0.129 0.134 0.130
WDAqua-core1 0.261 0.267 0.250
gAnswer2 0.293 0.327 0.298
qaSearch 0.236 0.241 0.237
qaSQP 0.458 0.471 0.463
Table 3: Question answering results on QALD-9

Evaluation of prediction models. Since a key component of the system is the structural query patterns, the models that predict the SQP are very important, so we study their performance. As presented in Table 4, the ensemble model outperforms the two individual models, RNN-Attention and Text-CNN, in terms of precision, recall, and F1 score on both QALD-8 and QALD-9. This shows that the proposed ensemble model is effective.

Method (on dataset) Precision Recall F1-Measure
RNN-Attention (QALD-8) 0.82 0.83 0.82
Text-CNN (QALD-8) 0.79 0.78 0.78
Ensemble model (QALD-8) 0.82 0.86 0.84
RNN-Attention (QALD-9) 0.83 0.78 0.80
Text-CNN (QALD-9) 0.78 0.72 0.75
Ensemble model (QALD-9) 0.85 0.79 0.82
Table 4: Results of prediction models

Effect of SQP recognition and entity linking. The modules of SQP recognition and entity linking are very critical in the proposed system. However, they are not guaranteed to produce the correct SQP patterns or mapping entities. In order to study their effect and the boundary of the question answering ability, we conduct the experiments by providing the correct structural query patterns or mapping entities. Let qaSQP-CP denote the method that is fed by the correct SQP. Let qaSQP-CE denote the method that is fed by one correctly identified entity initially.

As shown in Table 5, all the methods equipped with correct SQPs or entities outperform the original method qaSQP. Note that the results on LC-QuAD are reported with respect to the performance of the constructed structural query patterns. We observe that the improvement gained by qaSQP-CP is subtle. Moreover, qaSQP-CE performs much better than qaSQP-CP on both QALD-8 and QALD-9. This indicates that the system qaSQP can almost always find the correct structural query patterns, while there is still much room to improve the initial entity linking.

Method (on dataset) Precision Recall F1-Measure
qaSQP-CP (LC-QuAD) 0.774 0.731 0.744
qaSQP-CE (LC-QuAD) 0.835 0.813 0.827
qaSQP-CP (QALD-8) 0.463 0.488 0.476
qaSQP-CE (QALD-8) 0.537 0.561 0.549
qaSQP-CP (QALD-9) 0.463 0.467 0.465
qaSQP-CE (QALD-9) 0.488 0.502 0.493
Table 5: Effect of SQP recognition and entity linking

Effect of the number of returned patterns. We also study the effect of the number of returned patterns, denoted by k, of the prediction model. Figures 3(a) and 3(b) depict the results on QALD-8 and QALD-9, respectively. The parameter k is varied from 1 to 3. As shown in the two figures, the precision, recall, and F1 score tend to be stable when k is 2 or 3. Hence, k is set to 2 by default in our experiments.

(a) Results on QALD-8
(b) Results on QALD-9
Figure 3: Effect of the number of returned patterns

Error Analysis. Although our approach substantially outperforms existing methods, there is still much room for improving the performance on QALD-8 and QALD-9. For instance, besides the errors (29%) caused by entity linking, the precision of the predicted structural query patterns is 82% for the test questions in QALD-8. Moreover, many questions in QALD-8 are very challenging. We find that 12 of the 42 questions in QALD-8 leave out some important information or require external knowledge to find the correct answers, which increases the difficulty for a system; such questions account for 41% of the errors. For example, the question “How big is the earth’s diameter?” cannot be answered directly since there is only a property “meanRadius” in DBpedia. To answer this question, the external knowledge that the diameter is two times the radius is required. The correct SPARQL query should be “select distinct (xsd:double(?radius)*2 AS ?diameter) where { res:Earth dbo:meanRadius ?radius . }”. Finally, 17% of the errors are caused by incorrect label assignments in query graph extension.

6 Conclusion and Future Work

In this paper, we focus on constructing query graphs for answering natural language questions over a knowledge graph. Unlike previous methods, we propose a novel framework based on structural query patterns. Specifically, we define structural query patterns that just capture the structural representations of input questions. Under the guidance of structural query patterns, the query graphs can be formulated. Our experiments show that the proposed approach outperforms the competitors significantly in terms of building query graphs and generating answers. In the future, we will explore how to eliminate the effect of entity linking throughout the whole system. Applying structured learning techniques to SQP generation will also be investigated.

References

  • [1] A. Abujabal, M. Yahya, M. Riedewald, and G. Weikum (2017) Automated template generation for question answering over knowledge graphs. In WWW, Cited by: §1.
  • [2] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. G. Ives (2007) DBpedia: A nucleus for a web of open data. In ISWC, Cited by: §5.1.
  • [3] J. Bao, N. Duan, Z. Yan, M. Zhou, and T. Zhao (2016) Constraint-based question answering with knowledge graph. In COLING, Cited by: §1, §4.3.
  • [4] J. Berant, A. Chou, R. Frostig, and P. Liang (2013) Semantic parsing on freebase from question-answer pairs. In EMNLP, Cited by: §1.
  • [5] J. Berant and P. Liang (2014) Semantic parsing via paraphrasing. In ACL, Cited by: §1.
  • [6] E. Cabrio, J. Cojan, F. Gandon, and A. Hallili (2013) Querying multilingual dbpedia with qakis. In ESWC, pp. 194–198. Cited by: §5.1.
  • [7] D. Chen and C. D. Manning (2014) A fast and accurate dependency parser using neural networks. In EMNLP, Cited by: §3.
  • [8] W. Cui, Y. Xiao, and W. Wang (2016) KBQA: an online template based question answering system over freebase. In IJCAI, Cited by: §1.
  • [9] M. Gardner, P. Dasigi, S. Iyer, A. Suhr, and L. Zettlemoyer (2018) Neural semantic parsing. In ACL Tutorial Abstracts, Cited by: §1.
  • [10] S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao (2018) Answering natural language questions by subgraph matching over knowledge graphs. IEEE Trans. Knowl. Data Eng. 30 (5). Cited by: §1, §5.1.
  • [11] Y. Kim (2014) Convolutional neural networks for sentence classification. In EMNLP, Cited by: §3.
  • [12] T. Kwiatkowski, E. Choi, Y. Artzi, and L. S. Zettlemoyer (2013) Scaling semantic parsers with on-the-fly ontology matching. In EMNLP, Cited by: §1.
  • [13] T. Kwiatkowski, L. S. Zettlemoyer, S. Goldwater, and M. Steedman (2010) Inducing probabilistic CCG grammars from logical form with higher-order unification. In EMNLP, Cited by: §1.
  • [14] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer (2016) Neural architectures for named entity recognition. In HLT-NAACL, Cited by: §4.1, §4.1.
  • [15] V. I. Levenshtein (1966) Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady 10, pp. 707–710. Cited by: §4.1.
  • [16] S. Reddy, M. Lapata, and M. Steedman (2014) Large-scale semantic parsing without question-answer pairs. TACL 2. Cited by: §1.
  • [17] S. Ruseti, A. Mirea, T. Rebedea, and S. Trausan-Matu (2015) QAnswer - enhanced entity matching for question answering over linked data. In CLEF, Cited by: §1.
  • [18] K. Singh, A. S. Radhakrishna, A. Both, S. Shekarpour, I. Lytra, R. Usbeck, A. Vyas, A. Khikmatullaev, D. Punjani, C. Lange, M. Vidal, J. Lehmann, and S. Auer (2018) Why reinvent the wheel: let’s build question answering systems together. In WWW, Cited by: §5.1.
  • [19] P. Trivedi, G. Maheshwari, M. Dubey, and J. Lehmann (2017) LC-quad: A corpus for complex question answering over knowledge graphs. In ISWC, Cited by: §1, §2, §5.1.
  • [20] C. Unger, L. Bühmann, J. Lehmann, A. N. Ngomo, D. Gerber, and P. Cimiano (2012) Template-based question answering over RDF data. In WWW, Cited by: §1.
  • [21] R. Usbeck, R. H. Gusmita, A. N. Ngomo, and M. Saleem (2018) 9th challenge on question answering over linked data (QALD-9). In International Semantic Web Conference, pp. 58–64. Cited by: §2, 2nd item, §5.1, footnote 2.
  • [22] R. Usbeck, A. N. Ngomo, F. Conrads, M. Röder, and G. Napolitano (2018) 8th challenge on question answering over linked data (QALD-8). In International Semantic Web Conference, pp. 51–57. Cited by: §2, 2nd item, footnote 2.
  • [23] R. Usbeck, A. N. Ngomo, B. Haarmann, A. Krithara, M. Röder, and G. Napolitano (2017) 7th open challenge on question answering over linked data. In ESWC, Cited by: §2, §5.1, §5.1.
  • [24] Y. W. Wong and R. J. Mooney (2007) Learning synchronous grammars for semantic parsing with lambda calculus. In ACL, Cited by: §1.
  • [25] Z. Yang, D. Yang, C. Dyer, X. He, A. J. Smola, and E. H. Hovy (2016) Hierarchical attention networks for document classification. In NAACL, Cited by: §3.
  • [26] W. Yih, M. Chang, X. He, and J. Gao (2015) Semantic parsing via staged query graph generation: question answering with knowledge base. In ACL, Cited by: §1, §4.3.
  • [27] L. S. Zettlemoyer and M. Collins (2005) Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In UAI, Cited by: §1.
  • [28] W. Zheng, J. X. Yu, L. Zou, and H. Cheng (2018) Question answering over knowledge graphs: question understanding via template decomposition. PVLDB 11 (11). Cited by: §1.
  • [29] W. Zheng, L. Zou, X. Lian, J. X. Yu, S. Song, and D. Zhao (2015) How to build templates for RDF question/answering: an uncertain graph similarity join approach. In SIGMOD, Cited by: §1.
  • [30] L. Zou, R. Huang, H. Wang, J. X. Yu, W. He, and D. Zhao (2014) Natural language question answering over RDF: a graph data driven approach. In SIGMOD, Cited by: §1.