Learning to Rank Query Graphs for Complex Question Answering over Knowledge Graphs

by Gaurav Maheshwari, et al.

In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs. We experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs, our logical form of choice. Our proposed model generally outperforms the other models on two QA datasets over the DBpedia knowledge graph, evaluated in different settings. In addition, we show that transfer learning from the larger of those QA datasets to the smaller dataset yields substantial improvements, effectively offsetting the general lack of training data.






The increasing maturity of large-scale multi-relational knowledge graphs (KG) like DBpedia [Lehmann et al.2015], Freebase [Bollacker et al.2008] and Wikidata [Vrandečić and Krötzsch2014] has enabled a wide variety of interesting applications. One of them is the problem of complex question answering over knowledge graphs (KGQA), which provides an intuitive natural language driven interface for accessing multi-relational knowledge.

Traditional KGQA approaches [Dubey et al.2016, Berant and Liang2014] use semantic parsing to convert the natural language question (NLQ) to its corresponding formal query, usually expressed in a formal language such as SPARQL or λ-DCS. This is done by (i) creating an expression representing the semantic structure of the question, and (ii) aligning this expression with the knowledge graph. While this approach suits the non-trivial task of handling wide syntactic and semantic variations of a question, it assumes the aforementioned tasks to be independent of each other. This assumption can lead to undesirable situations where the system generates expressions which are invalid w.r.t. the given KG.

In this work, we study an alternate family of approaches which treat the KGQA problem as that of generating a set of query paths on the KG and ranking them w.r.t. the given question. These query graph ranking approaches tackle the aforementioned two steps in reverse order, i.e. by first constructing a list of candidate expressions w.r.t. the KG schema, and then using the lexical and semantic structure of the NLQ to select the correct one. This method ensures that all the candidate representations of the question correspond to the target KG structure. We use a custom grammar called query graphs to represent these candidate expressions, which comprise paths in the KG along with auxiliary constraints.

(a) Question and Query Graph (partially correct)
(b) Correct Query Graph
(c) Core chain corresponding to (b)
Figure 4: A question (a), its corresponding query graph (b), and core chain (c). Here the query graph in (a) is incomplete due to the absence of class constraint.

The primary objective of the study is to empirically investigate the effectiveness of query graph ranking approaches for the KGQA task. Motivated by the success of previous work in this direction [Yih et al.2015, Bao et al.2014], we investigate which ranking models are more suited for the task; what settings they are best trained in; what their limitations are; and which further steps can offset them. To that end, we first evaluate simple models as baselines and then appropriate existing models for our task. We also propose a novel slot-matching model which exploits the structure of query graphs by comparing its parts with different representations of the question, computed using self attention. In our experiments, we find that it outperforms the other models. Further, to illustrate the effects of various implicit decisions that go into implementing this approach, we evaluate these models in different settings, namely training in a pointwise or a pairwise setting, with or without shared parameters. The secondary objective of the study is to investigate whether transfer learning across similar tasks can increase the performance of the system. We perform all our experiments over two KGQA datasets over DBpedia, namely, LC-QuAD [Trivedi et al.2017] and QALD-7 [Usbeck et al.2017].

The primary contributions of this work are the following:

  • An evaluation of the effectiveness of numerous neural ranking models for ranking query graphs w.r.t. NLQs.

  • An investigation of the effect of transfer learning across two QA datasets.

  • A novel ranking model which exploits the characteristics of query graphs, and uses self attention and skip connections to explicitly compare each predicate in a query graph with the NLQ.

Through our experiments, we find that while the proposed slot matching model outperforms the others, a simple LSTM based encoder gives satisfactorily close results. Moreover, our experiments reveal that pre-training models over LC-QuAD (larger dataset) and fine-tuning them over QALD-7 (smaller dataset) results in substantially better performance on the latter, as opposed to training them solely on QALD-7.


In order to formulate the problem, we first define the notion of a knowledge graph. Formally, let E be the set of entities, L be the set of all literal values, and P be the set of predicates connecting two entities, or an entity with a literal. A knowledge graph K is a subset of E × P × (E ∪ L), representing the facts that are assumed to hold.

Problem Formulation

Given a knowledge graph K and a natural language question q, the system is expected to generate an expression of a formal query language which returns the intended answer a ∈ A when executed over K. Here, A is the set of all answers a KGQA system can be expected to retrieve, consisting of (i) a subset of entities (2^E) or literals (2^L) in K, (ii) the result of an arbitrary aggregation function (e.g. a number in ℕ), or (iii) a boolean ({true, false}) variable.

Query Graph

We use query graphs as the intermediary query language, which represent paths in K as a directed acyclic labeled graph. We borrow the augmentations made to the language’s grammar in [Yih et al.2015], which make the conversion from query graph expressions to executable queries trivial. We further make some changes to the grammar, as described below.

A query graph consists of a combination of nodes {grounded entity, existential variable, lambda variable, auxiliary function}, connected with labeled, directed edges representing the predicates (P). Each query graph has one or more grounded entities, which correspond to entities (E) present in the question q. The lambda variable is not grounded to any entity in the KG, but instead acts as a placeholder (variable) for the set of entities which are the answer to the query, subject to some additional constraints (described below). Similarly, the existential variables are not grounded, but are used to further disambiguate the structure of the query. Fig 4.a demonstrates that a question can be represented with a lambda variable and a grounded entity, connected via a labeled edge. Here, the lambda variable can have many entities mapped to it.

However, this representation is incomplete, as the lambda variable can have other entities like a TV series or a play mapped to it along with movies, and thus its membership needs to be constrained. These constraints can be enforced by means of auxiliary functions, defined as follows: (i) they can be of two types, namely, the cardinality function, and a class constraint where (class, rdf:type, owl:Class) ∈ K; (ii) these functions can only be applied to ungrounded nodes, i.e. lambda and existential variables. An updated query graph, with the proper auxiliary functions, is shown in Fig 4.b.

Finally, we define a new flag which determines whether the query graph is used to fetch the value of the projected variable, or to verify whether the graph is a valid subset of the target KG. The latter is used in the case of boolean queries like ”Is Berlin the capital of Germany?” We represent this decision with a flag instead of another node or constraint in the graph as it doesn’t affect the execution of the query graph, but only inquires, post execution, whether the query had a solution.

Representation: We choose to represent the query graphs in a linear form so as to easily use them in our ranking models. We linearize the directed graph by starting from one of the grounded entities, and use + and − signs to denote outgoing and incoming edges, respectively. We further externalize the auxiliary functions and their corresponding ungrounded nodes from the graph, and represent them with another flag along with the linearized chain. Finally, we replace the URIs of entities and predicates with their corresponding surface forms. Hereafter, we refer to this linearized representation as the core chain of a query graph. This representation ensures that the query graph maintains textual relatedness to the source question, enabling us to use a wide variety of text similarity based approaches for ranking. Fig 4.c illustrates the core chain corresponding to the query graph in our running example.
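As an illustration, the linearization step can be sketched in a few lines of Python. The URIs, surface-form map, and data structures below are hypothetical stand-ins for illustration only, not the system's actual interface:

```python
def linearize(start_entity, edges, surface):
    """Linearize a query-graph path into a core chain string.

    `edges` is a list of (predicate_uri, direction) pairs walked from the
    grounded entity; '+' marks an outgoing edge, '-' an incoming one.
    `surface` maps URIs to surface forms. (Hypothetical structures.)
    """
    tokens = [surface[start_entity]]
    for predicate, direction in edges:
        tokens.append(direction)
        tokens.append(surface[predicate])
    return " ".join(tokens)

# A chain in the spirit of the running example about the Vostok programme:
surface = {
    "dbr:Vostok_programme": "vostok programme",
    "dbo:mission": "mission",
    "dbo:birthPlace": "birth place",
}
core_chain = linearize(
    "dbr:Vostok_programme",
    [("dbo:mission", "-"), ("dbo:birthPlace", "+")],
    surface,
)
# core_chain == "vostok programme - mission + birth place"
```

Replacing URIs with surface forms, as done here, is what keeps the chain textually comparable to the question.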


We use the English version of DBpedia [Lehmann et al.2015], 2016-04 release, as the KG for our question answering system. It is a large scale KG consisting of 4.8M entities and 580M triples (https://wiki.dbpedia.org/about).

We restrict our system to answer set (2^E), simple count (ℕ), and boolean (1_K(T), where T is the set of triples obtained by executing the query, K is the KG, and 1 is the set indicator function) queries. Further, we restrict this study to queries which have no more than two edges in the query graph, i.e. the shortest distance between the entity mentioned in q and the intended answer entity in K is at most two. These restrictions ensure a reasonably-sized candidate space, while maintaining enough expressivity to answer all questions in popular KGQA datasets like [Trivedi et al.2017, Berant et al.2013].

To prevent diluting the focus of our study, we assume entities to be given, as standalone entity-linking systems bring more uncertainty in the process and are not reflective of the inherent challenges in ranking query graphs.


The representation defined above enables us to divide the task of query graph construction and ranking in the following three phases: (i) core chain candidate generation, (ii) core chain candidate ranking, and (iii) predicting auxiliary constraints.

Core Chain Candidate Generation

This first step of the process involves generating core chain candidates for the question. Core chains, as described in the previous section, are the linearized subsets of query graphs which represent a path consisting of entities and predicates, without the additional constraints. Working under the assumption that the information required to answer the question is present in the target KG, and that we know the entities mentioned in the question, we collect all plausible paths of up to two hops from the topic entity as the core chain candidate set. Here, the number of hops of a core chain equals the number of predicates in it, and the k-th hop of a core chain is its k-th predicate, counting from a grounded node.

We retrieve candidate core chains by collecting all predicates (one-hop chains) and paths of two predicates (two-hop chains) that can be followed from an arbitrary grounded node, i.e. an entity that has been linked in the question. In this process, predicates are followed in both the forward and the reverse direction (marked with a + and a − in the chain, respectively). For LC-QuAD, we also restrict our candidate set of relational chains as follows: if two entities have been identified in the question, only those two-hop chains are retained that connect the first grounded entity with the second, leaving the answer node in between. Finally, we reject the core chains whose corresponding query graphs do not contain all the entities linked in the question. In cases where there are multiple entities, this step substantially decreases the candidate set while retaining all the relevant groundings.
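A toy sketch of this enumeration over a small in-memory KG (a stand-in for the actual SPARQL endpoint; the URIs are illustrative):

```python
def candidate_core_chains(entity, triples):
    """Enumerate one- and two-hop core chain candidates around `entity`.

    `triples` is a toy in-memory KG: a set of (subject, predicate, object)
    facts. Predicates are followed in both directions: '+' forward,
    '-' reverse, matching the core chain notation.
    """
    def hops(node):
        for s, p, o in triples:
            if s == node:
                yield ("+", p, o)
            if o == node:
                yield ("-", p, s)

    chains = []
    for d1, p1, n1 in hops(entity):
        chains.append([(d1, p1)])                    # one-hop chain
        for d2, p2, n2 in hops(n1):
            if n2 != entity:                         # don't walk straight back
                chains.append([(d1, p1), (d2, p2)])  # two-hop chain
    return chains

toy_kg = {
    ("dbr:Gagarin", "dbo:mission", "dbr:Vostok_1"),
    ("dbr:Gagarin", "dbo:birthPlace", "dbr:Klushino"),
}
chains = candidate_core_chains("dbr:Vostok_1", toy_kg)
# yields the one-hop chain [('-', 'dbo:mission')] and the two-hop chain
# [('-', 'dbo:mission'), ('+', 'dbo:birthPlace')]
```

In the real system this enumeration runs against the DBpedia endpoint, and the multi-entity filter described above further prunes the set.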

Although we limit the core chains to a length of two hops for the purposes of this study, this approach can be generalized to longer core chains; however, this may pose additional challenges because of the exponential explosion of candidate core chains.

Core Chain Candidate Ranking

After generating a set of core chain candidates for a given question, we employ a neural ranking model to select the most plausible core chain. This is done by computing the similarity of a core chain with the input question. Given a question utterance q = (q_1, ..., q_n), where q_i is the i-th word in the question, and similarly a core chain c = (c_1, ..., c_m), we model the scalar similarity score as follows:

s(q, c) = f_comp(enc_q(q), enc_c(c))

where the encoder functions enc_q, enc_c and the compare function f_comp are trained jointly. After generating a set of candidate core chains C, we select the most plausible core chain as follows:

c* = argmax_{c ∈ C} s(q, c)
We studied several encoding and comparing functions, trained in multiple configurations, as described below.
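Before turning to the individual encoders, the overall score-and-argmax pipeline can be sketched as follows, with a toy averaging encoder and hypothetical word vectors standing in for the trained models:

```python
def encode(tokens, word_vec):
    """Toy bag-of-vectors encoder standing in for a trained encoder
    function: it simply averages the word vectors of a sequence."""
    vecs = [word_vec[t] for t in tokens]
    return [sum(dims) / len(vecs) for dims in zip(*vecs)]

def score(question, chain, word_vec):
    """Similarity score: dot-product compare over the two encodings."""
    q, c = encode(question, word_vec), encode(chain, word_vec)
    return sum(a * b for a, b in zip(q, c))

def best_chain(question, candidates, word_vec):
    """argmax over the candidate core chains."""
    return max(candidates, key=lambda c: score(question, c, word_vec))

# Hypothetical 2-d word vectors, purely for illustration:
word_vec = {
    "birth": [1.0, 0.0], "place": [0.0, 1.0],
    "birthplace": [1.0, 1.0], "spouse": [-1.0, 0.0],
}
winner = best_chain(["birth", "place"], [["spouse"], ["birthplace"]], word_vec)
# winner == ["birthplace"]
```

The trained system replaces the averaging encoder with the neural encoders described next, but the score-then-argmax structure is the same.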


Encoders translate a core chain or a question into a fixed-length vector representation.

We include some simple models as the baselines for our study, propose a novel slot matching encoder, and appropriate two existing models [Parikh et al.2016, Yu et al.2017], which we find suitable for our task.

Bidirectional LSTM:

We use Bidirectional Long Short-Term Memory [Hochreiter and Schmidhuber1997] networks (BiLSTM) for encoding both the question and the core chains. The inputs q and c are treated as sequences of words, which are embedded using pre-trained GloVe [Pennington, Socher, and Manning2014] vectors and passed to the LSTM layers:

enc_q(q) = BiLSTM_q(emb(q)),    enc_c(c) = BiLSTM_c(emb(c))

where the functions return the final hidden state of the LSTM.


CNN:

We also explore a CNN architecture, as proposed in [Kim2014], which uses three multi-channel convolutions to create multiple feature vectors of a sequence; these are then max-pooled across time, concatenated, and passed through a linear layer to create a fixed-length representation. The CNN encoders are modelled analogously to the BiLSTM encoders and are denoted enc_q^cnn and enc_c^cnn, respectively.

Slot Matching Model (Novel):

The previously described models encode the question and the core chain each into a single vector representation, which forces complex transformations on both sequences, possibly creating a bottleneck that limits performance. In order to alleviate this problem, we propose a more structured encoding scheme which partitions the core chain into hops, and creates multiple representations of the question, called slots, corresponding to each hop; these are then individually compared.

The proposed model works as follows. First, the question is encoded using a BiLSTM, yielding hidden states h_1, ..., h_n. For the k-th hop in the core chain, we define a trainable slot attention vector w_k that is used as the query vector to compute attention weights over the hidden states. Note that w_k is shared across all examples. Finally, a slot-specific question representation q_k is computed by first adding the word embeddings to the encodings, and then using the corresponding scalar attention weights to summarize over time. This process can be summarized as follows:

α_i^k = softmax_i(w_k · h_i),    q_k = Σ_i α_i^k (h_i + emb(q_i))

We represent the core chains by separately encoding each hop (direction and the predicate’s surface form) with another LSTM (enc_hop), and add skip connections from the embedding layer to it:

c_k = LSTM(emb(c^k)) + Σ_j emb(c^k_j)

where c^k is the sequence of words in the k-th hop of the core chain. Finally, the q_k and c_k for the different slots k, as just defined, are concatenated to yield enc_q(q) and enc_c(c).
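A minimal sketch of the slot-attention summary in pure Python, with toy two-dimensional vectors; in the trained model the hidden states come from the BiLSTM and the slot vector is learned:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def slot_summary(hidden, embedded, slot_vec):
    """Slot-specific question representation: the slot vector queries the
    hidden states for attention weights, and the weighted summary adds a
    skip connection from the word embeddings."""
    weights = softmax([dot(slot_vec, h) for h in hidden])
    dim = len(hidden[0])
    return [
        sum(w * (h[d] + e[d]) for w, h, e in zip(weights, hidden, embedded))
        for d in range(dim)
    ]

# Two hidden states; the slot vector attends almost entirely to the first:
summary = slot_summary(
    hidden=[[1.0, 0.0], [0.0, 1.0]],
    embedded=[[0.0, 0.0], [0.0, 0.0]],
    slot_vec=[10.0, 0.0],
)
# summary is dominated by the first hidden state
```

Because each hop has its own slot vector, the model produces one such summary per hop, which is then compared against that hop's encoding.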

Note that the model proposed here deviates from cross attention between the input sequences (which we also experiment with, as described below), since in our case the attention weights are not affected by the predicates in the core chain: the encoder attempts to focus on where a predicate is mentioned in q, not on which predicate is mentioned.

Decomposable Attention Model:

[Parikh et al.2016] propose a model for the natural language inference task which computes a summary of two input sequences, weighted by soft cross-attention signaling the important parts of one sequence w.r.t. the other. We hypothesize that this local alignment based model, while susceptible to overfitting (due to increased model complexity), might be effective in our use case.

We use the aforementioned BiLSTM layer to encode the question and the core chain, and use the attend-align-compare layers to compute a fixed-length summary vector for both sequences. Then, we use skip connections and concatenate the last states of the encoded vector and the summarized vectors.


Concretely, enc_q(q, c) = [ lstm_q(q) ; a_q ] and enc_c(q, c) = [ lstm_c(c) ; a_c ], where a_q and a_c are the summary vectors computed by the attend-align-compare layers of the model proposed in [Parikh et al.2016]. Notationally, this encoder is slightly unlike the rest, as it requires both sequences q and c as inputs to compute alignments of one sequence against the other.

Hierarchical Residual Sequence Model:

Finally, we appropriate another neural ranking model, originally proposed in [Yu et al.2017] for a task closely aligned to ours, namely, that of ranking KG predicates for relation detection in questions. The model learns to create a hierarchical representation of the question, and a representation of a set of predicates across different granularities (word level, relation level):


Concretely, enc_c^hr(c) = [ lstm_word(c_word) ; lstm_rel(c_rel) ], where lstm_word encodes the core chain as a sequence of words c_word, and lstm_rel encodes it as a sequence of predicates c_rel. Like the other encoders, we define enc_q^hr and enc_c^hr.

Compare Function:

We use a function f_comp which computes a single scalar score from the vector representations of the question and the core chain. To compute it in our experiments, we use (i) the dot product, and (ii) a feed-forward layer to encode the question and core chain representations, followed by the dot product:

f_comp(u, v) = u · v    (dot)
f_comp(u, v) = FF_q(u) · FF_c(v)    (dense dot)

where FF is a feed-forward layer. The scoring function s(q, c), defined above, is thus composed of one of the five encoders and one of these comparison functions.
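The two compare variants can be sketched as follows; the weight matrices here are illustrative stand-ins for the trained feed-forward layers:

```python
def compare_dot(u, v):
    """(i) plain dot product of the two encodings."""
    return sum(a * b for a, b in zip(u, v))

def compare_dense_dot(u, v, W_q, W_c):
    """(ii) a linear map per input (standing in for the trained
    feed-forward layers), followed by the dot product."""
    def linear(W, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return compare_dot(linear(W_q, u), linear(W_c, v))

identity = [[1.0, 0.0], [0.0, 1.0]]
plain = compare_dot([1.0, 2.0], [3.0, 4.0])
dense = compare_dense_dot([1.0, 2.0], [3.0, 4.0], identity, identity)
# with identity weight matrices the two variants coincide
```

The dense-dot variant lets each side be projected into a shared comparison space before scoring, at the cost of extra parameters.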

Predicting Auxiliary Constraints

In this phase, we learn to predict the auxiliary constraints and flags used for constructing a complete query graph. We begin by predicting the intent of the question. In both the datasets in our experiments, a question can ask for the cardinality of the projected variable, ask whether a certain fact exists in the KG, or simply ask for the set of values in the projected variable. Further, this division, hereafter referred to as count, ask and set based questions, is mutually exclusive. We thus use a simple BiLSTM based classifier to predict the intent as one of the three.

Next, we focus on detecting class based constraints on the ungrounded nodes of the core chain. For instance, in the question ”Which movies has Keanu Reeves starred in?”, the word movies constrains the list of everything that Keanu Reeves starred in, which would otherwise include TV shows, plays, etc. In SPARQL, these constraints are expressed as a triple like (?uri, rdf:type, dbo:Film). We represent them in our query graphs as auxiliary constraints on either the existential or the lambda variable. We use two different, separately trained models to predict (i) whether such a constraint exists in the question, and if so, on which variable, and (ii) which class is used as the constraint. The former is accomplished with a simple BiLSTM, akin to the aforementioned intent classifier. For the latter, we use a pairwise ranking based model, specifically the first model mentioned in the core chain candidate ranking section. Further details on training all these models can be found in the Approach Evaluation section.

We now have all the information required to construct the query graph and the corresponding SPARQL query. For brevity’s sake, we omit the algorithm to convert query graphs to SPARQL; for limited use cases, simple template matching suffices.
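As an illustration of such template matching, here is a hypothetical sketch covering one-hop chains only; the templates, argument format, and URIs are our own illustrative assumptions, not the paper's actual algorithm:

```python
def to_sparql(chain, intent="set", class_constraint=None):
    """Hypothetical template-based conversion of a one-hop core chain
    (entity URI, direction, predicate URI) into SPARQL. Real query
    graphs need more templates (two hops, counts on other variables, ...)."""
    entity, direction, predicate = chain
    body = (f"{entity} {predicate} ?x ." if direction == "+"
            else f"?x {predicate} {entity} .")
    if class_constraint:                        # rdf:type constraint on ?x
        body += f" ?x rdf:type {class_constraint} ."
    if intent == "ask":                         # boolean questions
        return f"ASK {{ {body} }}"
    if intent == "count":                       # cardinality questions
        return f"SELECT (COUNT(?x) AS ?c) WHERE {{ {body} }}"
    return f"SELECT ?x WHERE {{ {body} }}"      # plain set questions

query = to_sparql(("dbr:Germany", "-", "dbo:capital"), intent="ask")
# query == "ASK { ?x dbo:capital dbr:Germany . }"
```

The intent flag and the class constraint predicted in this phase plug directly into such templates.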


In this section, we describe the different experiments we perform, and their results.

                    |        Pointwise             |        Pairwise
Model               | CCA  MRR  P    R    F1       | CCA  MRR  P    R    F1
BiLSTM Dot          | 0.56 0.64 0.63 0.74 0.68     | 0.53 0.62 0.59 0.71 0.64
BiLSTM Dense Dot    | 0.42 0.44 0.50 0.61 0.55     | 0.45 0.55 0.53 0.65 0.58
CNN Dot             | 0.37 0.47 0.45 0.57 0.50     | 0.41 0.51 0.50 0.61 0.55
DAM Dot             | 0.48 0.58 0.57 0.68 0.62     | 0.50 0.59 0.58 0.69 0.63
HRM Dot             | 0.54 0.64 0.62 0.73 0.67     | 0.47 0.57 0.55 0.67 0.60
Slot-Matching Dot   | 0.58 0.66 0.65 0.76 0.70     | 0.58 0.66 0.66 0.77 0.71
(a) Performance on LC-QuAD.

                    |        Pointwise             |        Pairwise
Model               | CCA  MRR  P    R    F1       | CCA  MRR  P    R    F1
BiLSTM Dot          | 0.30 0.39 0.25 0.41 0.31     | 0.25 0.37 0.20 0.35 0.25
BiLSTM Dense Dot    | 0.41 0.26 0.26 0.38 0.31     | 0.28 0.37 0.22 0.38 0.28
CNN Dot             | 0.23 0.29 0.12 0.28 0.17     | 0.25 0.32 0.20 0.35 0.25
DAM Dot             | 0.35 0.47 0.28 0.43 0.34     | 0.25 0.37 0.22 0.38 0.28
HRM Dot             | 0.31 0.41 0.26 0.43 0.32     | 0.20 0.32 0.20 0.36 0.26
Slot-Matching Dot   | 0.33 0.43 0.22 0.38 0.28     | 0.25 0.40 0.17 0.33 0.22
(b) Performance on QALD-7.

Table 3: Performance on LC-QuAD and QALD-7. The reported metrics are core chain accuracy (CCA) and mean reciprocal rank (MRR) of the core chain rankings, as well as precision (P), recall (R) and F1 of the execution results of the whole system.

Approach Evaluation

Our first experiment focuses on the accuracy of different ranking models as discussed in the previous section, and their effect on the overall performance of the pipeline.


We use the LC-QuAD [Trivedi et al.2017] and QALD-7-multilingual [Usbeck et al.2017] datasets to evaluate the performance of our system.

LC-QuAD is a gold standard question answering dataset over the DBpedia 2016-04 release, containing 5000 NLQ and SPARQL pairs. Our grammar (as defined in the Background section) covers all kinds of questions in this dataset.

QALD is a long running challenge for KGQA over DBpedia. While newer versions are available, we use QALD-7 (Multilingual) for our purposes, as it is based on the same DBpedia release as LC-QuAD. QALD-7 is a gold-standard dataset with 220 training and 43 test questions, along with their corresponding SPARQL queries. This dataset is more diverse than LC-QuAD, with some of the questions outside the scope of our system. We nonetheless consider all the questions in our evaluation.

Evaluation Metrics

We measure the performance of the proposed methods in terms of its ability to find the correct core chain, as well as the execution results of the whole system. For core chain ranking, we report Core Chain Accuracy (CCA) and Mean Reciprocal Rank (MRR). Based on the execution results of the whole system (including auxiliary components), we also report Precision, Recall and F1.


We train our core chain ranking models both in a pointwise and a pairwise setting, using negative log likelihood and max-margin losses respectively; both are commonly used in ranking problems. In both settings, our models are trained with negative sampling, where we sample 100 negative core chains per question, along with the correct one, for every iteration. In the pointwise training setting, the objective is to maximize the score of the correct core chain and minimize the scores of the negative samples: L = −(y log σ(s(q, c)) + (1 − y) log(1 − σ(s(q, c)))), where y is 1 if the question-chain pair is correct and 0 otherwise, and σ is the sigmoid function. In contrast, in pairwise training, we maximize the difference of the scores, up to some margin γ: L = max(0, γ − s⁺ + s⁻), where s⁺ and s⁻ are the scores for the correct and an incorrect question-chain pair, respectively.
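The two losses can be sketched as follows in plain Python, assuming a sigmoid maps raw scores to probabilities in the pointwise case:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pointwise_loss(score, label):
    """Negative log likelihood of the correct/incorrect label (1 or 0)
    for a single question-chain pair."""
    p = sigmoid(score)
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

def pairwise_loss(score_pos, score_neg, margin=1.0):
    """Max-margin loss: the correct chain should outscore an incorrect
    one by at least `margin`; well-separated pairs incur zero loss."""
    return max(0.0, margin - score_pos + score_neg)
```

During training, each question contributes one positive pair and 100 negative pairs per iteration, summed under one of these two losses.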

Our models are trained for 300 epochs, with early stopping based on validation accuracy. We use a 70-10-20 split as train, validation and test data over LC-QuAD (in keeping with the train-test splits suggested by the authors). QALD-7 has a predefined train-test split; we however use one eighth of the train data for validation. In both cases, we do not train on the validation data. We embed the tokens using GloVe embeddings [Pennington, Socher, and Manning2014] (trained over the Common Crawl corpus, 300 dimensions, 1.9M tokens in the vocabulary), and keep the relevant subset of the embeddings trainable in the model. We use the Adam optimizer, set the learning rate to 0.001, and clip gradients at 0.5. In this experiment, we share parameters between the question and core chain encoders for the BiLSTM and CNN models, which increases their performance. We discuss the effects of parameter sharing in another experiment below.

The intent and rdf-type existence prediction models are trained without negative samples, but with early stopping with the same splits, and the rdf-type class prediction model is trained akin to the core chain ranking models, with negative sampling and early stopping. These models are not trained jointly, which we intend to explore in the future.

Results and Error Analysis

In our experiments, as detailed in Table 3, we observe that the F1 scores of almost all models lie within a short range of 60% to 71%. The slot matching model performs the best among them, both in the pointwise and the pairwise setting. Upon closer inspection, we find that the model learns to attend over the entities and predicates in the question, as visualized in Fig 5. While DAM dot also uses attention, its performance generally lags behind the slot matching model. We attribute this to the fact that the DAM dot model creates a new representation of the question for each core chain (due to cross-attention between core chain and question sequences), while in the slot matching model a question’s representation does not depend on the corresponding core chain, thereby helping it generalize better. The performance of the model with the BiLSTM encoder is in keeping with recent findings in [Mohammed, Shi, and Lin2017], i.e. a simple recurrent model can perform almost as well as the best performing alternative.

Overfitting is generally observed across all our models trained over LC-QuAD, and is much worse in the case of QALD-7. For instance, the slot matching model, trained over LC-QuAD in the pairwise setting, has a core chain accuracy of 93.14% on the training data. All our models performed poorly on QALD-7, the best being DAM dot. This is not surprising given that QALD-7 has only 220 examples in its training set, which is 20 times smaller than LC-QuAD. We show in the next section that transfer learning across datasets is a viable strategy to improve model performance in this case.

                    | Pointwise    | Pairwise
Model               | (i)   (ii)   | (i)   (ii)
BiLSTM Dot          | 0.49  0.36   | 0.51  0.31
Slot Matching Dot   | 0.49  0.41   | 0.54  0.25
Table 4: CCA for the transfer learning experiments: (i) pre-training on LC-QuAD and fine-tuning on QALD-7, (ii) coalescing the training data of both datasets.
Figure 5: Visualized attention weights of the slot matching question encoder for the question ”What is the birth place of the astronaut whose mission was the vostok programme?”. Here, the first row represents the attention weights for the first slot, and the second row those for the second slot.

Transfer Learning

As mentioned above, neural ranking models seem to be ineffective when training solely on QALD-7 due to a more varied and noticeably smaller dataset. We hypothesize that using LC-QuAD to pretrain the ranking models might lead to a significant increase in performance.


  • We pre-train our ranking models over LC-QuAD, fine-tune them over QALD-7’s train split, and evaluate over its test split.

  • We also perform this experiment in a simpler setting, by coalescing the training data across the two datasets and testing over QALD-7.

Both experiments are conducted with the two best performing encoders, BiLSTM and slot matching, using the dot product as the compare function. The results of this experiment are reported in Table 4. Here, we only report core chain accuracies, as the rest of the system remains unchanged for the purposes of this experiment.

In our experiments, we find that while both forms of transfer learning improve the performance of the ranking models, fine-tuning is more effective in every setting. While the slot-matching model trained in pairwise setting and fine-tuned for QALD-7 gives the best performance, we observe that pointwise models exhibit a more consistent performance improvement. While fine-tuning our models, the initial learning rate was set to 0.0001, i.e. an order of magnitude less than in the first experiment. Not doing so leads to the models overfitting on the training dataset. For instance, the slot matching model fine-tuned in a pairwise setting with 0.001 learning rate did not show any improvement.

Further Analysis

In order to better assess the impact of different parts of the system, we perform a series of analyses over our two best performing models, i.e. slot matching dot and BiLSTM dot. Unless specified otherwise, we only report the core chain accuracy, and the hyperparameters for each experiment are the same as in the first experiment.

Pointwise vs Pairwise

As observed in Tables 3(b) and 4, pointwise models generally outperform their pairwise counterparts when trained on small datasets, but have comparable performance otherwise. To analyze whether this performance gain is anecdotal, i.e. specific to QALD-7, or generalizes to smaller datasets, we train our models on one fifth of LC-QuAD’s training split. We find that across multiple runs, BiLSTM dot trained in a pointwise setting consistently outperforms its pairwise counterpart (34% and 31% respectively), while the reverse holds for the slot matching model (31% pointwise, 36% pairwise). The results are thus inconclusive, suggesting that the performance gain on QALD-7 is due to innate characteristics of that dataset.

Parameter Sharing

Given that in the first experiment the BiLSTM dot model shares parameters across the two encoders while the slot matching model does not, we intend to find the effect of sharing encoder parameters. To do so, we experiment with both the slot matching dot and the BiLSTM dot model, with and without sharing encoder parameters.

We find that sharing parameters between the encoders of the slot matching dot model trained in a pairwise setting results in a performance improvement (+2%), thereby making it the best performing model across all settings. However, in a pointwise setting, the performance drops by 6% instead. Similar results are observed when we use this (pointwise) model in the setup of the second experiment. On the other hand, not sharing parameters across the encoders in BiLSTM dot results in a consistent performance drop (-6%). We attribute this to the siamese nature of the BiLSTM dot model, i.e. both encoders process their inputs in the same manner, whereas in the slot matching dot model the question encoder uses self attention to create different representations of the input, while the core chain encoder is a simpler recurrent encoder with residual connections.

Auxiliary Component Analysis

In this subsection we discuss the performance of the auxiliary components of our system, namely, the intent prediction, rdf-type existence, and rdf-type class prediction models. The intent prediction model solves the relatively easy task of sequence classification between set, count and ask questions. This model achieves a test accuracy of 99.1% and 91.8% when trained over LC-QuAD and QALD-7 respectively. The rdf-type existence model performs the similar task of predicting whether a class constraint is implied in the question, and if so, on which variable. It achieves an accuracy of 75.3% over LC-QuAD. Its performance over QALD-7 is as low as 37.2%, due to a disproportionately weighted example distribution. For this reason, we simply use a model pre-trained over LC-QuAD for our purposes; in our experiments, it achieves a 77.0% accuracy. The same holds for the rdf-type class prediction model, which predicts the class used as the constraint, if applicable. This model achieves 69% accuracy on LC-QuAD.

Related Work

The state-of-the-art methods for complex QA over knowledge graphs take primarily three kinds of approaches: (i) using semantic parsers to create NLQ representations which are then grounded against the KG, (ii) generating grounded candidate representations for the NLQ and re-ranking question-graph pairs, and (iii) neural sequence decoding models that directly generate a logical form from a given NLQ. We keep our discussion of related literature restricted to the first two.

Traditional semantic parsing based KGQA approaches [Dubey et al.2016, Xu et al.2014, Fader, Zettlemoyer, and Etzioni2014, Berant and Liang2014, Reddy et al.2017, Cui et al.2017] aim to learn semantic parsers that generate ungrounded logical form expressions from NLQs, and subsequently ground the expressions semantically by querying the KG.

In recent years, several papers have taken an alternate approach to semantic parsing by treating it as a problem of semantic graph generation and re-ranking. [Bast and Haussmann2015] compare a set of manually defined query templates against the NLQ and generate a set of grounded query graph candidates by enriching the templates with potential predicates. Notably, [Yih et al.2015] create grounded query graph candidates using a staged heuristic search algorithm, and employ a neural ranking model for scoring and finding the optimal semantic graph. The approach we propose in this work is closely related to theirs.

[Yu et al.2017] use a hierarchical representation of KG predicates in their neural query graph ranking model. They compare their results against a local sub-sequence alignment model with cross-attention [Parikh et al.2016]. We adapt the models proposed by both [Parikh et al.2016] and [Yu et al.2017] to our task, and compare them against the baselines we propose.

Conclusion and Future Work

We studied the performance of neural ranking models for ranking query graphs and showed that this family of approaches can be used for question answering against a KG. We further explored the effects of numerous variations in structure, training, and hyperparameters, while evaluating the models on LC-QuAD and QALD-7. We also proposed a novel task-specific ranking model which outperforms the others in our experiments. Finally, we showed that transfer learning can be effective in offsetting the lack of training data.

We aim to extend this work by using techniques to transfer knowledge from pre-trained language models trained over domain-agnostic text. Further, we intend to explore mechanisms enabling in-network answer supervision based on differentiable query execution [Cohen2016].


  • [Bao et al.2014] Bao, J.; Duan, N.; Zhou, M.; and Zhao, T. 2014. Knowledge-based question answering as machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, 967–976.
  • [Bast and Haussmann2015] Bast, H., and Haussmann, E. 2015. More accurate question answering on freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, 1431–1440. ACM.
  • [Berant and Liang2014] Berant, J., and Liang, P. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1415–1425. Association for Computational Linguistics.
  • [Berant et al.2013] Berant, J.; Chou, A.; Frostig, R.; and Liang, P. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1533–1544.
  • [Bollacker et al.2008] Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250. ACM.
  • [Cohen2016] Cohen, W. W. 2016. Tensorlog: A differentiable deductive database. arXiv preprint arXiv:1605.06523.
  • [Cui et al.2017] Cui, W.; Xiao, Y.; Wang, H.; Song, Y.; Hwang, S.-w.; and Wang, W. 2017. Kbqa: Learning question answering over qa corpora and knowledge bases. Proc. VLDB Endow. 10(5):565–576.
  • [Dubey et al.2016] Dubey, M.; Dasgupta, S.; Sharma, A.; Höffner, K.; and Lehmann, J. 2016. Asknow: A framework for natural language query formalization in sparql. In International Semantic Web Conference, 300–316. Springer.
  • [Fader, Zettlemoyer, and Etzioni2014] Fader, A.; Zettlemoyer, L.; and Etzioni, O. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1156–1165. ACM.
  • [Hochreiter and Schmidhuber1997] Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • [Kim2014] Kim, Y. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
  • [Lehmann et al.2015] Lehmann, J.; Isele, R.; Jakob, M.; Jentzsch, A.; Kontokostas, D.; Mendes, P. N.; Hellmann, S.; Morsey, M.; Van Kleef, P.; Auer, S.; et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web 6(2):167–195.
  • [Mohammed, Shi, and Lin2017] Mohammed, S.; Shi, P.; and Lin, J. 2017. Strong baselines for simple question answering over knowledge graphs with and without neural networks. arXiv preprint arXiv:1712.01969.
  • [Parikh et al.2016] Parikh, A.; Täckström, O.; Das, D.; and Uszkoreit, J. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2249–2255. Association for Computational Linguistics.
  • [Pennington, Socher, and Manning2014] Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), 1532–1543.
  • [Reddy et al.2017] Reddy, S.; Täckström, O.; Petrov, S.; Steedman, M.; and Lapata, M. 2017. Universal semantic parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 89–101. Association for Computational Linguistics.
  • [Trivedi et al.2017] Trivedi, P.; Maheshwari, G.; Dubey, M.; and Lehmann, J. 2017. Lc-quad: A corpus for complex question answering over knowledge graphs. In International Semantic Web Conference, 210–218. Springer.
  • [Usbeck et al.2017] Usbeck, R.; Ngomo, A.-C. N.; Haarmann, B.; Krithara, A.; Röder, M.; and Napolitano, G. 2017. 7th open challenge on question answering over linked data (qald-7). In Semantic Web Evaluation Challenge, 59–69. Springer.
  • [Vrandečić and Krötzsch2014] Vrandečić, D., and Krötzsch, M. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10):78–85.
  • [Xu et al.2014] Xu, K.; Zhang, S.; Feng, Y.; and Zhao, D. 2014. Answering natural language questions via phrasal semantic parsing. In Natural Language Processing and Chinese Computing. Springer. 333–344.
  • [Yih et al.2015] Yih, W.-t.; Chang, M.-W.; He, X.; and Gao, J. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1321–1331. Association for Computational Linguistics.
  • [Yu et al.2017] Yu, M.; Yin, W.; Hasan, K. S.; dos Santos, C.; Xiang, B.; and Zhou, B. 2017. Improved neural relation detection for knowledge base question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 571–581. Association for Computational Linguistics.