Deep Learning Approaches for Question Answering on Knowledge Bases: an evaluation of architectural design choices

12/06/2018, by Sherzod Hakimov et al.

The task of answering natural language questions over knowledge bases has received wide attention in recent years. Various deep learning architectures have been proposed for this task. However, architectural design choices are typically not systematically compared nor evaluated under the same conditions. In this paper, we contribute to a better understanding of the impact of architectural design choices by evaluating four different architectures under the same conditions. We address the task of answering simple questions, which consists in predicting the subject and predicate of a triple given a question. In order to provide a fair comparison of different architectures, we evaluate them under the same strategy for inferring the subject, and compare different architectures for inferring the predicate. The architecture for inferring the subject is based on a standard LSTM model trained to recognize the span of the subject in the question and on a linking component that links the subject span to an entity in the knowledge base. The architectures for predicate inference are based on i) a standard softmax classifier ranging over all predicates as output, ii) a model that predicts a low-dimensional encoding of the predicate given an entity representation and the question, iii) a model that learns to score a pair of subject and predicate given the question, and iv) a model based on the well-known FastText model. The comparison of architectures shows that FastText provides better results than the other architectures.

I Introduction

The task of Question Answering (QA) has received increasing attention in the last few years. Most research has concentrated on the task of answering factoid questions such as Who wrote Mildred Pierced?, yielding the answer Stuart Kaminsky. Typically, such answers are extracted from a knowledge base (KB). A frequently used dataset in this context is the SimpleQuestions [2] dataset, which consists of simple questions that can be answered with a single fact from the Freebase KB. For instance, the question above can be answered using the following triple from Freebase:

Subject:   m.04t1ftb (mildred_pierced)
Predicate: book.written_work.author
Object:    m.03nx4yz (stuart_kaminsky)

The system needs to identify the relevant entity (subject), i.e. mildred_pierced in the example question, and infer the appropriate predicate, i.e. book.written_work.author. In the case of SimpleQuestions, all questions involve a single triple, with the answer being the corresponding object. Thus, the task involves essentially predicting the subject and predicate of a triple.

Many different architectures have been proposed for this task, in particular many deep learning architectures. However, a systematic comparison of different architectural choices has not been provided so far. In particular, different predicate prediction systems have used different approaches to identifying the entity, so that they are not directly comparable.

Using a common model for entity prediction based on an NER architecture, we consider four different architectures for the predicate prediction task:

  • BiLSTM-Softmax: this architecture uses a standard BiLSTM softmax classifier to predict the property in a question where the output ranges over all properties seen during training.

  • BiLSTM-KB: instead of using a softmax output layer, this model predicts a low-dimensional representation of the predicate, which is matched against pre-trained KB embeddings; the closest predicate is found using cosine similarity.

  • BiLSTM-Binary: this architecture outputs a binary decision on whether a pair of subject and predicate matches for the given question (true or false).

  • FastText-Softmax: this architecture uses FastText (https://github.com/facebookresearch/fastText) as a classifier to predict the property.

The main contributions of this paper are:

  • Most systems do not report the performance of their individual components but only the overall score. This makes it hard to compare them at the sub-task level (entity linking, predicate classification, answer selection). We provide evaluations for all components in isolation under the same conditions.

  • We compare different architectural choices for predicting the predicate of a given question.

  • We emphasize the importance of the entity linking component and show how it affects overall performance on the question answering task.

The paper is structured as follows: Section II describes our NER-based system for predicting the entity as well as the four architectures for predicting the predicate. Section III presents the results of our evaluation along with an error analysis and discussion. Before concluding, we discuss related work.

II Methods

The task of answering simple questions requires identifying the correct entity and predicate in the question. In this section, we describe in detail the model for identifying the span of the entity and retrieving entity candidates. Then, we describe four architectures for predicate prediction that build on this common entity prediction model. All four architectures rely on a candidate retrieval step that extracts candidate pairs of subject and predicate, and then score these pairs to predict a query consisting of a single subject and predicate. The process is shown in Figure 1. In order to retrieve entity candidates we rely on an inverted index, whose construction we detail in the section below.


Fig. 1: Visualization of a candidate pair generation process where named entities are queried on surface forms from the knowledge base

II-A Inverted Index Construction for Entity Retrieval

We extract all entity mentions from Freebase using the type.object.name and common.topic.alias predicates. During the extraction process, we also count how often a surface form occurs together with an entity. As a result, we generate a surface form index that maps each surface form to subjects with associated frequency values. Additionally, we merge in a surface form index created for DBpedia entities using owl:sameAs links; Hakimov et al. [7] provide such an index of surface forms. We converted the DBpedia URIs into Freebase MIDs using the links provided by the DBpedia release of 2014 (http://oldwiki.dbpedia.org/Downloads2014#links-to-freebase). The converted index was merged with the index data extracted from Freebase, aggregating the frequency values whenever the same surface form and Freebase URI (MID) existed in both indexes.

A sample from this index is given below. All surface forms in the index are normalized: they are converted to lowercase, and punctuation as well as non-alphanumeric characters are removed.

Surface Form URI Frequency
mildred pierced m.04t1ftb 11
mildred pierced m.04t_038 8
mildred pierced m.0cgv06r 7
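To make the construction concrete, the following is a minimal Python sketch of how such a surface form index could be built and queried; the normalization, helper names and toy data are our own illustration, not the exact implementation used in the paper.

import re
from collections import defaultdict

def normalize(surface_form):
    # Lowercase and strip punctuation / non-alphanumeric characters, as described above.
    return re.sub(r"[^a-z0-9 ]+", "", surface_form.lower()).strip()

def build_index(mention_entity_pairs):
    # Aggregate (surface form, MID) co-occurrence counts into an inverted index.
    index = defaultdict(lambda: defaultdict(int))
    for surface_form, mid in mention_entity_pairs:
        index[normalize(surface_form)][mid] += 1
    return index

def retrieve_candidates(index, mention):
    # Return candidate MIDs with frequencies, most frequent first.
    entries = index.get(normalize(mention), {})
    return sorted(entries.items(), key=lambda kv: -kv[1])

# Toy usage with the sample entries from the table above.
pairs = [("Mildred Pierced", "m.04t1ftb")] * 11 + \
        [("Mildred Pierced", "m.04t_038")] * 8 + \
        [("Mildred Pierced", "m.0cgv06r")] * 7
index = build_index(pairs)
print(retrieve_candidates(index, "mildred pierced?"))
# [('m.04t1ftb', 11), ('m.04t_038', 8), ('m.0cgv06r', 7)]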

II-B Named Entity Recognition

We trained a Named Entity Recognizer (NER) similar to the one proposed by Chiu and Nichols [4] using weak supervision; Raj [13] provided the implementation. Since the dataset requires a single subject, we adapted the NER to identify a single entity span.

The original approach is tailored towards identifying common named entity (NE) types: LOCATION, PERSON, ORGANISATION, MISCELLANEOUS. Our goal is to extract a single named entity span without distinguishing between those types. We use an IO tagging scheme to mark tokens inside (I) and outside (O) of the single named entity of interest.

We merge consecutive tokens that are labeled I. This process is illustrated in Figure 2. In the predicted output, the tokens Mildred and Pierced are assigned the label I, while all other tokens are assigned O.


Fig. 2: Named Entity Recognition using Bidirectional LSTM

The architecture is based on Bidirectional LSTMs (BiLSTM) [6] and is composed of two LSTM [8] layers. The model uses words and characters as features, along with word case (lowercase, uppercase). These features are concatenated and fed into a neural network.

The input sentence is tokenized. Each token is converted into a word embedding representation using Glove [12] vectors (100-dimensional). Each token is also represented in terms of characters by converting it into a matrix in which each row is a one-hot encoding of a character. The character matrix is fed into a Convolutional Neural Network (CNN) [10], which applies a convolution function to the input vectors. We apply a max-pooling layer on the CNN output, which captures the most important character features of the token. This process is shown in Figure 3. A sigmoid function is applied to the output layer to infer the maximally scoring label for each token.


Fig. 3: CNN Max Pooling application on Character Embeddings
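As a rough illustration of the character-level encoding described above, the following PyTorch sketch computes character-level word embeddings with a convolution followed by max-pooling over the character dimension; the dimensions and names are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class CharCNNEncoder(nn.Module):
    # Character-level word embeddings via CNN + max-pooling (cf. Figure 3).
    def __init__(self, n_chars=100, char_dim=30, out_dim=100, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel_size, padding=1)

    def forward(self, char_ids):
        # char_ids: (batch, n_tokens, max_chars) integer character indices
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * t, c))   # (b*t, max_chars, char_dim)
        x = self.conv(x.transpose(1, 2))             # (b*t, out_dim, max_chars)
        x, _ = x.max(dim=2)                          # max-pool over the character dimension
        return x.view(b, t, -1)                      # (batch, n_tokens, out_dim)

# Toy usage: 2 questions, 5 tokens each, up to 12 characters per token.
encoder = CharCNNEncoder()
char_ids = torch.randint(1, 100, (2, 5, 12))
print(encoder(char_ids).shape)  # torch.Size([2, 5, 100])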

As the SimpleQuestions dataset does not explicitly annotate the subject span in the question, we rely on weak supervision to infer it during training. We infer the position of the subject by querying the inverted index for each n-gram in the question, and assume that the correct subject span is the one whose index lookup returns the expected subject URI. An algorithm for inferring the span of a subject is given in Algorithm 1. The inferred token labels are used as the expected output labels for the NER model.

The NER model is trained for 15 epochs; the embedding size of the BiLSTM is 300 and the CNN uses 3 kernels.

1: procedure Find-Span(s, u, m)    ▷ input sentence s, expected URI u, maximum n-gram size m
2:     N ← all possible n-grams of size at most m extracted from the input s
3:     span ← ∅
4:     for each n-gram g in N do
5:         C ← retrieve_candidates(g)
6:         if u is in C then
7:             span ← g
8:             break    ▷ stop the loop once the expected URI is found
9:         end if
10:    end for
11:    return span    ▷ the inferred span for the given URI
12: end procedure
Algorithm 1: Inferring Named Entity Spans
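A runnable Python sketch of Algorithm 1; representing the index as a plain dictionary and trying longer n-grams first are our assumptions.

def ngrams(tokens, max_n):
    # All n-grams up to size max_n, longest first (longer spans tried first — our assumption).
    for n in range(min(max_n, len(tokens)), 0, -1):
        for i in range(len(tokens) - n + 1):
            yield tokens[i:i + n]

def find_span(sentence, expected_uri, max_n, index):
    # index: dict mapping a normalized surface form to {subject MID: frequency}.
    tokens = sentence.lower().split()
    for gram in ngrams(tokens, max_n):
        candidates = index.get(" ".join(gram), {})
        if expected_uri in candidates:   # stop as soon as the expected URI is found
            return gram
    return None

# Toy usage:
index = {"mildred pierced": {"m.04t1ftb": 11, "m.04t_038": 8}}
print(find_span("who wrote mildred pierced", "m.04t1ftb", 4, index))
# ['mildred', 'pierced']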

II-C Candidate Pair Generation

As shown in Figure 1, we apply the trained NER system and extract the entity mention, i.e. Mildred Pierced in our example. The extracted mention is queried on the surface form index and all matching entries are added to a candidate set. Each entry contains a subject URI (Freebase MID) and a frequency value. For example, the following subjects are found: m.04t1ftb, m.01d13qs, m.04t_038, m.0cgv06r.

We define $K$ as the set of triples of the form $(s, p, o)$ that appear in the Freebase-2M dataset. Given a subject $s$, we define the set of all properties that $s$ has as $props(s) = \{ p \mid \exists o : (s, p, o) \in K \}$.

We further define the set of candidate pairs for a mention $m$ as $C(m) = \{ (s, p) \mid s \in index(m),\ p \in props(s) \}$, where $index(m)$ is the set of subjects retrieved from the surface form index for $m$.

For example, the extracted candidate entity m.01d13qs has 2 predicates, music.release_track.release and music.release_track.recording. By combining each predicate with the candidate entity we generate candidate pairs (see Figure 1).

The next step is to find a ranking function that takes the input question text $q$, the identified mention $m$ and the candidate pairs $C(m) = \{(s_1, p_1), (s_2, p_2), \dots, (s_n, p_n)\}$, and returns the highest-ranking pair $(s^*, p^*)$:

$$(s^*, p^*) = \underset{(s, p)\, \in\, C(m)}{\arg\max}\ score(q, m, (s, p)) \qquad (1)$$

where $score(q, m, (s, p))$ computes the probability of a pair $(s, p)$ given $q$ and $m$ using the equation below:

$$score(q, m, (s, p)) = P(p \mid q) \cdot P(s \mid m) \qquad (2)$$

where $P(p \mid q)$ is the probability of predicate $p$ as computed by one of our four predicate models described below, and $P(s \mid m)$ is the probability of subject $s$ computed by normalizing the frequency scores retrieved for the mention $m$.
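A minimal sketch of this scoring step; the data structures and names are illustrative assumptions, not the paper's implementation.

def score_pairs(candidate_pairs, predicate_probs):
    # candidate_pairs: list of (subject_mid, predicate, subject_frequency) tuples
    # predicate_probs: dict mapping predicate -> P(p | q) from one of the predicate models
    subject_freq = {s: f for s, _, f in candidate_pairs}
    total = sum(subject_freq.values())
    scored = []
    for s, p, freq in candidate_pairs:
        p_subject = freq / total                     # normalized frequency = P(s | m)
        p_predicate = predicate_probs.get(p, 0.0)    # P(p | q)
        scored.append(((s, p), p_predicate * p_subject))
    return sorted(scored, key=lambda x: -x[1])       # Equation 1: pick the highest-scoring pair

# Toy usage:
pairs = [("m.04t1ftb", "book.written_work.author", 11),
         ("m.01d13qs", "music.release_track.release", 2)]
probs = {"book.written_work.author": 0.9, "music.release_track.release": 0.05}
print(score_pairs(pairs, probs)[0][0])  # ('m.04t1ftb', 'book.written_work.author')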

In the following sections, we describe our proposed models for the prediction of target predicates.

II-D Model 1: BiLSTM-Softmax

Our first model is a BiLSTM classifier that predicts the target predicate given the question text. It is a standard multi-class classification model that encodes the input text using word and character embeddings and predicts a class label via a softmax layer. Before passing the question text to our network, we replace the entity name with a special placeholder token e (e.g. “Who wrote e?”) that abstracts away the (inferred) subject mention. The model is very similar to the one proposed by [15].

II-D1 Architecture

Similar to the NER model, the question text is encoded on the word and character level. Character-level word embeddings are computed by applying a CNN layer with max-pooling on the characters of each token, the same process as explained above in Figure 3. Word and character embeddings are concatenated and passed through a BiLSTM layer. The final states of the BiLSTM are concatenated and fed into a feed-forward layer with a softmax activation function, which calculates a probability distribution over the set of predicates. We identified 1629 predicates in the training split of the SimpleQuestions dataset. The architecture is shown in Figure 4.


Fig. 4: BiLSTM-Softmax model that computes probability distributions for predicates given only the question text
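As a rough PyTorch sketch of the architecture in Figure 4, the following encodes the question with word embeddings only and classifies over all training predicates; for brevity it omits the character-level CNN features, and the dimensions (loosely following Section II-D2) and names are illustrative assumptions.

import torch
import torch.nn as nn

class BiLSTMSoftmax(nn.Module):
    # Encode the question with a BiLSTM and classify over all training predicates.
    def __init__(self, vocab_size, n_predicates, word_dim=100, hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.bilstm = nn.LSTM(word_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_predicates)

    def forward(self, token_ids):
        x = self.word_emb(token_ids)                 # (batch, seq, word_dim)
        _, (h_n, _) = self.bilstm(x)                 # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)      # concat forward/backward final states
        return torch.softmax(self.out(h), dim=-1)    # distribution over predicates

# Toy usage with the 1629 predicates mentioned above.
model = BiLSTMSoftmax(vocab_size=5000, n_predicates=1629)
probs = model(torch.randint(1, 5000, (2, 8)))
print(probs.shape)  # torch.Size([2, 1629])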

The model assigns a probability to each predicate. During candidate pair generation we extract subjects along with frequency values. These frequency values are normalized so that we obtain a proper probability for each subject given a question and a mention. The score for each candidate pair is calculated by multiplying the probability of the candidate predicate with the normalized frequency value of the candidate subject, as given in Equations 1 and 2.

II-D2 Hyper-parameters

The CNN layer uses an embedding size of 100 and the LSTM layer uses 200 dimensions. Word embeddings are initialized with 100-dimensional Glove vectors and are further trained with the rest of the model. The model is trained for 100 epochs.

II-E Model 2: BiLSTM-KB

In this subsection, we present a different approach for predicting a predicate from a given question that incorporates pre-trained graph embeddings into the classification process. Before we describe this model architecture, we first introduce how these graph embeddings are computed.

II-E1 Graph Embedding

Different approaches have been proposed over the years for computing embeddings for knowledge bases. RDF2Vec [14] is one such method: by performing random walks on the graph, the algorithm records paths between pairs of entities. The resulting paths are treated as “sentences” and are fed into the popular word embedding algorithm word2vec, which computes vector representations for vertices and edges.

TransE [3] is another method for computing graph embeddings. It takes a single triple, e.g. $(s, p, o)$, and creates corrupted triples from it by randomly replacing the subject or the object with a random entity from the KB. The objective is to learn a ranking function that maximizes the margin between the score of the actual triple and the corrupted triples.

In this work, we compute KB embeddings using FastText [9]. We phrase the task of learning KB embeddings as a classification task. For each triple in the KB, we construct a training sample for the FastText classifier by treating the predicate and the object as input tokens and the subject as the target class. To create embedding vectors that are aware of the role of an entity in a triple, we use role-specific representations: the target token marks an entity in subject position, while the input tokens represent the entity in object position and the predicate used for predicting that subject. Analogously, we create a training sample with the object as the target class. An example in the FastText format for the triple (Inferno, hasAuthor, Dan_Brown) is given below:

  • __label__Inferno hasAuthor Dan_Brown

  • __label__Dan_Brown hasAuthor Inferno

By training a FastText classifier on the generated training samples, we obtain vector representations for all entities and predicates with respect to their role in the triple. (Due to the huge number of target classes, training the classifier with a full softmax objective is not feasible; instead, we use the negative sampling objective that is part of the FastText toolkit as an approximation to the softmax objective.) We chose FastText as a classifier for its good performance on text classification tasks in terms of accuracy and speed.
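As a minimal sketch (assuming the fastText Python bindings and a toy file name), the training data and embeddings described above could be produced roughly as follows; the role-specific marking of entities is omitted here for brevity.

import fasttext  # pip install fasttext

triples = [("Inferno", "hasAuthor", "Dan_Brown")]

# One training line per target role: predict the subject from (predicate, object),
# and analogously the object from (predicate, subject), matching the format above.
with open("kb_train.txt", "w") as f:
    for s, p, o in triples:
        f.write("__label__%s %s %s\n" % (s, p, o))
        f.write("__label__%s %s %s\n" % (o, p, s))

# Negative sampling ("ns") approximates the softmax over the huge label vocabulary.
model = fasttext.train_supervised("kb_train.txt", loss="ns", dim=200, epoch=5)
predicate_vector = model.get_word_vector("hasAuthor")   # embedding of an input predicate
entity_labels = model.get_labels()                      # target classes, e.g. "__label__Inferno"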

II-E2 Architecture

In the following, we describe a neural network model that uses the pre-trained graph embeddings to predict the target predicate given a question text. The intuition is that we can project the question text into the embedding space of the KB, thus supporting the learning process by utilizing the pre-trained, latent structure of that space. Additionally, the model is not limited to predicates seen during training, whereas BiLSTM-Softmax outputs a probability distribution only over predicates that appear in the training split.

Similar to the model in Figure 4, the question text is encoded using word and character level embeddings. The encoded text is fed into a BiLSTM layer that outputs a sequence of hidden states. We concatenate the last states of the forward and backward LSTM and pass them through a feed-forward layer, which produces a fixed-size output vector of 200 dimensions. The network is trained to maximize the cosine similarity between this output vector and the pre-trained embedding vector of the target predicate.

During prediction we compute the cosine similarity of the computed output vector to the embeddings of all predicates in Freebase-2M and normalize across all predicates to obtain a probability distribution.
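A minimal sketch of this prediction step, assuming NumPy arrays for the question projection and the pre-trained predicate embeddings; the use of a softmax over cosine similarities is our reading of the normalization described above.

import numpy as np

def predicate_distribution(question_vec, predicate_vecs):
    # question_vec: output of the feed-forward projection layer, shape (d,)
    # predicate_vecs: dict mapping predicate name -> pre-trained embedding, shape (d,)
    names = list(predicate_vecs)
    mat = np.stack([predicate_vecs[n] for n in names])      # (n_predicates, d)
    sims = mat @ question_vec / (
        np.linalg.norm(mat, axis=1) * np.linalg.norm(question_vec) + 1e-8)
    exp = np.exp(sims - sims.max())                          # softmax over cosine similarities (assumed)
    return dict(zip(names, exp / exp.sum()))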

The score for a candidate pair is computed as given in Equation 1.


Fig. 5: BiLSTM-KB model that computes probability distributions for predicates given only the question text

II-E3 Hyper-parameters

The CNN layer has 100 dimensions and the LSTM has 400 dimensions. We use 100-dimensional Glove vectors.

II-F Model 3: BiLSTM-Binary

This model differs from the two models described above (see Sections II-D and II-E) in terms of its input. While BiLSTM-KB introduces external knowledge about predicates from a knowledge base, this model learns to associate the question text with the tokens in the predicate URI. The input is composed of a question text q and the label of a single predicate p, and the model outputs a binary decision (0 or 1) indicating whether the predicate is correct for the question. By giving the label of a predicate as an input feature, the model can potentially use the similarity between the question text (e.g. Who wrote e?) and the predicate label (e.g. book.written_work.author) to determine whether the given predicate tokens match the question text.

II-F1 Architecture

The inputs q and p are tokenized and fed into an encoding layer that uses word and character embeddings, shown as BiLSTM Text Encoding. The encoding follows the same process explained in Section II-D (see Figure 4), where the tokens are represented by word and character embeddings and fed into a 2-layer BiLSTM.

The predicate is tokenized by splitting the URI on dot and underscore characters. The latent embeddings are fed into an intermediate layer, which learns to score the compatibility between the (embedded) question input q and predicate p. Finally, the output layer is a sigmoid function that outputs the binary decision as a probability. The model architecture is depicted in Figure 6.
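The predicate tokenization mentioned above can be sketched as follows (the helper name is illustrative):

import re

def tokenize_predicate(uri):
    # Split a Freebase predicate URI on dots and underscores.
    return [t for t in re.split(r"[._]", uri) if t]

print(tokenize_predicate("book.written_work.author"))
# ['book', 'written', 'work', 'author']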


Fig. 6: BiLSTM-Binary model that computes the probability of a binary decision given a question text and predicate pair

During prediction we collect all predicates from each candidate subject and feed them into the model one at a time. The model outputs a probability for each predicate. The score for a candidate pair is computed using Equation 1. The highest scoring pair is selected as the final output.

II-F2 Hyper-parameters

We use a CNN with 100 dimensions, and an LSTM with 400 dimensions. We use 100-dimensional Glove vectors. The model is trained for 100 epochs.

II-G Model 4: FastText-Softmax

For our last model, we train a classifier that predicts the target predicate given the question text using FastText. The FastText tool implements a linear classifier on top of a bag-of-N-gram representation of a text using word N-grams to preserve local word order and character N-grams for robustness against out-of-vocabulary words. The model outputs a probability for each predicate. The score for a candidate pair is computed using Equation 1. The highest scoring pair is selected as the final output. For a detailed description of the model architecture we refer to [9].

II-G1 Hyper-parameters

Due to the moderate size of the target vocabulary (1629 predicates in the training set), we can train the classifier with a full softmax objective. We trained the classifier for 50 epochs with a hidden layer size of 100. The classifier uses word N-grams of size 1 and 2 and character N-grams of size 5.
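With the fastText Python bindings, this configuration could look roughly as follows; the training-file name and its format (placeholder-substituted question plus a __label__ prefix for the predicate) are our assumptions.

import fasttext

# Each line: "__label__<predicate> <question with the entity replaced by e>", e.g.
# "__label__book.written_work.author who wrote e"
model = fasttext.train_supervised(
    "simplequestions_predicates.txt",
    epoch=50, dim=100,      # 50 epochs, hidden layer size 100
    wordNgrams=2,           # word N-grams of size 1 and 2
    minn=5, maxn=5,         # character N-grams of size 5
    loss="softmax")         # full softmax over the 1629 predicates

labels, probs = model.predict("who wrote e", k=5)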

III Evaluation

We evaluate the four models and their building components in isolation as follows:

  1. Named Entity Recognition: the evaluation shows the accuracy for extracting the correct mention from the question text.

  2. Named Entity Linking: the evaluation shows in how many cases the subject can be retrieved by index lookup using the detected entity mention from the NER step.

  3. Predicate Prediction: this evaluation shows how well the four models perform in predicting the correct predicate for the given question text.

  4. Answer Prediction: this evaluation shows how well the proposed models perform on predicting the correct triple and how they compare to other systems on the SimpleQuestions dataset.

III-A Named Entity Recognition

Training

We trained a BiLSTM-CRF NER system on the SimpleQuestions training split. The model was trained for 100 epochs, with 100-dimensional Glove word embeddings and 200-dimensional LSTM layers.

Prediction

During prediction, we queried all possible n-grams extracted from the question text on the surface form index. N-grams that returned a match were added to a candidate set. The question text was given as input to the trained NER system, and the output of the NER system was compared with each n-gram in this set using edit distance similarity. The n-gram most similar to the NER output is taken as the recognized subject mention. In this way we ensure that the output mention from the NER maps to some set of subjects. The system is regarded as having correctly identified a mention if looking up the mention in the index returns the correct subject. For instance, in Figure 1 the NER system identifies “mildred pierced” as the entity mention; querying this mention retrieves 4 subjects, and the expected target subject m.04t1ftb is in the list. The NER component achieves an accuracy of 0.82.
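A minimal sketch of this matching step, assuming a plain Levenshtein edit distance as the similarity measure:

def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming (single-row version).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pick_mention(ner_output, matching_ngrams):
    # Choose the index-matching n-gram most similar to the NER output.
    return min(matching_ngrams, key=lambda g: edit_distance(ner_output, g))

# e.g. pick_mention("mildred pierce", ["mildred pierced", "mildred"]) -> "mildred pierced"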

III-B Named Entity Linking

Once the subject mention has been extracted by the NER system, the next step is to retrieve all matching subjects from the surface form index. We queried the mention on the index and retrieved subjects with their corresponding frequency values.

For evaluation, we ranked the subjects by their frequency values and calculated Recall@K: the system links correctly if the target subject is among the top K ranked candidate subjects. The results are shown in Table I; a short sketch of the Recall@K computation follows the table.

K Recall
1 0.68
2 0.74
5 0.79
10 0.81
25 0.82
100 0.82
400 0.82
TABLE I: Named Entity Linking evaluation on test split using Recall@K
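Recall@K over the ranked candidate list, as used in Table I, can be computed as in the following sketch, where each test question contributes its ranked candidate subjects and the gold subject MID:

def recall_at_k(ranked_candidates, gold_subject, k):
    # 1.0 if the gold subject appears among the top-k candidates, else 0.0 (per question).
    return float(gold_subject in ranked_candidates[:k])

def mean_recall_at_k(examples, k):
    # examples: list of (ranked candidate MIDs, gold MID) pairs over the test split.
    return sum(recall_at_k(c, g, k) for c, g in examples) / len(examples)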

III-C Predicate Prediction

All models described above compute probability distributions over predicates. To better understand the building blocks of each model, we evaluated the performance of each model on predicting the correct predicate. In Table II, we list the results for BiLSTM-Softmax, BiLSTM-KB, BiLSTM-Binary, and FastText-Softmax. We trained models with different hyper-parameters and list only the best performing model of each type together with its hyper-parameters and performance score. The performance score is accuracy, calculated by excluding the subject from the pair and comparing only the predicted and expected predicates. As shown in Table II, FastText-Softmax outperforms all other systems, while BiLSTM-Softmax and BiLSTM-Binary performed similarly.

Name Accuracy
BiLSTM-Softmax 0.74
BiLSTM-KB 0.68
BiLSTM-Binary 0.73
FastText-Softmax 0.79
TABLE II: Evaluation of four models on predicate prediction task

III-D Answer Prediction

The task of question answering on the SimpleQuestions dataset requires a system to output the subject and predicate of a single triple. We evaluated the four proposed models on predicting this subject and predicate pair. The predicted pairs are ranked using Equation 1.

Moreover, we compared our results with other published systems evaluated on the same dataset. All results are shown in Table III.

Name Accuracy
BiLSTM-Softmax 0.67
BiLSTM-KB 0.61
BiLSTM-Binary 0.66
FastText-Softmax 0.68
TABLE III: Evaluation of four models on answer prediction task

III-E Error Analysis

We chose BiLSTM-Softmax to perform an error analysis and highlight the errors the model makes. In Table IV, we report the pair prediction results for BiLSTM-Softmax using Recall@K. We extract the K top-ranking pairs as given by the model and evaluate how well the system performs on pair prediction. Additionally, we evaluate separately how the subject in the predicted pair compares to the subject of the expected pair, and we perform the same evaluation on predicates.

We can observe that BiLSTM-Softmax predicts the correct predicate with 0.74 Recall@1 and 0.80 Recall@2. Predicate prediction has an upper bound of 0.84, obtained at Recall@20. Subject prediction performs better than pair prediction (0.74 vs 0.67 at Recall@1) and has an upper bound of 0.82, as explained in the previous section (Section III-B). The overall results for pair prediction show the largest margin between Recall@1 and Recall@2, which means the system could reach 0.74 if the ranking function were improved.

K Pair Subject Predicate
1 0.67 0.74 0.74
2 0.74 0.78 0.80
3 0.77 0.80 0.81
4 0.78 0.80 0.82
5 0.79 0.81 0.83
10 0.80 0.81 0.83
20 0.80 0.82 0.84
TABLE IV: Recall@K values for BiLSTM-Softmax in Pair Prediction task

Next, we analyzed the types of errors the system makes, as reported in Table V. In total, the system predicted 7206 wrong pairs and 14481 correct pairs out of 21687 test instances. We report the following types of errors:

  • Only Wrong Predicate: the predicted predicate is incorrect while the predicted subject is correct, compared to the target subject and predicate pair.

  • Only Wrong Subject: the predicted subject is incorrect while the predicted predicate is correct, compared to the target subject and predicate pair. These errors could be caused by the NER step or by the frequency value of a subject.

  • Wrong Subject & Predicate: both the predicted subject and predicate are incorrect compared to the target subject and predicate pair.

  • Empty Prediction: both the predicted subject and predicate are empty.

We pick the highest-ranking pair from the predictions, if any, and compare it to the target pair. The majority of errors (0.29) are caused by not predicting any pair at all. The next largest error mass lies in predicting both elements of the pair wrong, with 0.26. Finally, the system made more errors in predicting the predicate than the subject (0.23 vs 0.22).

Error Type Count Percentage
Only Wrong Predicate 1642 0.23
Only Wrong Subject 1591 0.22
Wrong Subject & Predicate 1911 0.26
Empty Prediction 2062 0.29
Total 7206 1.0
TABLE V: Error analysis for BiLSTM-Softmax in Pair Prediction task

III-F Discussion

We have shown that our NER step is reasonably accurate at detecting the subject span, with an accuracy of 0.82. We have seen that in some cases the NER picked the wrong span when the question contains a proper name that is not part of the target span; e.g. for “where is mineral hot springs, colorado?” the expected span is “mineral hot springs” while the NER system recognizes the span “springs, colorado”. Similarly, during entity candidate extraction we have seen that the target subject sometimes has a frequency of 1, which hurts the candidate pair score.

The models BiLSTM-Softmax and BiLSTM-Binary performed similarly on predicate prediction, while BiLSTM-KB trailed by a margin of 0.06. FastText-Softmax outperformed all models on predicate prediction. For answer prediction, BiLSTM-Softmax, BiLSTM-Binary and FastText-Softmax performed similarly, even though FastText-Softmax had the best performance on predicate prediction with a margin of at least 0.05.

While none of the model architectures outperform the current state-of-the-art systems on overall answer prediction, we evaluated the building blocks of a question answering system and showed how they perform in isolation. This shows how well each component performs and highlights the importance of comparing different models not just on the overall output performance but also on the individual components.

IV Related Work

Bordes et al. [2] have presented the first results on the SimpleQuestions dataset. Their approach is based on Memory Networks [16]. It generates candidate entities using n-grams from the question text that match some Freebase entity. The approach corrupts the dataset to generate negative samples by assigning random questions from the dataset to Freebase entity and predicate pairs.

Aghaebrahimian et al. [1] proposed a method for predicting the predicate and subject separately. Their approach uses a 2-layered CNN for ranking predicates. Entity detection is done using the MQL API from Google. Detected entities are disambiguated on the basis of the similarity between the entity’s id and name properties.

Yin et al. [17] proposed an approach that uses Convolutional Neural Networks (CNNs) with attentive max-pooling along with an entity detection and linking system. Their entity linking system is based on training a model that learns to detect the span of an entity and retrieves Freebase entities using only the mention from the detected span. The NER system we propose is similar to their approach. They also proposed to use character embeddings in combination with word embeddings, since character embeddings generalize better in handling out-of-vocabulary (OOV) words. The overall approach uses character embeddings for encoding entities and word embeddings for predicates. The predicate prediction part of the architecture uses a max-pooling layer.

Golub et al. [5] proposed an approach that uses both LSTM and CNN encoders together with character-level embeddings. The question is encoded and fed into a two-layered LSTM with an attention mechanism. Subjects and predicates are also encoded via character-level embeddings and fed into a CNN with two layers. The last layer uses an LSTM layer with attention and outputs a score for a given pair. The authors show that character embeddings generalize better compared to word embeddings (0.78 vs 0.38). The attention layer is shown to be effective, allowing the system to learn to differentiate between entity and predicate spans. We also consider character and words as feature inputs for all our proposed models similar to their approach.

Similarly, Lukovnikov et al. [11] proposed another system that encodes subject and predicate using character and word level embeddings to learn a function that optimizes both subject and predicate assignments by introducing negative samples. Our BiLSTM-Binary uses a similar approach by introducing negative samples for predicate prediction.

Ture et al. [15] proposed a rather simple model based on RNNs without any attention mechanism. They essentially use a model with two BiGRU layers to predict the predicate and a model with two BiLSTM layers to predict the span of the subject. Our BiLSTM-Softmax model for predicting the predicate is inspired by their model. However, we could not come close to reproducing their results with a simplified version of their architecture.

V Conclusion

In this paper, we analyze four different model architectures that are evaluated on the SimpleQuestions dataset using the same Named Entity Recognition and Linking system to facilitate the comparison. The results show how well the building components of a QA system perform in isolation and together in a pipeline.

Surprisingly, FastText-Softmax achieves the best performance on predicate and answer prediction: a simple model like FastText performs better than more complex LSTM-based models. Additionally, BiLSTM-KB introduces external knowledge about predicates from the KB, but the evaluation results suggest that this does not improve predicate or answer prediction.

Acknowledgements

This work was supported by the Cluster of Excellence Cognitive Interaction Technology ’CITEC’ (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG).

References