End-to-End Entity Linking and Disambiguation leveraging Word and Knowledge Graph Embeddings

02/25/2020 · Rostislav Nedelchev et al. · University of Bonn, Ruhr University Bochum

Entity linking, the task of connecting entity mentions in a natural language utterance to knowledge graph (KG) entities, is a crucial step for question answering over KGs. It is often based on measuring the string similarity between the entity label and its mention in the question. The relation referred to in the question can help to disambiguate between entities with the same label. This can be misleading if an incorrect relation has been identified in the relation linking step. However, an incorrect relation may still be semantically similar to the relation in which the correct entity forms a triple within the KG, which can be captured by the similarity of their KG embeddings. Based on this idea, we propose the first end-to-end neural network approach that employs KG as well as word embeddings to perform joint relation and entity classification of simple questions, while implicitly performing entity disambiguation with the help of a novel gating mechanism. An empirical evaluation shows that the proposed approach achieves a performance comparable to state-of-the-art entity linking while requiring less post-processing.


1 Introduction

Question answering is a scientific discipline which aims at automatically answering questions posed by humans in natural language. Simple question answering over knowledge graphs is a well-researched topic Bordes et al. (2015); Yin et al. (2016); Mohammed et al. (2018); Petrochuk and Zettlemoyer (2018). A knowledge graph (KG) is a multi-relational graph which represents entities as nodes and relations between those entities as edges. Facts in a KG are stored in the form of triples (h, r, t), where h and t denote the head (also called subject) and tail (also called object) entities, respectively, and r denotes their relation. A simple question is a natural language question (NLQ) that can be represented by a single (subject) entity and a relation. Answering the question then corresponds to identifying the correct entity and relation in the NLQ and returning the object entity of the matching triple. For example, for the question "Who is the producer of a beautiful mind?" the corresponding KG fact is (a beautiful mind, produced by, Brian Grazer), and the question answering system should link to the correct entity "a beautiful mind" (of type movie) and the relation "produced by" in the KG to answer the question with "Brian Grazer".
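Once the entity and relation are linked, answering a simple question reduces to a triple lookup. The following sketch illustrates this; the toy KG and its entries are illustrative stand-ins, not the paper's Freebase subset:

```python
from typing import Optional

# Toy KG: facts stored as (head, relation) -> tail. The entries are
# illustrative stand-ins, not actual Freebase identifiers.
KG = {
    ("a beautiful mind", "produced by"): "Brian Grazer",
    ("a beautiful mind", "directed by"): "Ron Howard",
}

def answer(subject: str, relation: str) -> Optional[str]:
    """Return the object entity of the matching (h, r, t) triple, if any."""
    return KG.get((subject, relation))

print(answer("a beautiful mind", "produced by"))  # Brian Grazer
```

The hard part, addressed in the rest of the paper, is producing the correct `subject` and `relation` arguments from the raw question.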

The tasks of identifying the KG entity and relation mentioned in the NLQ are called entity and relation linking, respectively. The former is often decomposed into two sub-tasks: first, detecting the span of the entity mention in the NLQ; second, connecting the identified mention to a single entity in the KG, which is usually solved by comparing the entity mention to the names of the KG entities based on string similarity measures. This becomes particularly challenging if more than one entity with the same label (name) exists in the KG. However, the context provided for the entity in the KG can be used for disambiguation. In our example, correct relation linking would identify the relation "produced by" as being mentioned in the NLQ. One could make use of this information for entity linking by considering only entities which are connected to this specific relation. This would allow us to disambiguate and link to the movie "a beautiful mind" rather than the book. This procedure is called soft disambiguation.

However, relation linking is still challenging since the number of relations in many KGs is large (e.g., 6701 in the FB2M graph subset of the SimpleQuestions dataset Bordes et al. (2015)), while suffering from the problem of unbalanced classes Xu et al. (2016). Furthermore, some relations may be semantically similar, for example "fb:film.film.executive_produced_by" and "fb:film.film.produced_by", and hence can be confusing for the relation linker. Therefore, relation linking may end up predicting the wrong relation, which would negatively affect relation-based entity disambiguation. To counter this effect, it seems promising to leverage the relation-specific information contained in the KG, which is represented by the KG embedding of the relation. Semantically similar relations are closer to each other in the KG embedding vector space. So even if a model is not able to predict the correct relation, the semantic information provided by KG embeddings can be employed to perform soft disambiguation of the entity candidates.

Based on this line of thought, we propose a novel end-to-end neural network model for answering simple questions over knowledge graphs that incorporates both word and KG embeddings. Specifically, the contributions of this paper are as follows:

  • The proposal of a novel end-to-end model leveraging relatively simple architectures for entity and relation detection which is comparable to other state-of-the-art approaches for entity linking even without additional post-processing.

  • The (to our knowledge) first investigation of incorporating KG embeddings for leveraging KG structures for the end task of entity linking in an end-to-end manner.

  • A novel gating mechanism incorporated in the end-to-end architecture which can implicitly perform entity disambiguation if required, improving overall entity linking performance. The final prediction is based on vector similarities, which along with the gate’s output can be interpreted during prediction.

The rest of the paper is organized as follows: In Section 2, we summarize the related work on simple question answering. Sections 3 & 4 provide the background and preliminaries important to this work. The overall approach and the architecture are explained in Section 5. In Section 6, we describe the experiment conditions. Evaluation results are discussed in Section 7. We present an ablation study and result analysis in Section 8. Finally, we conclude and state the planned future work in Section 9.

2 Related Work

The SimpleQuestions dataset, as proposed by Bordes et al. (2015), is the first large-scale dataset for simple questions over Freebase. It consists of 108,442 questions split into train (70%), validation (10%), and test (20%) sets. They also proposed an end-to-end architecture using memory networks along with the dataset.

The second end-to-end approach for simple question answering over Freebase was provided by He and Golub (2016). They proposed a character-level LSTM encoder for encoding the question, a CNN-based encoder for encoding the KG entity and relation, and finally an attention-based LSTM decoder for predicting an entity-relation pair given the question. A similar end-to-end approach was suggested by Lukovnikov et al. (2017). It employs Gated Recurrent Unit (GRU) based encoders that work on character and word level and, in addition, encodes the hierarchical types of relations and entities to provide further information.

Furthermore, a growing set of modular architectures was proposed on the SimpleQuestions dataset. Yin et al. (2016) proposed a character-level CNN for identifying entity mentions and a separate word-level CNN with attentive max-pooling to select knowledge graph tuples.

Yu et al. (2017) utilized a hierarchical residual bidirectional LSTM for predicting a relation, which is then used to re-rank the entity candidates. They replaced the topic entity in the question with a generic token <e> during relation prediction, which helps in better distinguishing the relative position of each word compared to the entity.

Dai et al. (2016) proposed a conditional probabilistic framework with bidirectional GRUs that takes advantage of knowledge graph embeddings. Mohammed et al. (2018) suggested using a combination of relatively simple, component-based approaches that build on bidirectional GRUs, bidirectional LSTMs (BiLSTMs), and conditional random fields (CRFs), as well as on graph-based heuristics, to select the most relevant entity given a question from a candidate set. The resulting model provides strong baselines for simple question answering. More recently, Huang et al. (2019) proposed an architecture based on KG embeddings, and Petrochuk and Zettlemoyer (2018) proposed a technique combining LSTM-CRF based entity detection with BiLSTM based relation linking, where they also replace the topic entity with generic tokens following Yu et al. (2017).

Some other open-domain knowledge graphs are Wikidata Vrandečić and Krötzsch (2014) and DBpedia Lehmann et al. (2015). In particular, there are two very recent efforts that provided adaptations of the SimpleQuestions dataset Bordes et al. (2015) to Wikidata Diefenbach et al. (2017) and DBpedia Azmy et al. (2018). In addition, there is the Question Answering over Linked Data (QALD) Unger et al. (2016); Usbeck et al. (2017, 2018) series of challenges that use DBpedia as a knowledge base for QA.

3 Background

Knowledge graphs, word embeddings, and KG embeddings are concepts fundamental to this work. Embeddings provide a numerical representation of words and KG entities/relations that facilitates the incorporation of information provided by the KG and language into neural networks. Their detailed descriptions follow.

3.1 Knowledge Graphs

In this work, a KG is a network of real-world entities that are connected to each other by means of relations. Those entities and their relations are represented as nodes and edges, respectively, in a multi-relational, directed graph. Knowledge graphs consist of ordered triples, also known as facts, of the form (h, r, t), where h and t are two entities connected by the relation r. Revisiting the example from the introduction, the corresponding fact would be given by ("a beautiful mind", "produced by", "Brian Grazer").

3.2 Knowledge Graph Embeddings

Knowledge graphs are data structures that lack a default numerical representation allowing their straightforward application in a standard machine learning context. Statistical relational learning therefore relies, among other approaches, on latent feature models for making predictions over KGs. These latent features usually correspond to embedding vectors of the KG entities and relations. Given the embeddings of the entities and relations of a fact (h, r, t), a score function outputs a value indicating the probability that the fact exists in the KG. In this paper, we use TransE Bordes et al. (2013) to learn the KG embeddings used by our model. Let the embedding vectors of the subject and object entity be given by e_h and e_t, respectively, and that of the relation by a vector v_r; then the score function of TransE is given by the negative distance -||e_h + v_r - e_t||, so that plausible triples satisfy e_h + v_r ≈ e_t.
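The TransE score can be sketched in a few lines; the embeddings below are random toy vectors, not learned ones:

```python
import numpy as np

# TransE models plausible triples as h + r ≈ t, so a triple is scored by the
# negative L2 distance ||e_h + v_r - e_t|| (higher score = more plausible).
rng = np.random.default_rng(0)
dim = 50
e_head = rng.normal(size=dim)
v_rel = rng.normal(size=dim)
e_tail_true = e_head + v_rel + 0.01 * rng.normal(size=dim)  # near-perfect triple
e_tail_false = rng.normal(size=dim)                         # random entity

def transe_score(h, r, t):
    """Higher score = more plausible triple under TransE."""
    return -np.linalg.norm(h + r - t)

assert transe_score(e_head, v_rel, e_tail_true) > transe_score(e_head, v_rel, e_tail_false)
```

In training, such scores are typically optimized with a margin-based ranking loss against corrupted (negative) triples, as done by Bordes et al. (2013).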

3.3 Word Embeddings

In recent times, various approaches were proposed that embed words in a vector space Bengio et al. (2003); Collobert and Weston (2008). These methods create a representation of each word as a vector of floating-point numbers. Words whose vectors are close to each other have been demonstrated to be semantically similar or related. In particular, two works Mikolov et al. (2013a, b) showed the high potential of word embeddings in natural language processing (NLP) problems.

In this work, we use the GloVe Pennington et al. (2014) vector embeddings. The method aims to create a d-dimensional vector representation w_i of each word i in the vocabulary, as well as a context vector w̃_j for each context word j occurring in the corpus, so that

w_i^T w̃_j + b_i + b̃_j = log(X_ij),

where b_i and b̃_j are biases that are learned together with w_i and w̃_j, and X_ij is the occurrence count of word i in context j, computed from the documents from which the corpus is extracted.
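One term of the resulting weighted least-squares objective can be sketched as follows; the tiny vectors are toy values, and the weighting function follows the GloVe paper (x_max = 100, α = 0.75):

```python
import numpy as np

# GloVe minimizes sum_ij f(X_ij) * (w_i·w̃_j + b_i + b̃_j - log X_ij)^2,
# with a weighting f that caps the influence of very frequent co-occurrences.
def glove_weight(x, x_max=100.0, alpha=0.75):
    return (x / x_max) ** alpha if x < x_max else 1.0

def glove_loss_term(w_i, w_ctx_j, b_i, b_ctx_j, x_ij):
    """Weighted squared error of one (word, context) co-occurrence cell."""
    err = w_i @ w_ctx_j + b_i + b_ctx_j - np.log(x_ij)
    return glove_weight(x_ij) * err ** 2

w_i = np.array([0.1, 0.2])
w_j = np.array([0.3, -0.1])
# If the model fits a cell exactly, that cell's loss term is (numerically) zero:
x_ij = np.exp(w_i @ w_j + 0.5 + 0.25)
assert abs(glove_loss_term(w_i, w_j, 0.5, 0.25, x_ij)) < 1e-12
```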

4 Preliminaries

We employ two kinds of embeddings in the proposed model, namely word embeddings and KG embeddings, which are defined in the previous section.

For matching a question to the entities and relations of a KG, likely candidates are first selected in a preprocessing step to reduce the enormous number of candidates in the KG. This is described in the following sections.

4.1 Entity Candidates Generation

We start with a simple language-based candidate generation process selecting potential candidate entities for a given question. That is, given a question, we generate candidates by matching the tf-idf vector of the query with those of the entity labels of all entities in the knowledge graph, resulting in a list of entity candidates for the question. This list is then re-ranked based on the tf-idf similarity score, whether the candidate label is present in the question, the number of relations the candidate is connected to in the KG, and whether it has a direct mapping to Wikipedia or not. This is done to give more weight to important entities (defined by connectivity in the KG), following previous works Mohammed et al. (2018); Petrochuk and Zettlemoyer (2018).
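The tf-idf matching step can be sketched with a hand-rolled tf-idf; the label inventory and question are illustrative, and re-ranking features are omitted:

```python
import math
from collections import Counter

# Score every KG entity label against the question by cosine similarity of
# their tf-idf vectors; labels with distinctive overlapping terms rank first.
labels = ["a beautiful mind", "beautiful creatures", "brian grazer"]
question = "who is the producer of a beautiful mind"

def tfidf(docs):
    df = Counter(w for d in docs for w in set(d.split()))
    n = len(docs)
    def vec(d):
        tf = Counter(d.split())
        return {w: c * math.log((1 + n) / (1 + df.get(w, 0))) for w, c in tf.items()}
    return vec

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vec = tfidf(labels + [question])
q = vec(question)
candidates = sorted(labels, key=lambda l: cosine(vec(l), q), reverse=True)
print(candidates[0])  # a beautiful mind
```

In practice this ranking is computed over millions of labels, so the paper's pipeline truncates it to the top n candidates before re-ranking.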

4.2 Relation Candidate Generation

To generate a set of entity-specific relation candidates, for each entity e_i in the entity candidate set we extract the list of relations connected to this candidate at a 1-hop distance in the Freebase knowledge graph. For the entity candidate e_i, the resulting relation candidate set is denoted R_i.
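Collecting the 1-hop relations amounts to a scan over the triples of the KG; the toy triples below are illustrative stand-ins for Freebase facts:

```python
# For each candidate entity, collect the relations in which it occurs as the
# head of a triple, i.e. its 1-hop relation candidates R_i.
triples = [
    ("a_beautiful_mind_film", "film.produced_by", "brian_grazer"),
    ("a_beautiful_mind_film", "film.directed_by", "ron_howard"),
    ("a_beautiful_mind_book", "book.author", "sylvia_nasar"),
]

def relation_candidates(entity, triples):
    """Set R_i of relations in which `entity` occurs as the head."""
    return {r for h, r, _ in triples if h == entity}

assert relation_candidates("a_beautiful_mind_film", triples) == {
    "film.produced_by", "film.directed_by"}
```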

5 Model Description

Figure 1: Architecture

The proposed neural network model is essentially composed of three parts, as visualized in Figure 1:

  1. A word-embedding based entity span detection model, which selects the probable words of an entity in a natural language question, represented by a bi-LSTM.

  2. A word-embedding based relation prediction model which links the question to one of the relations in the knowledge graph, represented by a bi-LSTM with self-attention.

  3. An entity prediction model which takes the predictions of the previous two submodels into account and employs a sentinel gating mechanism that performs disambiguation based on similarity measures.

The different model parts and the training objective of the resulting model will be described in more detail in the following.

5.1 Entity Span Detection

The span detection module is inspired by Mohammed et al. (2018). The given question is first passed through a bi-directional LSTM. Its output hidden states are then passed through a fully connected layer and a sigmoid (σ) activation function which outputs the probability of the word at time-step t corresponding to an entity (or not). Mathematically, this can be described as

p_t = σ(W_s h_t + b_s),   (1)

where p_t is the output probability, W_s the weight matrix of the fully connected layer, and h_t the hidden state vector from applying the bi-directional LSTM to the input question Q. We use I-O encoding of the output for training.
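Eq. (1) can be sketched with random stand-ins for the BiLSTM outputs and layer weights:

```python
import numpy as np

# Each BiLSTM hidden state h_t is projected through a fully connected layer
# and a sigmoid to get p_t, the probability that word t is inside the entity
# span (I-O tagging). Hidden states and weights are random toy values.
rng = np.random.default_rng(1)
seq_len, hidden = 6, 8
H = rng.normal(size=(seq_len, hidden))   # BiLSTM outputs, one row per word
W = rng.normal(size=(hidden, 1))
b = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

p = sigmoid(H @ W + b).ravel()           # p_t in (0, 1) for every word
span_mask = p > 0.5                      # words tagged as inside the span
assert p.shape == (seq_len,) and np.all((p > 0) & (p < 1))
```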

5.2 Relation Prediction

For relation prediction, the question is passed into a self-attention based bi-directional LSTM, which was inspired by Zhou et al. (2016). The attention-weighted hidden states are then fed into a fully-connected classification layer outputting a probability distribution over the relations in the knowledge graph:

p(r | Q) = softmax(W_r Σ_t α_t h_t + b_r),   (2)

where W_r and b_r are model parameters, α_t are the self-attention weights, and h_t is the output of the Bi-LSTM function, which produces a response for every time-step of the input query Q.

5.3 Entity prediction

5.3.1 Word-based Entity Candidate Selection.

With the help of the entity span identified by the span detection submodule described in 5.1, the question is now compared to the entity candidates based on vector similarity. More specifically, the word embedding of each word of the question is multiplied by the corresponding output probability from the entity-span detection model, leading to an "entity-weighted" word representation

v_t = p_t · w_t,   (3)

where w_t denotes the word embedding of the t-th word in the question and p_t is the sigmoid output from 5.1. We then take a simple average of the entity-weighted representations of all words of the question to yield the entity embedding q_ent of the question.

Similarly, the entity candidates generated in the preprocessing step described in 4.1 are represented by the word embeddings of their labels. If a label consists of multiple words, the word embeddings are averaged to yield a single representation c_i. Finally, to compute the similarity between a question and an entity candidate, the cosine between the question embedding and the entity embedding is estimated. For the i-th candidate, that is

s^word_i = cos(q_ent, c_i),   (4)

and the vector s^word represents the word-based similarity of the question to all entity candidates.
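The weighting and comparison of Eqs. (3)-(4) can be sketched as follows; embeddings and span probabilities are toy stand-ins:

```python
import numpy as np

# Weight each word embedding by its span probability p_t, average to get the
# question's entity embedding, then compare it to candidate-label embeddings
# by cosine similarity.
rng = np.random.default_rng(2)
d = 64
E = rng.normal(size=(5, d))                 # embeddings of a 5-word question
p = np.array([0.05, 0.9, 0.95, 0.1, 0.05])  # span probabilities (toy values)

q_ent = (p[:, None] * E).mean(axis=0)       # entity embedding of the question

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Candidate 0's label reuses the in-span words; candidate 1 is unrelated:
cand0 = (E[1] + E[2]) / 2
cand1 = rng.normal(size=d)
sims = np.array([cosine(q_ent, cand0), cosine(q_ent, cand1)])
assert sims[0] > sims[1]                    # in-span candidate wins
```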

5.3.2 KG-based Entity Candidate Selection.

To leverage the relational information encoded in the KG, we first take the logits over the relations from the relation prediction model and draw a categorical representation using the Gumbel softmax Jang et al. (2016). This representation is multiplied with the KG embeddings of the relations to get a KG-embedding-based representation q_rel of the query. This relation-specific representation is then compared against the full relation candidate set of each candidate entity, where each candidate relation is likewise represented by its KG embedding. To match the relation-specific question representation to the relation candidates of a given entity, we estimate the cosine similarity of the corresponding KG embeddings, followed by a max-pooling operation over all candidate relations of the entity. This produces an entity-specific similarity metric s^kg_i, which indicates the degree of matching between the question and an entity candidate from a KG perspective, specifically taking relation information into account. Mathematically, for the i-th entity candidate, let the embedding of the j-th relation candidate be denoted by r_ij. The KG-based similarity between the question and the i-th entity is then given by

s^kg_i = max_j cos(q_rel, r_ij),   (5)

and the vector s^kg represents the KG-based similarity of the question to all entity candidates.
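A sketch of Eq. (5) follows. For determinism, it weights the relation embeddings with a plain softmax over the logits, where the paper instead samples via the Gumbel softmax; embeddings and logits are toy values:

```python
import numpy as np

# Turn relation logits into a weighting over relation KG embeddings, then
# compare the resulting query representation against each entity's 1-hop
# relation candidates with cosine similarity followed by max-pooling.
rng = np.random.default_rng(3)
n_rel, d = 4, 8
R = rng.normal(size=(n_rel, d))           # KG embeddings of all relations
logits = np.array([5.0, 0.1, 0.0, -1.0])  # relation-prediction logits

w = np.exp(logits - logits.max())
w /= w.sum()                              # softmax (paper: Gumbel softmax)
q_rel = w @ R                             # relation-aware query representation

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Entity 0's candidate set contains the argmax relation; entity 1's does not:
cands = [[R[0], R[2]], [R[1], R[3]]]
s_kg = np.array([max(cosine(q_rel, r) for r in rels) for rels in cands])
assert s_kg[0] > s_kg[1]
```

The max-pooling means an entity is rewarded if any of its 1-hop relations is close to the predicted relation in KG embedding space, which is what enables soft disambiguation even under a wrong relation prediction.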

5.3.3 Disambiguation and final prediction.

The final entity prediction is based on the word- and KG-based similarity measures s^word and s^kg. First, for disambiguation, the word-based similarity vector is passed into a gating mechanism

g = σ(W_g s^word + b_g),   (6)

with parameters W_g and b_g, which aims at estimating whether there is more than one likely candidate in the entity candidate set based on word similarity. If so, the KG-based similarity should also be taken into account, which is done by averaging s^word and s^kg and predicting the final entity candidate by

ŷ = argmax_i (s^word_i + s^kg_i) / 2.   (7)

Note that the averaged similarities serve as logits over the set of candidate entities, from which the entity with the highest probability can be picked. During inference, we perform an additional step to ensure that the entity and relation predicted by the model form a pair in the KG. To achieve that, we take the top 5 most probable relations from the relation linker and choose the one which is connected to the predicted entity at 1 hop.
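One plausible reading of Eqs. (6)-(7), using a hard threshold on the gate at inference time, can be sketched as follows; the gate weights and similarity vectors are toy values, not trained parameters:

```python
import numpy as np

# A gate g in (0, 1), computed from the word-based similarities alone, decides
# whether disambiguation is needed; if so, the word- and KG-based similarity
# vectors are averaged before picking the entity.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_entity(s_word, s_kg, gate_w, gate_b, threshold=0.5):
    g = sigmoid(gate_w @ s_word + gate_b)                       # Eq. (6)
    logits = (s_word + s_kg) / 2 if g > threshold else s_word   # Eq. (7)
    return int(np.argmax(logits)), float(g)

# Two near-tied word similarities: only the KG signal separates them.
s_word = np.array([0.90, 0.89, 0.10])
s_kg = np.array([0.20, 0.95, 0.05])
idx, g = predict_entity(s_word, s_kg, gate_w=np.ones(3) * 2.0, gate_b=0.0)
assert idx == 1 and 0.0 < g < 1.0
```

Note that relying on s_word alone would pick candidate 0 here; the averaged logits flip the decision to candidate 1, illustrating the soft disambiguation the gate enables.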

6 Training

6.1 Training objective

The model is trained based on a multi-task objective, where the total loss is the sum of the losses from entity span detection, relation detection, entity candidate prediction, and disambiguation. The individual loss functions are given below; throughout, y denotes the true label for the respective task.

The loss function for the entity span detection model is the average binary cross-entropy over the words of the input question,

L_span = -(1/T) Σ_t [y_t log p_t + (1 - y_t) log(1 - p_t)],   (8)

where y_t is the label denoting whether the t-th word belongs to the entity span or not. For relation prediction, a weighted cross-entropy loss L_rel is used (where the weights are given by the relative ratio of relations in the training set having the same class as the sample), and for entity prediction a vanilla cross-entropy loss L_ent, which depends on the parameters of all sub-models. Furthermore, an additional cross-entropy loss L_gate is used to train the gating function. Last but not least, we add an L2 regularization term for soft parameter sharing following Duong et al. (2015), resulting in a total loss given by

L_total = L_span + L_rel + L_ent + L_gate + λ ||W_1 - W_2||²,   (9)

where W_1 and W_2 are the hidden layer weights of the entity span detection and relation detection modules. Given L_total, all parameters of the model are jointly trained in an end-to-end manner.
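The composition of the total loss in Eq. (9) can be sketched as follows; the individual loss values, weight matrices, and λ are toy values:

```python
import numpy as np

# The total loss sums the span, relation, entity, and gate losses plus an L2
# soft-parameter-sharing penalty ||W_1 - W_2||^2 between the hidden-layer
# weights of the span-detection and relation-detection modules.
rng = np.random.default_rng(4)

def total_loss(l_span, l_rel, l_ent, l_gate, W1, W2, lam=0.01):
    share = lam * np.sum((W1 - W2) ** 2)   # soft parameter sharing penalty
    return l_span + l_rel + l_ent + l_gate + share

W1 = rng.normal(size=(4, 4))
W2 = W1 + 0.1 * rng.normal(size=(4, 4))    # nearby, but not tied, parameters
loss = total_loss(0.3, 0.7, 0.5, 0.1, W1, W2)
assert loss > 0.3 + 0.7 + 0.5 + 0.1        # the penalty is non-negative
```

Unlike hard parameter sharing, this penalty lets the two modules keep separate weights while nudging them toward each other.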

6.2 Training details

We use the pre-processed data and word embeddings provided by Mohammed et al. (2018) to train our models. To obtain KG embeddings, we train TransE Bordes et al. (2013) on the provided Freebase KG of 2 million entities. The size of the word embedding vectors is 300, and that of the KG embeddings is 50. The KG embeddings are kept fixed, but the word embeddings are fine-tuned during optimization. For training the disambiguation gate, we use a label of 1 if the correct entity label is present more than once in the entity candidates, and a label of 0 otherwise.

For training, a batch size of 100 is used and the model is trained for 100 epochs. We save the model with the best validation accuracy of entity prediction and evaluate it on the test set. We apply Adam Kingma and Ba (2014) for optimization with a learning rate of 1e-4. The size of the hidden layer of both the entity span and relation prediction Bi-LSTMs is set to 300. The training process is conducted on a GPU with 3072 CUDA cores and 12 GB of VRAM.

7 Evaluation

In this section, we compare our model with other state-of-the-art entity linking and question-answering systems, both end-to-end and modular approaches. Resources are provided as supplementary materials to this paper that allow the reader to reproduce the final results reported in this section.

7.1 Entity-linking

We compare our end-to-end entity-linking accuracy on the test set with other systems whose results are published. The results are summarized in Table 1. The number n of candidates in the entity candidate set is varied from 100 to 300. The percentage of examples for which the correct entity candidate is present in the candidate set is reported in parentheses. If this percentage is higher, model performance also increases. The model evaluated over the largest entity candidate set (i.e., n = 300) gives the best performance, which is significantly better than the BiLSTM-based model Mohammed et al. (2018) (13.60% additional accuracy) and the Attentive CNN model (5% additional accuracy). It must be noted that our model cannot be compared directly to the one from Mohammed et al. (2018), because they do not use any candidate information for entity linking; they do so during final question answering as a post-processing step. The BiLSTM-based model in combination with n-gram based entity matching and relation-based re-ranking suggested by Yu et al. (2017) is better than our proposed model by 0.40%.

Model                                        Accuracy (% of Cand. Present)
BiLSTM Mohammed et al. (2018)                65.00 (-)
Attentive CNN Yin et al. (2016)              73.60 (-)
BiLSTM & Entity-reranking Yu et al. (2017)   79.00 (-)
Proposed model (n=100)                       77.80 (92.07)
Proposed model (n=200)                       78.35 (94.34)
Proposed model (n=300)                       78.60 (95.49)
Table 1: Entity Linking Accuracy.

7.2 Question Answering

The final metric for simple question answering over KGs (QAKG) is defined by the number of correct entities and relations predicted by a given model. We compare the performance of our system with that of both end-to-end methods and modular approaches in Table 2. The results show that the proposed architecture outperforms the state-of-the-art NN-based model (GRU based) Lukovnikov et al. (2017) by 2.0%, and shows performance competitive with simple modular baseline approaches like Mohammed et al. (2018) and the KG-embedding-based approach KEQA proposed by Huang et al. (2019). However, the best state-of-the-art approach on QAKG Yu et al. (2017) outperforms our model by 5.50%. It should be noted here that although our entity-span detection and relation linking accuracy (82.01%) is better than that of the model proposed by Mohammed et al. (2018), the final question answering performance is worse by 1.7%. This can be explained by the fact that their approach builds on additional string-matching heuristics, along with the scores from the different models, to re-rank the predicted entities and relations.

Approach Model Accuracy(FB2M)
End-to-End NN Memory NN Bordes et al. (2015) 61.60
Attn. LSTM He and Golub (2016) 70.90
GRU based Lukovnikov et al. (2017) 71.20
Proposed model (n=100) 72.29
Proposed model (n=200) 72.84
Proposed model (n=300) 73.20
Modular BiLSTM & BiGRU Mohammed et al. (2018) 74.90
KEQA Huang et al. (2019) 75.40
CFO Dai et al. (2016) 75.70
CNN & Attn. CNN & BiLSTM-CRF Yin et al. (2016) 76.40
BiLSTM-CRF & BiLSTM Petrochuk and Zettlemoyer (2018) 78.10
BiLSTM & Entity-reranking Yu et al. (2017) 78.70
Table 2: Question Answering Accuracy.
Approach                              Entity-linking Accuracy
Removing L_rel from total loss        67.55
Removing gating mechanism             74.63
Removing soft-loss from total loss    78.17
Without re-ranking candidates         77.56
Our Best Model (n=300)                78.60
Table 3: Ablation Study

8 Discussion

8.1 Ablation Study

Finally, we perform an ablation study where we remove some parts of the proposed model and observe the entity-linking performance for n = 300. The results are in Table 3. As observed, the entity-linking accuracy obtained without training the relation linker is on par with Mohammed et al. (2018) in Table 1. The gating mechanism adds 3.97%, because taking only the mean of the entity and relation prediction similarity scores would add extra noise to the candidate selection for wrongly classified relations. The proposed soft-loss yields a 0.43% increase in entity-linking accuracy, and the candidate re-ranking improves it by 1.04%.

8.2 Quantitative and Error Analysis

We perform a quantitative analysis of the results of our best model with n = 300. The percentage of questions with soft ambiguity is 21.1% and with hard ambiguity 18.51%. Our model is able to predict the correct entity candidate in 84.81% of the soft-disambiguation cases; in 75.02% of these the correct relation was identified, and in 9.78% the model predicted the wrong relation but the correct candidate was still picked using our proposed KG-embedding-based method, which supports our intuition that using KG embeddings for the final task can be beneficial. For hard-ambiguity cases, the model was able to predict the correct candidate with an accuracy of 35.66% (1432 out of 4015 cases); the model predicted wrong relations in 4.4% of these cases. However, it should be noted that there are no explicit linguistic signals to resolve hard ambiguity; following previous works, we predict these cases based solely on candidate importance.

The model is able to predict the correct candidate 97.70% of the time for cases where no disambiguation is required. Out of the 440 wrongly classified candidates in this group, 165 cases occur because the true entity and correct relation are not connected in the KG at 1 hop, 162 because the entity span detector was not able to predict the correct span, and the rest due to wrong predictions of the disambiguation gating mechanism.

Table 4 lists some cases where the entity-span detector has failed to identify the correct entity. In some of these cases, there is more than one entity in the question, which makes it difficult for the entity span detector to detect the correct one.

what ’s a rocket that has been flown
who is a swedish composer
what ’s the name of an environmental disaster in italy
which korean air flight was in an accident
Table 4: Span Detection Error. Green - correct span, blue - detected span.

For the final question-answering task, as mentioned previously, the end-to-end accuracy of Mohammed et al. (2018) is better than ours. The task is particularly challenging in our case because we do not use any scores from string-matching methods such as the Levenshtein distance for entity linking, which Mohammed et al. (2018) apply as an additional post-processing step; this matters especially in cases where the entity candidates and the entity mention in the question consist of out-of-vocabulary words. Also, for some cases it is challenging to disambiguate between the predicted relations because no explicit linguistic signals are available. To exemplify, consider the question "what county is sandy balls near?". The relation predicted for this question by our model is "fb:location.location.containedby", while the true relation in the dataset is "fb:travel.tourist_attraction.near_travel_destination".

9 Conclusion and Future Work

In this paper, we have proposed an end-to-end model for entity linking, leveraging KG embeddings along with word embeddings and banking on relatively simple architectures for entity and relation detection. As reported, the proposed architecture performs better than other end-to-end models, but modular architectures demonstrate better question answering performance. However, the purpose of this paper was to integrate KG and word embeddings in a single, end-to-end model for entity linking. Moreover, since the final prediction model is based on similarity scores, the final prediction (and gating) can be easily interpreted following equations 4, 5, 6 and 7.

Error analysis suggests that the model can gain from better entity span detection. As future work, we will experiment with integrating a CRF-BiLSTM for span detection and with more recent NLP models like BERT. The model will also improve with better relation linking and better handling of out-of-vocabulary words. We would also like to integrate more recent state-of-the-art KG embedding models Dettmers et al. (2018); Schlichtkrull et al. (2018), which can capture relation semantics better, into the architecture.

References

  • M. Azmy, P. Shi, J. Lin, and I. Ilyas (2018) Farewell freebase: migrating the simplequestions dataset to dbpedia. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 2093–2103. Cited by: §2.
  • Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin (2003) A neural probabilistic language model. Journal of machine learning research 3 (Feb), pp. 1137–1155. Cited by: §3.3.
  • A. Bordes, N. Usunier, S. Chopra, and J. Weston (2015) Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Cited by: §1, §1, §2, §2, Table 2.
  • A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.), pp. 2787–2795. External Links: Link Cited by: §3.2, §6.2.
  • R. Collobert and J. Weston (2008) A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. Cited by: §3.3.
  • Z. Dai, L. Li, and W. Xu (2016) CFO: conditional focused neural question answering with large-scale knowledge bases. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 800–810. External Links: Document, Link Cited by: §2, Table 2.
  • T. Dettmers, M. Pasquale, S. Pontus, and S. Riedel (2018) Convolutional 2d knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pp. 1811–1818. External Links: Link Cited by: §9.
  • D. Diefenbach, T. Tanon, K. Singh, and P. Maret (2017) Question answering benchmarks for wikidata. In ISWC 2017, Cited by: §2.
  • L. Duong, T. Cohn, S. Bird, and P. Cook (2015) Low resource dependency parsing: cross-lingual parameter sharing in a neural network parser. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Beijing, China, pp. 845–850. External Links: Link, Document Cited by: §6.1.
  • X. He and D. Golub (2016) Character-level question answering with attention. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1598–1607. External Links: Document, Link Cited by: §2, Table 2.
  • X. Huang, J. Zhang, D. Li, and P. Li (2019) Knowledge graph embedding based question answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 105–113. Cited by: §2, §7.2, Table 2.
  • E. Jang, S. Gu, and B. Poole (2016) Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Cited by: §5.3.2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §6.2.
  • J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. Van Kleef, S. Auer, et al. (2015) DBpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web 6 (2), pp. 167–195. Cited by: §2.
  • D. Lukovnikov, A. Fischer, J. Lehmann, and S. Auer (2017) Neural network-based question answering over knowledge graphs on word and character level. In Proceedings of the 26th international conference on World Wide Web, pp. 1211–1220. Cited by: §2, §7.2, Table 2.
  • T. Mikolov, K. Chen, G. Corrado, and J. Dean (2013a) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Cited by: §3.3.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013b) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §3.3.
  • S. Mohammed, P. Shi, and J. Lin (2018) Strong baselines for simple question answering over knowledge graphs with and without neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 291–296. Cited by: §1, §2, §4.1, §5.1, §6.2, §7.1, §7.2, Table 1, Table 2, §8.1, §8.2.
  • J. Pennington, R. Socher, and C. Manning (2014) Glove: global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543. Cited by: §3.3.
  • M. Petrochuk and L. Zettlemoyer (2018) SimpleQuestions nearly solved: a new upperbound and baseline approach. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 554–558. External Links: Link Cited by: §1, §2, §4.1, Table 2.
  • M. Schlichtkrull, T. N. Kipf, P. Bloem, R. Van Den Berg, I. Titov, and M. Welling (2018) Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp. 593–607. Cited by: §9.
  • C. Unger, A. N. Ngomo, and E. Cabrio (2016) 6th open challenge on question answering over linked data (qald-6). In Semantic Web Evaluation Challenge, pp. 171–177. Cited by: §2.
  • R. Usbeck, A. N. Ngomo, F. Conrads, M. Röder, and G. Napolitano (2018) 8th challenge on question answering over linked data (qald-8). language 7, pp. 1. Cited by: §2.
  • R. Usbeck, A. N. Ngomo, B. Haarmann, A. Krithara, M. Röder, and G. Napolitano (2017) 7th open challenge on question answering over linked data (qald-7). In Semantic Web Evaluation Challenge, pp. 59–69. Cited by: §2.
  • D. Vrandečić and M. Krötzsch (2014) Wikidata: a free collaborative knowledge base. Communications of the ACM 57 (10), pp. 78–85. Cited by: §2.
  • K. Xu, S. Reddy, Y. Feng, S. Huang, and D. Zhao (2016) Question answering on freebase via relation extraction and textual evidence. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 800–810. Cited by: §1.
  • W. Yin, M. Yu, B. Xiang, B. Zhou, and H. Schütze (2016) Simple question answering by attentive convolutional neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 1746–1756. External Links: Link Cited by: §1, §2, Table 1, Table 2.
  • M. Yu, W. Yin, K. S. Hasan, C. dos Santos, B. Xiang, and B. Zhou (2017) Improved neural relation detection for knowledge base question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 571–581. External Links: Document, Link Cited by: §2, §2, §7.1, §7.2, Table 1, Table 2.
  • P. Zhou, W. Shi, J. Tian, Z. Qi, B. Li, H. Hao, and B. Xu (2016) Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Berlin, Germany, pp. 207–212. External Links: Link, Document Cited by: §5.2.