
Message Passing for Query Answering over Knowledge Graphs

02/06/2020
by Daniel Daza, et al.
Vrije Universiteit Amsterdam

Logic-based systems for query answering over knowledge graphs return only answers that rely on information explicitly represented in the graph. To improve recall, recent works have proposed the use of embeddings to predict additional information like missing links or labels. These embeddings enable scoring entities in the graph as the answer to a query, without being fully dependent on the graph structure. In the simplest case, answering a query in such a setting requires predicting a link between two entities. However, link prediction is not sufficient to address complex queries that involve multiple entities and variables. To solve this task, we propose to apply a message passing mechanism to a graph representation of the query, where nodes correspond to variables and entities. This results in an embedding of the query, such that answering entities are close to it in the embedding space. The general formulation of our method allows it to encode a more diverse set of query types in comparison to previous work. We evaluate our method by answering queries that rely on edges not seen during training, obtaining competitive performance. In contrast with previous work, we show that our method can generalize from training for the single-hop, link prediction task to answering queries with more complex structures. A qualitative analysis reveals that the learned embeddings successfully capture the notion of different entity types.


1 Introduction

Figure 1: We propose an architecture to encode the query graph, where we train embeddings for entities in the query, and generic type embeddings for variables. An R-GCN propagates information across the graph according to the relations that are present, to obtain node representations that depend on the structure of the query. A final aggregation function yields the query embedding.

Graphs are data structures suitable for a wide variety of applications, including knowledge representation. They are useful for encoding information from different domains by representing discrete entities and relations of different types between them, forming a Knowledge Graph (KG). A common way to answer a question using a KG is to pose it as a structured query (for example, using the SPARQL query language [9]). The query is then answered via logical inference, using the information present in the graph. However, knowledge graphs are usually incomplete, either due to the construction process or their dynamic nature. This means that there will be cases where these systems return no answer for a query. To circumvent this problem, one could use query relaxation techniques, which analyze and modify the query to reduce the constraints that an entity must meet to be considered an answer [4, 6].

We address this problem by answering a query without being limited by an incomplete KG, while avoiding any direct modification of the query. We follow recent works that propose to map the query and all entities in the KG to an embedding space [7, 17, 11]. There, we can compute similarity scores to produce a ranked list of answers, even if the graph is missing some of the information needed to answer the original query.

In this work, we propose Message Passing Query Embedding (MPQE), motivated by the observation that queries over a KG can be represented by small graphs, where nodes correspond to entities (constants) and variables in the query. We employ a Graph Neural Network (GNN) to perform message passing on the query graph, and an aggregation function to combine all the messages in a single vector, which acts as a representation of the query in the embedding space. Our architecture is illustrated in fig. 1. By training on the task of query answering, our method jointly learns embeddings of entities and variables. Our contributions can be summarized as follows:

  • We propose a novel method to embed queries over knowledge graphs that addresses limitations of previous work in terms of computational complexity and the diversity of query structures it admits.

  • We introduce three datasets for the evaluation of complex query answering over knowledge graphs, ranging from thousands to millions of entities and edges.

  • We carry out multiple experiments to evaluate the performance of methods on query answering. Our results show that our architecture is competitive with the state-of-the-art method for query embedding when training on multiple query structures. We demonstrate the superior generalization properties of our method by training for link prediction only. The results show that MPQE generalizes to much more complex queries not seen during training.

  • We conduct a qualitative analysis of the entity embeddings produced by query embedding methods, and show that MPQE learns a more structured embedding space.

2 Problem Definition

We define a Knowledge Graph (KG) as a tuple $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a set of nodes representing entities, and $\mathcal{E}$ a set of typed edges between the nodes. A function $\tau: \mathcal{V} \rightarrow \mathcal{T}$ assigns a type to every node, where $\mathcal{T}$ is a set of node types. Each edge corresponds to a relation between two nodes $v_i$ and $v_j$, that we denote by $r(v_i, v_j)$, where $r \in \mathcal{R}$ is a relation type.

Given a KG, we can pose queries that seek an entity satisfying certain conditions. One way to define these conditions is to use a conjunctive form, which consists of a conjunction of binary predicates whose arguments are entities or query variables. The condition specifies constraints on the relations between entities and variables.

To illustrate this, consider a KG of an academic institution where researchers work on topics, and topics are related to projects. We can formulate the following query: “select all projects $P$, such that topic $T$ is related to $P$, and both alice and bob work on $T$.” This query asks for entities that satisfy the following condition:

$$q = P.\,\exists T : \text{related}(T, P) \wedge \text{works\_on}(\text{alice}, T) \wedge \text{works\_on}(\text{bob}, T) \tag{1}$$

In general, a query $q$ is defined by a condition on a target variable $v_t$ as follows:

$$q = v_t.\,\exists v_1, \ldots, v_m : r_1(a_1, b_1) \wedge \cdots \wedge r_m(a_m, b_m) \tag{2}$$

where $r_i \in \mathcal{R}$, and $a_i$ and $b_i$ are either entities in $\mathcal{V}$, or query variables in $\{v_t, v_1, \ldots, v_m\}$. An entity is therefore considered an answer if it satisfies the condition defined by the query.

We address the problem of returning a list of entities that satisfy the query, even when the binary predicates would require edges missing in the KG. To do so, we assign an embedding $\mathbf{e}_v \in \mathbb{R}^d$ to every entity $v \in \mathcal{V}$. We additionally define an embedding method for the query, that maps the complete query to a vector $\mathbf{q} \in \mathbb{R}^d$. We can then score each entity in the KG as an answer to the query, using the cosine similarity between $\mathbf{q}$ and the entity embedding $\mathbf{e}_v$:

$$\operatorname{score}(\mathbf{q}, \mathbf{e}_v) = \frac{\mathbf{q}^\top \mathbf{e}_v}{\lVert\mathbf{q}\rVert\,\lVert\mathbf{e}_v\rVert} \tag{3}$$

The problem thus requires a specification of a query embedding mechanism that captures the properties of the entities relevant to the query.
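As a concrete illustration, here is a minimal sketch of this scoring step in PyTorch (shapes and names are ours, not from the paper's code):

```python
import torch
import torch.nn.functional as F

def score_entities(query_emb: torch.Tensor, entity_embs: torch.Tensor) -> torch.Tensor:
    # Cosine similarity (eq. 3) between one query embedding of shape (d,)
    # and all entity embeddings of shape (num_entities, d).
    return F.cosine_similarity(query_emb.unsqueeze(0), entity_embs, dim=-1)

# Toy usage: rank 10,000 random entity embeddings against a random query.
q = torch.randn(128)
E = torch.randn(10_000, 128)
ranking = torch.argsort(score_entities(q, E), descending=True)
```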

3 Message Passing Query Embedding

As noted in previous work [7, 17], some queries in conjunctive form can be represented as a Directed Acyclic Graph (DAG). In this graph, the leaf nodes correspond to entities in the query, the root to the variable to be retrieved, and any intermediate nodes to other variables in the query. In the SPARQL query language, these graphs are Basic Graph Patterns (BGPs) [9] which, in contrast with such DAGs, are not constrained to having entities only in the leaves. In this work, we are concerned with the latter, more general case.

Given a query of the form given in eq. 2, we define the query graph as the tuple $\mathcal{G}_q = (\mathcal{V}_q, \mathcal{E}_q)$. Here, $\mathcal{V}_q$ is the union of the entity nodes and the variable nodes: it contains a node for each entity present in the binary predicates, and a node for each variable. To construct $\mathcal{E}_q$, we add one edge for each binary predicate in the query.
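To make the construction concrete, the following sketch builds the query graph of eq. (1) as the edge tensors a graph library would consume; all identifiers are illustrative, not from the paper's code:

```python
import torch

# Nodes: entities 'alice' and 'bob', variables 'T' (topic) and 'P' (target).
nodes = ['alice', 'bob', 'T', 'P']
node_idx = {n: i for i, n in enumerate(nodes)}
rel_idx = {'works_on': 0, 'related': 1}

# One edge per binary predicate in the query.
triples = [('alice', 'works_on', 'T'),
           ('bob', 'works_on', 'T'),
           ('T', 'related', 'P')]

edge_index = torch.tensor([[node_idx[s] for s, _, _ in triples],
                           [node_idx[o] for _, _, o in triples]])
edge_type = torch.tensor([rel_idx[r] for _, r, _ in triples])
```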

The representation of the query as a graph allows us to combine the use of entity embeddings with recent advances in neural networks for graph-structured data [19]. Our method, which we call Message Passing Query Embedding (MPQE), has three steps: initialization of the nodes, message passing, and aggregation of the node states into one embedding for the query.

3.1 Model Definition

The first step is initializing the nodes of the query graph. We do this by assigning an initial feature vector to every node in the query graph, given by a one-hot representation $\mathbf{x}_v$ with $|\mathcal{V}|$ elements if $v$ is an entity node, or $|\mathcal{T}|$ elements if it is a variable node. This representation is used to index embedding matrices that project the nodes into a low-dimensional space. We define a matrix of entity embeddings $\mathbf{E} \in \mathbb{R}^{|\mathcal{V}| \times d}$, where $d$ is the dimension of the embedding space, and type embeddings with a matrix $\mathbf{T} \in \mathbb{R}^{|\mathcal{T}| \times d}$. The node embedding function is defined as follows:

$$\mathbf{h}_v^{(0)} = \begin{cases} \mathbf{E}^\top \mathbf{x}_v & \text{if } v \text{ is an entity node} \\ \mathbf{T}^\top \mathbf{x}_v & \text{if } v \text{ is a variable node} \end{cases} \tag{4}$$

In words, this means that each entity has its own embedding, and that each variable is initialized with a representation of its type. Note that we overload the definition of $\tau$ to also provide types for the variable nodes of queries.
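A minimal sketch of this initialization, under our own convention that each node carries an index (into the entity table or the type table) and a boolean entity mask:

```python
import torch
import torch.nn as nn

class NodeInit(nn.Module):
    # Eq. (4): entity nodes get their own embedding; variable nodes get the
    # embedding of their type.
    def __init__(self, num_entities: int, num_types: int, dim: int):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)  # matrix E
        self.type_emb = nn.Embedding(num_types, dim)       # matrix T

    def forward(self, ids: torch.Tensor, is_entity: torch.Tensor) -> torch.Tensor:
        # ids: entity index for entity nodes, type index for variable nodes.
        h = torch.zeros(ids.size(0), self.entity_emb.embedding_dim)
        h[is_entity] = self.entity_emb(ids[is_entity])
        h[~is_entity] = self.type_emb(ids[~is_entity])
        return h
```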

Having defined features for every node in the query graph, we proceed to apply $L$ steps of message passing with a GNN. In particular, we employ a Relational Graph Convolutional Network (R-GCN) [16], which updates the features of a node taking into account its neighbors and the types of the relations involved. The representation of node $i$ at step $l+1$ of the R-GCN is defined as follows:

$$\mathbf{h}_i^{(l+1)} = \sigma\left( \mathbf{W}_0^{(l)} \mathbf{h}_i^{(l)} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_i^r} \frac{1}{|\mathcal{N}_i^r|} \mathbf{W}_r^{(l)} \mathbf{h}_j^{(l)} \right) \tag{5}$$

where $\sigma$ is a non-linearity, $\mathcal{N}_i^r$ is the set of neighbors of node $i$ through relation type $r$, and $\mathbf{W}_0^{(l)}$ and $\mathbf{W}_r^{(l)}$ are parameters of the model.
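Since the implementation is built on PyTorch Geometric (see section 5), a plausible sketch of the message passing step uses its RGCNConv layer; the two-layer, ReLU configuration follows section 5, but the exact wiring is our assumption:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class RGCNEncoder(torch.nn.Module):
    def __init__(self, dim: int, num_relations: int, num_layers: int = 2):
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [RGCNConv(dim, dim, num_relations) for _ in range(num_layers)])

    def forward(self, x, edge_index, edge_type):
        # x: (num_nodes, dim) initial node features from eq. (4).
        for conv in self.convs:
            x = F.relu(conv(x, edge_index, edge_type))
        return x  # updated node states h^(L)
```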

After $L$ applications of an R-GCN layer, the representations of all nodes in the query graph can be combined into a single vector that acts as the embedding of the query, by means of an aggregation function $\phi$:

$$\mathbf{q} = \phi\left(\{\mathbf{h}_v^{(L)} : v \in \mathcal{V}_q\}\right) \tag{6}$$

We continue by defining several options for this function.

Let $D$ denote the diameter of the query graph (the longest shortest path between two nodes in the graph). We propose an adaptive query embedding method, by noting that at most $D$ message passing steps are required to propagate messages from all nodes to the target node. Given a query graph, the method performs $D$ steps of message passing, and then employs a Target Message (TM) aggregation function, which simply selects the representation of the target node:

$$\phi_{\text{TM}} = \mathbf{h}_{v_t}^{(D)} \tag{7}$$
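For instance, the diameter of the example query graph of eq. (1) can be computed on its undirected version; a sketch using networkx (our choice of library):

```python
import networkx as nx

# Undirected view of the query graph from eq. (1).
g = nx.Graph([('alice', 'T'), ('bob', 'T'), ('T', 'P')])
num_steps = nx.diameter(g)  # 2, e.g. the shortest path from alice to P
```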

Alternative aggregation functions can leverage the representations of other nodes in the query graph. Simple permutation-invariant choices include the sum and the maximum, but we also consider functions with additional parameters [8, 22]. We first consider an aggregation function that passes all representations through a Multi-Layer Perceptron (MLP) and then sums the results:

$$\phi_{\text{MLP}} = \sum_{v \in \mathcal{V}_q} \text{MLP}\left(\mathbf{h}_v^{(L)}\right) \tag{8}$$

Previous works have highlighted the importance of leveraging features from different layers of a neural network [8, 20], which motivates an aggregation function that concatenates node representations from the hidden layers of the R-GCN. We denote this function as CMLP:

$$\phi_{\text{CMLP}} = \sum_{v \in \mathcal{V}_q} \text{MLP}\left(\left[\mathbf{h}_v^{(1)}; \ldots; \mathbf{h}_v^{(L)}\right]\right) \tag{9}$$
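Hedged sketches of two of these aggregators follow (TM and the MLP variant); CMLP is the same as the MLP variant with the per-layer node states concatenated before the MLP:

```python
import torch
import torch.nn as nn

def aggregate_tm(h: torch.Tensor, target_idx: int) -> torch.Tensor:
    # Eq. (7): the query embedding is the final state of the target node.
    return h[target_idx]

class SumMLP(nn.Module):
    # Eq. (8): pass every node state through an MLP, then sum.
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) final node states of one query graph.
        return self.mlp(h).sum(dim=0)
```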

The parameters of MPQE consist of entity and type embeddings, together with the parameters of the R-GCN used during the message passing procedure, and any additional parameters included in the aggregation function. Following previous work on query embedding [7, 11], we optimize MPQE using gradient descent on a contrastive loss function, where given a query $q$ and its embedding $\mathbf{q}$, a positive sample corresponds to an entity $v^+$ in the knowledge graph that answers the query, and a negative sample is an entity $v^-$ sampled at random, that is not an answer to the query but has the correct type. We minimize the following margin loss function:

$$\mathcal{L} = \max\left(0,\, 1 - \operatorname{score}(\mathbf{q}, \mathbf{e}_{v^+}) + \operatorname{score}(\mathbf{q}, \mathbf{e}_{v^-})\right) \tag{10}$$

where $\mathbf{e}_v$ is the embedding of entity $v$ (according to eq. 4). The optimization of this loss function encourages higher scores for positive samples than for negative ones, as it penalizes the model whenever the margin between the two scores is lower than 1.
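A sketch of this loss using the cosine score of eq. (3) and the margin of 1 stated above:

```python
import torch
import torch.nn.functional as F

def margin_loss(q: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor,
                margin: float = 1.0) -> torch.Tensor:
    # Eq. (10): penalize whenever the positive entity does not outscore the
    # negative one by at least the margin. All inputs have shape (d,).
    s_pos = F.cosine_similarity(q, pos, dim=0)
    s_neg = F.cosine_similarity(q, neg, dim=0)
    return F.relu(margin - s_pos + s_neg)
```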

4 Related Work

Multiple approaches for machine learning on graphs consider embedding the graph into a vector space [2, 18, 21]. The applicability of these methods for answering complex queries is limited: for each link that needs to be predicted to answer a query, link prediction methods must consider all possible entities, which is exponential in the size of the query. In comparison, our method directly encodes the query into an embedding, which is optimized to be similar to the embeddings of correct entities. This gives our method linear complexity in the size of the graph.

To avoid solving a link prediction problem at each step when traversing a KG, other works perform prediction across longer paths [3, 12], which restricts these methods to chain-like queries. This contrasts with our work, where we are interested in answering queries of more general shapes.

More recent works have also addressed the problem of obtaining a vector representation of a query, which is then used to obtain approximate answers, while still leveraging the properties of embedding methods such as TransE. This is achieved by partitioning the query graph into different subgraphs, so that candidate answers can be provided for each of them [23]. In [17], the authors pre-train embeddings using an algorithm inspired by TransE. In order to combine the embeddings in a meaningful way for the task of query answering, they propose a set of rules to aggregate the embeddings by following the structure of the query graph. Since their method is related to probabilistic models of entities in context (such as DeepWalk [13]), the embeddings depend on how the context is selected, and the effect of this choice on query answering is not clear. Our method differs significantly from this approach: instead of relying on a separate pre-training step, we learn with an objective that optimizes entity embeddings for the task of query answering directly.


The approaches most closely related to our work are recently proposed methods for encoding queries directly in the embedding space [7, 11], which apply a sequence of projection and intersection operators that follow the structure of the query graph. These methods are constrained to queries whose graph form is a Directed Acyclic Graph (DAG) in which entities can only appear at the leaves. Furthermore, the use of projection and intersection mechanisms requires these models to be trained with multiple query structures comprising both chains and intersections. Our method has a more general formulation that enables it to i) encode a general set of query graphs, without constraints on the location of entities in the query, and ii) learn from link prediction training alone.

5 Experiments

               | AIFB   | MUTAG  | AM        | Bio
Entities       | 2,601  | 22,372 | 372,584   | 162,622
Entity types   | 6      | 4      | 5         | 5
Relations      | 39,436 | 81,332 | 1,193,402 | 8,045,726
Relation types | 49     | 8      | 19        | 56
Table 1: Statistics of the knowledge graphs that we use for training and evaluation.

We evaluate the performance of MPQE in query answering over knowledge graphs, by considering 7 different query structures (see fig. 2). All the code to reproduce our experiments is available online at https://github.com/dfdazac/mpqe.

Datasets

We make use of publicly available knowledge graphs that have been used in the literature of graph representation learning [14, 15, 16] and query answering [7, 11]:

  • AIFB: a KG of an academic institution, where entities are persons, organizations, projects, publications, and topics.

  • MUTAG: a KG of carcinogenic molecules, where entities are atoms, bonds, compounds, and structures.

  • AM: this KG contains the relations between different artifacts in the Amsterdam Museum, including locations, makers, documentation, and agents, among others.

  • Bio: a dataset of a biological interaction network containing entities of type drug, disease, protein, side effect, and biological processes.

A list of their statistics can be found in table 1.

Figure 2: Query structures that we consider for the evaluation of methods on query answering. Green nodes correspond to entities, and the rest are variables in the query, with blue nodes representing the target of the query.

             | AIFB                  | MUTAG                 | AM                    | Bio
             | AUC       | APR       | AUC       | APR       | AUC       | APR       | AUC       | APR
Method       | Base All  | Base All  | Base All  | Base All  | Base All  | Base All  | Base All  | Base All
GQE-TransE   | 85.1 83.1 | 87.9 86.7 | 94.5 78.8 | 93.9 81.0 | 92.4 80.9 | 92.1 82.3 | 94.6 87.4 | 95.4 88.9
GQE-DistMult | 85.1 83.8 | 86.6 86.0 | 81.3 80.6 | 81.8 81.1 | 83.9 82.9 | 84.8 83.2 | 97.0 90.0 | 96.5 90.3
GQE-Bilinear | 86.0 83.4 | 84.0 83.3 | 94.0 78.5 | 94.0 79.7 | 91.0 80.7 | 91.5 84.4 | 98.1 90.5 | 97.4 90.8
RGCN-TM      | 89.3 84.9 | 90.0 87.4 | 91.2 76.7 | 90.9 77.6 | 92.0 84.2 | 92.4 86.3 | 98.2 88.8 | 97.7 89.8
RGCN-sum     | 88.1 84.7 | 88.7 86.8 | 92.4 74.6 | 90.9 73.1 | 90.1 80.9 | 91.0 83.6 | 98.1 90.0 | 97.3 90.5
RGCN-max     | 87.6 83.4 | 88.1 85.9 | 91.4 74.9 | 89.4 72.7 | 90.3 80.9 | 90.6 82.5 | 97.3 88.3 | 96.4 88.7
RGCN-MLP     | 89.2 85.8 | 90.7 87.3 | 90.9 73.7 | 90.9 74.8 | 92.0 82.9 | 91.7 84.1 | 97.8 89.9 | 97.2 90.0
RGCN-CMLP    | 90.0 86.3 | 91.6 89.1 | 92.0 74.3 | 91.2 72.5 | 91.9 82.5 | 92.3 85.5 | 98.0 90.1 | 97.3 90.2
Table 2: Results on query answering averaged across different query structures. We show results when evaluating on regular negative samples (Base) and when including hard negative samples (All).

Query generation

To obtain query graphs, we sample subgraphs from the KG, following the structures shown in Figure 2 (chosen such that our work is comparable to related work [7, 11]). Each sampled subgraph specifies the entities and the types of variables in the query (including the target node), together with the correct answer to the query, which is used as a positive sample. For each query we also obtain a negative sample and, in the case of query graphs with intersections, a hard negative sample. These are entities that would be a correct answer to the query if the conjunction represented by the intersection were relaxed to a disjunction.

Evaluation

We evaluate the effectiveness of our method when answering queries that require information in the graph not observed during training. In particular, given a KG, we start by removing 10% of its edges. Using this incomplete graph, we extract 1 million subgraphs, containing all the query structures outlined previously. These queries form the training set, which we use to optimize eq. 10.

We then restore the removed edges, and extract 11,000 additional subgraphs of all structures, ensuring that they all rely on at least one of the edges that was removed to create the training set. We split this set of query graphs into two disjoint sets, containing 1,000 queries for validation and 10,000 for testing. We use the validation set for early stopping during training, and we report results on the test set.

For evaluation, we use the embedding of a query to compute a score against its correct answer and a negative sample, using eq. 3. The scores obtained for a set of queries are used to calculate the area under the ROC curve (AUC). Furthermore, we compute scores against at most 1,000 negative samples, which we use to compute the Average Percentile Rank (APR), so that in the ideal case, a correct entity has a percentile rank of 100. These metrics are thus a proxy for the retrieval quality of our method in query answering.
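A sketch of how these two metrics could be computed (our own helper code, not the paper's evaluation script):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc(pos_scores, neg_scores):
    # Label positives 1 and negatives 0, then take the area under the ROC curve.
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    scores = np.concatenate([pos_scores, neg_scores])
    return roc_auc_score(labels, scores)

def percentile_rank(pos_score, neg_scores):
    # Percentage of negative samples the correct answer outscores (100 is ideal).
    return 100.0 * np.mean(pos_score > np.asarray(neg_scores))
```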

Models

We evaluate the performance of MPQE under different aggregation functions. We initialize all embedding matrices randomly, although they could be obtained from a pretraining step with methods such as TransE [2]. With the exception of the TM aggregation function (where the number of message passing steps is given by the query diameter), we use 2 R-GCN layers. For aggregation functions with MLPs, we use two fully-connected layers, and in all cases we use ReLU for the nonlinearities.

As a baseline we include the Graph Query Embedding (GQE) method by Hamilton et al. (2018) [7], with the default settings reported by the authors in their implementation (https://github.com/williamleif/graphqembed). We test the three variants that they propose, namely TransE, DistMult, and Bilinear.

We minimize eq. 10 using the Adam optimizer with a learning rate of 0.01, and use an embedding dimension of 128. We first train the models on 1-chain queries until convergence, and then on the full set of query structures until convergence, as we found that this procedure speeds up convergence. Our implementation uses PyTorch and the PyTorch Geometric library [5].
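As a self-contained illustration of this setup, here is a toy optimization step with the stated hyperparameters; a linear layer stands in for the full MPQE encoder:

```python
import torch
import torch.nn.functional as F

dim = 128                                    # embedding dimension from the text
encoder = torch.nn.Linear(dim, dim)          # stand-in for the MPQE encoder
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.01)

node_states = torch.randn(4, dim)            # toy node states of one query graph
pos, neg = torch.randn(dim), torch.randn(dim)

q = encoder(node_states).sum(dim=0)          # toy query embedding
loss = F.relu(1.0 - F.cosine_similarity(q, pos, dim=0)
              + F.cosine_similarity(q, neg, dim=0))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```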

5.1 Results

Query answering

The results for the query answering task are shown in table 2. We show results for two cases: in the Base case, we evaluate the performance across all query structures with regular positive and negative samples; in the All case, we also include hard negative samples. We observe that MPQE obtains competitive performance in comparison with GQE across different datasets. We also note that the performance of our method is consistent when ranking against multiple negative samples, as shown by the APR results.

As expected, performance decreases when considering hard negative samples, with both methods exhibiting a similar reduction. The largest difference occurs on the MUTAG dataset, which we identified as the dataset with the least diverse set of relations, where MPQE obtains lower performance. In spite of this discrepancy, the difference in the averages for the MUTAG dataset between GQE-DistMult and MPQE-TM (the best variants) is not significant according to a Wilcoxon signed-rank test: while GQE-DistMult handles hard negative samples well (which occur only in queries with intersections), MPQE-TM performs better on regular samples, across all query structures.

Generalization

While the previous experiments show that our method is competitive with GQE, we argue that it has a more general formulation. To examine the generalization properties of MPQE, we evaluate the methods when training on 1-chain queries only. In this scenario, the models are optimized to perform link prediction, but we carry out the evaluation using the complete set of query structures.

GQE is designed around an intersection operator that can only be optimized if the training set contains queries with intersections. Therefore, when trained on 1-chain queries only, GQE cannot provide an answer better than random for queries with intersections. Our method has no such limitation. We thus consider two evaluation modes when training on 1-chain queries: evaluating on queries with chain structures only, and evaluating on the complete set of query structures (where GQE is not applicable). These modes are denoted as “ch” and “all”, respectively, in table 3. The results of MPQE are competitive when evaluating on queries with chains only, and crucially, it also generalizes well to six query structures not seen during training. This surprising result shows that message passing is an effective mechanism that does not require training on many diverse query structures to generalize well, as is the case for GQE.

Figure 3: Query answering performance (AUC) as a function of the number of message passing steps (implemented by layers of an R-GCN), evaluated across different query types. Dark circles correspond to the diameter of the corresponding query.

             | AIFB      | MUTAG     | AM        | Bio
Method       | ch   all  | ch   all  | ch   all  | ch   all
GQE-TransE   | 74.0  --  | 89.4  --  | 85.8  --  | 85.5  --
GQE-DistMult | 72.8  --  | 85.4  --  | 82.4  --  | 95.9  --
GQE-Bilinear | 72.7  --  | 89.1  --  | 85.9  --  | 85.8  --
RGCN-TM      | 77.0 75.5 | 86.8 77.2 | 85.0 81.6 | 96.4 83.9
RGCN-sum     | 69.8 69.6 | 82.8 74.0 | 52.5 53.9 | 92.4 80.0
RGCN-max     | 74.1 71.9 | 77.1 71.6 | 51.2 53.0 | 92.0 79.9
RGCN-MLP     | 69.1 68.0 | 76.0 70.0 | 51.3 53.8 | 90.7 78.7
RGCN-CMLP    | 69.7 69.1 | 84.6 74.2 | 51.5 53.8 | 89.8 78.3
Table 3: Generalization results on query answering (AUC) averaged across different query structures, when training for link prediction. The results show the performance on a test set of queries with chains only (ch), and the complete set of queries with chains and intersections (all). Dashes indicate results no better than random.

Message passing performance

An interesting observation from our experiments is that the message passing mechanism alone is sufficient to provide good performance for query answering, as we can see from the results for the MPQE-TM architecture. In this model, we perform a number of message passing steps equal to the diameter of the query, and take as the query embedding the resulting feature vector at the target node. Intuitively, this allows MPQE-TM to adapt to the structure of a query so that after message passing, all information from the entity and variable nodes has reached the target node. To confirm this intuition, we evaluate the performance of MPQE as a function of the number of message passing steps, ranging from 1 to 4. The results are shown in Figure 3 for all the query structures that we have considered. We highlight the points that correspond to the diameter of the query, and note that the results align with our intuition about the message passing mechanism: when the number of steps matches the diameter, there is a significant increase in performance, and further steps have little effect. This supports the superior generalization observed in the previous experiments in comparison with GQE and with MPQE architectures where the number of R-GCN layers is fixed.

Visualization

In order to assess the properties of the embedding space induced by MPQE, we sample 200 entities of each type in the Amsterdam Museum dataset, and visualize them using t-SNE [10]. The results are shown in fig. 4 for GQE-Bilinear and MPQE-TM. We observe that the embedding space for MPQE is clearly structured: the embeddings form clusters of entities of the same type. This is in stark contrast with the embeddings of GQE, where we do not observe a clear structure, apart from a single, loosely concentrated cluster. These results complement the generalization experiments, where training for link prediction resulted in useful embeddings for more complex queries. With such a structured space, MPQE can compose messages for paths of length 1 to move across the space and obtain an embedding for queries that require more message passing steps.

Figure 4: Visualization of the entity embeddings learned by GQE-Bilinear (left) and MPQE-TM (right) after dimensionality reduction. Each color represents an entity type.

6 Conclusion

We have presented MPQE, a neural architecture to encode complex queries on knowledge graphs that jointly learns entity and type embeddings, and uses a straightforward message passing architecture to obtain a query embedding. Our experiments show that message passing across the query graph is a powerful mechanism for query answering, and that it generalizes to multiple query structures even when trained only for single-hop link prediction.

The qualitative results show that MPQE learns a well-structured embedding space. This result motivates future research on the application of the learned embeddings to other tasks related to KGs, such as clustering, and node and graph classification. Under this new light, MPQE can be seen as an unsupervised representation learning method for KGs, since all the training data it requires is generated from the graph.

We showed that the general formulation of our model allows it to exhibit greater generalization, but we also note applications of this architecture that constitute interesting directions for future work. By being able to encode queries independently of the position of entities and variables, we could encode queries with additional information that could be used to condition the answers on a given context. Such an application would be useful in information retrieval and recommender systems.

Our method presents limitations when evaluating on hard negative samples. Our experiments showed a slight increase in performance when increasing the number of message passing steps, but the effect was not significant. Further modifications could include improving the message passing procedure by including attention or gating functions, which would enable better conditioning of the query embedding on the structure of the query graph.

References

  • [1] Y. Bengio and Y. LeCun (Eds.) (2015) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Cited by: [21].
  • [2] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pp. 2787–2795. Cited by: §4, §5.
  • [3] R. Das, A. Neelakantan, D. Belanger, and A. McCallum (2017) Chains of reasoning over entities, relations, and text using recurrent neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Valencia, Spain. Cited by: §4.
  • [4] S. Elbassuoni, M. Ramanath, and G. Weikum (2011) Query relaxation for entity-relationship search. In Extended Semantic Web Conference, pp. 62–76. Cited by: §1.
  • [5] M. Fey and J. E. Lenssen (2019) Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §5.
  • [6] G. Fokou, S. Jean, A. Hadjali, and M. Baron (2017) Handling failing rdf queries: from diagnosis to relaxation. Knowledge and Information Systems 50 (1), pp. 167–195. Cited by: §1.
  • [7] W. L. Hamilton, P. Bajaj, M. Zitnik, D. Jurafsky, and J. Leskovec (2018) Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada., pp. 2030–2041. Cited by: §1, §3.1, §3, §4, §5, §5, footnote 2.
  • [8] W. L. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 1025–1035. Cited by: §3.1, §3.1.
  • [9] S. Harris, A. Seaborne, and E. Prud’hommeaux (2013) SPARQL 1.1 query language. W3C recommendation 21 (10), pp. 778. Cited by: §1, §3.
  • [10] L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605. Cited by: §5.1.
  • [11] G. Mai, K. Janowicz, B. Yan, R. Zhu, L. Cai, and N. Lao (2019) Contextual graph attention for answering logical queries over incomplete knowledge graphs. In Proceedings of K-CAP 2019, Nov. 19 - 21,2019, Marina del Rey, CA, USA., Cited by: §1, §3.1, §4, §5, footnote 2.
  • [12] A. Neelakantan, B. Roth, and A. McCallum (2015) Compositional vector space models for knowledge base completion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Beijing, China, pp. 156–166. Cited by: §4.
  • [13] B. Perozzi, R. Al-Rfou, and S. Skiena (2014) Deepwalk: online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701–710. Cited by: §4.
  • [14] P. Ristoski, G. K. D. De Vries, and H. Paulheim (2016) A collection of benchmark datasets for systematic evaluations of machine learning on the semantic web. In International Semantic Web Conference, pp. 186–194. Cited by: §5.
  • [15] P. Ristoski and H. Paulheim (2016) RDF2Vec: RDF graph embeddings for data mining. In International Semantic Web Conference, pp. 498–514. Cited by: §5.
  • [16] M. S. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling (2018) Modeling relational data with graph convolutional networks. In The Semantic Web - 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, pp. 593–607. Cited by: §3.1, §5.
  • [17] M. Wang, R. Wang, J. Liu, Y. Chen, L. Zhang, and G. Qi (2018) Towards empty answers in sparql: approximating querying with rdf embedding. In International Semantic Web Conference, pp. 513–529. Cited by: §1, §3, §4.
  • [18] Z. Wang, J. Zhang, J. Feng, and Z. Chen (2014) Knowledge graph embedding by translating on hyperplanes. In Twenty-Eighth AAAI Conference on Artificial Intelligence. Cited by: §4.
  • [19] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu (2019) A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596. Cited by: §3.
  • [20] K. Xu, C. Li, Y. Tian, T. Sonobe, K. Kawarabayashi, and S. Jegelka (2018) Representation learning on graphs with jumping knowledge networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 5449–5458. Cited by: §3.1.
  • [21] B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2015) Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015 [1]. Cited by: §4.
  • [22] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Póczos, R. R. Salakhutdinov, and A. J. Smola (2017) Deep sets. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 3394–3404. Cited by: §3.1.
  • [23] L. Zhang, X. Zhang, and Z. Feng (2018) TrQuery: an embedding-based framework for recommending sparql queries. In 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), Cited by: §4.