Interaction Embeddings for Prediction and Explanation in Knowledge Graphs

03/12/2019 ∙ by Wen Zhang, et al. ∙ Zhejiang University, Universität Zürich

Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions --- bi-directional effects between entities and relations --- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective --- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions.




1. Introduction

Knowledge graphs (KGs) like Yago (Suchanek et al., 2007), WordNet (Miller, 1995), and Freebase (Bollacker et al., 2008) contain numerous facts in the form of (head entity, relation, tail entity), or (h, r, t) for short. They are useful resources for many AI tasks such as web search (Szumlanski and Gomez, 2010) and question answering (Yin et al., 2016).

Knowledge graph embedding (KGE) learns distributed representations (Hinton et al., 1986) for entities and relations, called entity embeddings and relation embeddings. The embeddings are meant to preserve the information in a KG, and are represented as low-dimensional dense vectors or matrices in continuous vector spaces. Many KGEs, such as the tensor factorization based RESCAL (Nickel et al., 2011), the translation-based TransE (Bordes et al., 2013), the neural tensor network NTN (Socher et al., 2013) and the linear mapping method DistMult (Yang et al., 2015), have been proposed and proven effective in many applications like knowledge graph completion, question answering and relation extraction.

Despite their success in modeling KGs, no existing KGE has formally discussed crossover interactions: bi-directional effects between entities and relations, comprising interactions from relations to entities and interactions from entities to relations. Crossover interactions are quite common and help select related information; such selection is necessary when predicting new triples, because a KG holds diverse information about each entity and relation.

Figure 1. A hypothetical knowledge graph. Nodes and edges represent entities and relations. Solid lines represent existing triples and dashed lines represent triples to be predicted.

We explain the notion of crossover interactions with a running example in Figure 1. During the prediction of (X, isFatherOf, ?), there are six triples about entity X, but only four of them — (X, hasWife, Z), (X, fatherIs, Y), (Y, isFatherOf, X), (S, hasChild, X) — are related to this prediction, because they describe family relationships and help infer the father-child relationship. The other two triples, describing the career of X, do not provide valuable information for this task. In this way, the relation isFatherOf affects which information about the entity is selected for inference. We refer to this as the interaction from relations to entities.

In Figure 1, there are two inference paths about the relation isFatherOf, but only one of them is applicable for predicting (X, isFatherOf, ?). In this way, the information of entity X affects which inference path is chosen. We refer to this as the interaction from entities to relations.

Considering crossover interactions in KGE, the embeddings of both entities and relations in a specific triple should capture their interactions, and should differ across triples. However, most previous methods, like TransE (Bordes et al., 2013), learn a single general embedding that is assumed to preserve all the information for each entity and relation; they ignore interactions for both entities and relations. Some methods, like TransH (Wang et al., 2014) and TransG (Xiao et al., 2016), learn either multiple entity or multiple relation embeddings, but not both; they ignore that crossover interactions are bi-directional and affect entities and relations at the same time.

In this paper, we propose CrossE, a novel KGE which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation, but also generates multiple triple-specific embeddings, called interaction embeddings, for both of them. The interaction embeddings are generated via a relation-specific interaction matrix. Given a triple (h, r, t), there are four main steps in CrossE: 1) generate the interaction embedding for the head entity h; 2) generate the interaction embedding for the relation r; 3) combine the two interaction embeddings; 4) compare the similarity of the combined embedding with the tail entity embedding t.

We evaluate embeddings on canonical link prediction tasks. The experiment results show that CrossE achieves state-of-the-art results on complex and more challenging datasets and exhibits the effectiveness of modeling crossover interactions in a KGE.

Furthermore, we also propose an additional evaluation scheme for KGEs from the perspective of explaining their predictions. Link prediction tasks only evaluate the accuracy of KGEs at predicting missing triples, while in real applications explanations for predictions are valuable, as they improve the reliability of predicted results. To the best of our knowledge, this is the first work to address both link prediction and the explanation of those predictions for KGEs.

The process of generating explanations for a triple (h, r, t) is modeled as searching for reliable paths from h to t and for similar structures that support those path explanations. We evaluate the quality of explanations based on Recall and Average Support. Recall reflects the coverage of triples for which a KGE can generate explanations, and Average Support reflects the reliability of the explanations. Our evaluation of explanations shows that CrossE, benefiting from interaction embeddings, is more capable of giving reliable explanations than other methods, including TransE (Bordes et al., 2013) and ANALOGY (Liu et al., 2017).

In summary, our contributions in this paper are the following:


  • We propose CrossE, a new KGE which models crossover interactions of entities and relations by learning an interaction matrix.

  • We evaluate CrossE compared with various other KGEs on link prediction tasks with three benchmark datasets, and show that CrossE achieves state-of-the-art results on complex and more challenging datasets with a modest parameter size.

  • We propose a new evaluation scheme for embeddings — searching explanations for predictions, and show that CrossE is able to generate more reliable explanations than other methods. This suggests that interaction embeddings are better at capturing similarities between entities and relations in different contexts of triples.

This paper is organized as follows. We review the literature in Section 2. We describe our model in Section 3 and explanation scheme in Section 4. We present the experimental results in Section 5 before concluding in Section 6.

2. Related work

Knowledge graph embedding (KGE) aims to embed a knowledge graph into a continuous vector space and learns dense low dimensional representations for entities and relations. Various types of KGE methods have been proposed and the majority of them learn the relationship between entities using training triples in a knowledge graph. Some methods also utilize extra information, such as logical rules, external text corpus and hierarchical type information to improve the quality of embeddings. Since our work focuses on learning from triples in a knowledge graph without extra information, we mainly summarize those methods learning with triples and briefly summarize methods using extra information in the end.

Considering the requirement of multiple representations for entities and relations from crossover interactions, prior KGEs learning with triples can be classified into two classes: (1) methods learning a general embedding for each entity and relation, and (2) methods learning multiple representations for either of them.

KGEs with general embeddings. Existing embedding methods with general embeddings all represent entities as low-dimensional vectors and relations as operations that combine the representations of the head entity and the tail entity. Most methods are proposed with different assumptions about the vector space and model the knowledge graph from different perspectives. The first translation-based method, TransE (Bordes et al., 2013), regards relations as translations from head entities to tail entities and assumes that the relation-specific translation of the head entity should be close to the tail entity vector; it represents each relation as a single vector. RESCAL (Nickel et al., 2011) regards the knowledge graph as a three-way tensor and learns vector representations for entities and matrix representations for relations via collective tensor factorization. HolE (Nickel et al., 2016b), a compositional vector space model, utilizes interactions between different dimensions of embedding vectors and employs circular correlation to create compositional representations. RDF2Vec (Ristoski and Paulheim, 2016) uses graph walks and Weisfeiler-Lehman subtree RDF kernels to generate entity sequences, regards these sequences as sequences of words in natural language, and then follows word2vec to generate embeddings for entities but not relations. NTN (Socher et al., 2013) represents each relation as a bilinear tensor operator followed by a linear matrix operator. ProjE (Shi and Weninger, 2017) uses a simple but effective shared-variable neural network. DistMult (Yang et al., 2015) learns embeddings from a bilinear objective where each relation is represented as a linear mapping matrix from head entities to tail entities; it successfully captures the compositional semantics of relations. ComplEx (Trouillon et al., 2016) makes use of complex-valued embeddings to handle both symmetric and antisymmetric relations, because the Hermitian dot product is commutative for real values but not for complex values. ANALOGY (Liu et al., 2017) is proposed from the analogical inference point of view and is based on the linear mapping assumption. It adds normality and commutativity constraints to the relation matrices so as to improve the capability of modeling analogical inference, and achieves state-of-the-art results on link prediction tasks.

All these methods learn a general embedding for each entity and relation. They ignore crossover interactions between entities and relations when inferring a new triple in different scenarios.

KGEs with multiple embeddings. Some KGEs learn multiple embeddings for entities or relations under various considerations. Structured Embedding (SE) (Bordes et al., 2011) assumes that the head entity and the tail entity in one triple should be close to each other in some subspace that depends on the relation; it represents each relation with two different matrices to transfer head entities and tail entities. ORC (Zhang, 2017) focuses on one-relation-circles and proposes to learn two different representations for each entity, one as a head entity and the other as a tail entity. TransH (Wang et al., 2014) notices that TransE has trouble dealing with 1-N, N-1, and N-N relations. It learns a specific hyperplane for each relation and represents the relation as a vector close to its hyperplane; entities are projected onto the relation hyperplane when involved in a triple with this relation. TransR (Lin et al., 2015b) considers that various relations focus on different aspects of entities. It represents these different aspects by projecting entities from the entity space to a relation space, yielding relation-specific embeddings for each entity. CTransR (Lin et al., 2015b) is an extension of TransR that considers correlations under each relation type by clustering diverse head-tail entity pairs into groups and learning a distinct relation vector for each group. TransD (Ji et al., 2015) is a more fine-grained model which constructs a dynamic mapping matrix for each entity-relation pair, considering the diversity of entities and relations simultaneously. TranSparse (Ji et al., 2016) is proposed to deal with the heterogeneity and imbalance of knowledge graphs with respect to relations and entities. It represents transfer matrices as adaptive sparse matrices, whose sparse degrees are determined by the number of entities linked by the relation.

These methods mostly consider the interaction from relations to entities and learn multiple representations for entities. But they learn general embeddings for relations and ignore the interaction from entities to relations.

KGEs that utilize extra information. Some KGEs learn embeddings utilizing not only the training triples in a knowledge graph, but also extra information. RTransE (García-Durán et al., 2015), PTransE (Lin et al., 2015a) and CVSM (Neelakantan et al., 2015) utilize path rules as additional constraints to improve embeddings. (Wang et al., 2015) considers three types of physical rules and one logical rule, and formulates inference in a knowledge graph as an integer linear programming problem.

(Krompaß et al., 2015) and TKRL (Xie et al., 2016) propose that the hierarchical type information of entities is of great significance for representation learning in knowledge graphs. (Krompaß et al., 2015) regards entity types as hard constraints in latent variable models for KGs, and TKRL regards hierarchical types as projection matrices for entities.

3. CrossE: Model Description

In this section we provide details of our model CrossE. Our model simulates crossover interactions between entities and relations by learning an interaction matrix to generate multiple specific interaction embeddings.

In our method, each entity and relation is represented by multiple embeddings: (a) a general embedding, which preserves high-level properties, and (b) multiple interaction embeddings, which preserve specific properties as results of crossover interactions. The interaction embeddings are obtained through Hadamard products between general embeddings and an interaction matrix C.

We denote a knowledge graph as G = {E, R, T}, where E, R and T are the sets of entities, relations and triples respectively. The number of entities is n_e, the number of relations is n_r, and the dimension of embeddings is d. Bold letters denote embeddings: E is the n_e × d matrix of general entity embeddings, with each row representing an entity embedding; similarly, R is the n_r × d matrix of general relation embeddings; C is the n_r × d interaction matrix, with each row related to a specific relation.

The basic idea of CrossE is illustrated in Figure 2. The general embeddings (E for entities and R for relations) and the interaction matrix C are represented in the shaded boxes. The interaction embeddings h_I and r_I are results of crossover interactions between entities and relations, and are fully specified by the interaction operation on general embeddings. Thus E, R, and C are parameters that need to be learned, while h_I and r_I are not.

Figure 2. Overview of CrossE. Crossover interactions between general embeddings (E and R) and interaction matrix (C) resulting in interaction embeddings (unshaded boxes).

We now explain the score function and training objective of CrossE. Head entity h, tail entity t, and relation r correspond to high-dimensional 'one-hot' index vectors x_h, x_t and x_r respectively. The learned general embeddings of h, r and t are written as:

    h = x_h^T E,   r = x_r^T R,   t = x_t^T E    (1)
In CrossE, we define a score function for each triple such that valid triples receive high scores and invalid triples receive low scores. The score function has four parts and we describe them next.

  1. Interaction Embedding for Entities. To simulate the effect from relations on head entities, we define the interaction operation applied to a head entity as:

        h_I = c_r ∘ h    (2)

     where ∘ denotes the Hadamard product, an element-wise operator that has proved effective and is widely used in previous methods such as (Nickel et al., 2016b) and (Trouillon et al., 2016). We call h_I the interaction embedding of h. Here, c_r is a relation-specific variable obtained from the interaction matrix C as in (3):

        c_r = x_r^T C    (3)

     As c_r depends on the relation r, the number of interaction embeddings of h equals the number of relations.

  2. Interaction Embedding for Relations. The second interaction operation is applied to relations so as to simulate the effect from head entities. This interaction operation is defined in (4):

        r_I = h_I ∘ r    (4)

     Similar to (2), this is the Hadamard product of h_I and r, and we call r_I the interaction embedding of r. For each head entity, there is one interaction embedding of r.

  3. Combination Operator. The third step obtains the combined representation, which we formulate in a nonlinear way:

        q_hr = tanh(h_I + r_I + b)    (5)

     where b is a global bias vector and tanh bounds the output to the interval (-1, 1). The nonlinearity ensures that the combined representation shares the same distribution interval (both negative and positive values) with the entity representations.

  4. Similarity Operator. The fourth step calculates the similarity between the combined representation q_hr and the general tail entity representation t:

        s(h, r, t) = σ(q_hr · t)    (6)

     where the dot product is used to calculate the similarity and σ is a nonlinear (sigmoid) function that constrains the score to (0, 1).

The overall score function is as follows:

    f(h, r, t) = σ(tanh(c_r ∘ h + c_r ∘ h ∘ r + b) · t)    (7)
To evaluate the effectiveness of the crossover interactions, we devise a simplified variant of CrossE by removing the interaction embeddings and using the general embeddings h and r in place of h_I and r_I in the score function.
Loss function.

We formalize a log-likelihood loss function with negative sampling as the objective for training:

    L = − Σ_{(h,r,t)∈T} Σ_{t'∈D_{(h,r,t)}} [ y_{t'} log f(h, r, t') + (1 − y_{t'}) log(1 − f(h, r, t')) ] + λ ||Ω||₂²

Here, D_{(h,r,t)} is the bag of positive examples with label y = 1 and negative examples with label y = 0 generated for (h, r, t). For a training triple (h, r, t), the positive examples are all t⁺ with (h, r, t⁺) ∈ T, and the negative examples are sampled t⁻ with (h, r, t⁻) ∉ T. The factor λ controls the L2 regularization of the model parameters Ω. The training objective is to minimize L, and we apply a gradient-based approach during training.
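A minimal sketch of this loss for one training triple, with invented scores and labels (one positive, three sampled negatives) and a stand-in parameter vector for the regularizer:

```python
import numpy as np

# Scores f(h, r, t') for one positive example (label 1) and three
# sampled negatives (label 0); the numbers are made up for illustration.
scores = np.array([0.9, 0.2, 0.4, 0.1])
labels = np.array([1.0, 0.0, 0.0, 0.0])

# Negative log-likelihood over the bag of examples, plus L2 regularization
# with factor lam over the model parameters Omega (a stand-in array here).
lam = 1e-3
Omega = np.array([0.5, -0.3, 0.8])

nll = -np.sum(labels * np.log(scores) + (1 - labels) * np.log(1 - scores))
loss = nll + lam * np.sum(Omega ** 2)

assert loss > nll > 0   # the L2 term only ever adds to the loss
```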

Number of Parameters. The total number of parameters for CrossE is n_e × d + 2 × n_r × d + d, as there are n_e × d general entity embedding parameters, n_r × d general relation embedding parameters, n_r × d additional parameters from the interaction matrix, and one d-dimensional bias term. Note that the interaction embeddings are fully specified by these parameters. When predicting head entities, we model the task as the inverse of tail entity prediction, e.g., (t, r⁻¹, ?). In such cases, we need n_r more embeddings for inverse relations. Since n_r ≪ n_e in most knowledge graphs, this does not add many extra parameters.
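The parameter count is easy to check concretely. Using the FB15k sizes from Table 1 and an illustrative dimension d = 100 (not the paper's tuned value):

```python
# Parameter count for CrossE on FB15k-sized data:
# n_e = 14,951 entities, n_r = 1,345 relations, illustrative d = 100.
n_e, n_r, d = 14_951, 1_345, 100

params = n_e * d + 2 * n_r * d + d   # E, plus R and C, plus the bias b

# Doubling the relations for inverse-relation prediction adds rows to
# both R and C, i.e. 2 * n_r * d more parameters.
params_with_inverse = params + 2 * n_r * d

# Even with inverse relations, the total stays dominated by the entity
# embeddings, since n_r << n_e.
assert params_with_inverse < 2 * n_e * d
```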

The main benefits of CrossE. Compared to existing KGEs, CrossE's benefits are as follows: (1) For an entity and a relation, the representations used during a specific triple inference are interaction embeddings (not general embeddings), which simulate the selection of different information for different triple predictions. (2) Multiple interaction embeddings for each entity and relation provide richer representations and better generalization capability. We argue that they are capable of capturing different latent properties depending on the context of interactions, because each interaction embedding can select different similar entities and relations when involved in different triples. (3) CrossE learns one general embedding for each entity and relation, and uses them to generate the interaction embeddings. This requires far fewer extra parameters than learning multiple independent embeddings for each entity and relation.

4. Explanations for Predictions

In this section, we describe how we generate explanations for predicted triples. Explanations are valuable when KGEs are deployed in real applications, as they help improve the reliability of, and people's trust in, predicted results. The balance between achieving high prediction accuracy and giving explanations has already attracted research attention in other areas, such as recommender systems (Heckel et al., 2017; Wang et al., 2018).

We first introduce the motivation of our explanation scheme, followed by our embedding-based path-searching algorithm.

4.1. Background

Similar to inference chains and logical rules, meaningful paths from h to t can be regarded as explanations for a predicted triple (h, r, t). For example, in Figure 1, the fact that X is the father of M can be inferred from a two-step path from X to M.

In a rule of the form "premise ⇒ conclusion", the right-hand side of the implication is called the conclusion and the left-hand side the premise. The premise is an explanation for the conclusion. In the above example, the path from X to M is one explanation for the triple (X, isFatherOf, M).

Searching for paths between the head entity and the tail entity is the first step in giving explanations for a triple. Multiple works focus on mining such paths as rules or features for prediction, e.g., AMIE+ (Galárraga et al., 2015) and PRA (Lao et al., 2011), in which the paths are searched and pruned based on random walks and statistical significance. An important aspect of efficient path searching is the volume of the search space, and selecting candidate start entities and relations is the key to reducing it. Good embeddings can be useful for candidate selection with little effect on the path-searching results, because they are supposed to capture the similarity semantics of entities and relations. Thus giving explanations for predicted triples, by searching for reliable paths based on embeddings, not only improves the reliability of predicted results, but also provides a new perspective for evaluating embedding quality.

In this paper, the reliability of an explanation for a triple (h, r, t) is evaluated by the number of similar structures in the knowledge graph, on which such inferences are mainly based, as noted by (Nickel et al., 2016a). Similar structures contain the same relations but different entities.

Figure 3. Similar structures for the example subgraph in Figure 1.

For example, the left and right subgraphs in Figure 3 have similar structures, as each contains three entities, a triple, and a two-step path over the same relations. Thus the left subgraph is a support for the right subgraph's triple with its path explanation, and vice versa: they support the reasonable existence and path explanation of each other. In general, the more similar-structure supports an explanation has, the more reliable it is.

4.2. Embedding-based explanation search

During the embedding-based explanation search, we first select candidate entities and relations based on embedding similarity, to reduce the search space before generating explanations for a triple (h, r, t). The candidate selection is related to the quality of the embeddings and directly affects the final explanations. We assume the similarity of vector embeddings is related to the Euclidean distance, and that of matrix embeddings to the Frobenius norm. Based on the selected candidates, we then do an exhaustive search for explanations, which involves (1) searching for closed paths from h to t as explanations and (2) searching for similar structures of the explanations as supports. More specifically, there are four main steps:


  • Step 1: Search for similar relations. Output the top-k relations similar to r, denoted by the set R_s, as possible first steps for the path search. This step helps prune unreasonable paths. For example, a path that begins with a career-related relation does not indicate the relationship isFatherOf, even if it has many supports, because it is not relevant to the inference of isFatherOf. To avoid such meaningless paths, the search is constrained to begin with relations similar to r, which are more likely to describe the same aspect of an entity.

  • Step 2: Search for paths between h and t. Output a set of paths P. For simplicity, we consider paths including one or two relations, as in (Yang et al., 2015). Thus there are six types of paths, corresponding to the six similar structures shown in Table 2: a single relation, a single inverse relation, and four combinations of two (possibly inverse) relations through an intermediate entity. To search for paths of length two, we apply a bi-directional search strategy: we first find the entity set E_1 reachable from h in one step and the entity set E_2 that reaches t in one step, and then obtain the paths via the intersection entities of E_1 and E_2.

  • Step 3: Search for similar entities. Find the top-k entities similar to h, denoted by the set E_s. Then check whether a corresponding tail entity t' with (h', r, t') exists in the KG for each h' ∈ E_s. The tail-checking results depend on the quality of the selected similar entities; therefore, the more capable a KGE is of capturing similarity between entities, the more likely it is that such a t' exists.

  • Step 4: Search for similar structures as supports. Output supports for each path from Step 2 according to the similar entities from Step 3. If a similar entity h' ∈ E_s is connected to some t' by the same path and (h', r, t') exists in the KG, the path is an explanation for (h, r, t) and the structure around (h', t') is a support for this explanation. We only regard paths with at least one support in the knowledge graph as explanations.

We summarize the process of embedding-based explanation search in Algorithm 1.

Input:  Knowledge graph G, relation and entity embeddings R and E
Output:  Explanations for (h, r, t) and their supports
1:  Explanation set P ← ∅, Support set S ← ∅
2:  Select the set R_s of top-k relations similar to r
3:  Search the corresponding path set for each type: direct search for similar-structure types 1 and 2, and bidirectional search for types 3, 4, 5 and 6
4:  Select the set E_s of top-k entities similar to h
5:  for each path p found in step 3 do
6:     if some h' ∈ E_s is connected to an entity t' by path p and (h', r, t') ∈ G then
7:        add p to P and the matching similar structure to S
8:     end if
9:  end for
Algorithm 1 Search for explanations for a predicted triple (h, r, t)
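The bi-directional search at the core of Step 2 can be sketched as follows. The toy triples reuse the entity and relation names of the running example in Figure 1, with an invented triple (Z, isMotherOf, M) added so a two-hop path exists:

```python
# Bidirectional search for two-hop paths (r1, r2) from head h to tail t,
# sketched over a toy triple set. The (Z, isMotherOf, M) triple is an
# assumption made for illustration.
triples = {
    ("X", "hasWife", "Z"),
    ("Z", "isMotherOf", "M"),
    ("X", "worksAt", "Acme"),
}

def two_hop_paths(h, t, triples):
    """Return paths [(r1, e, r2)] such that (h, r1, e) and (e, r2, t) hold."""
    forward = {}   # intermediate entity -> relations reaching it from h
    for (s, r, o) in triples:
        if s == h:
            forward.setdefault(o, []).append(r)
    paths = []
    for (s, r, o) in triples:
        if o == t and s in forward:        # intersect the two frontiers
            for r1 in forward[s]:
                paths.append((r1, s, r))
    return paths

print(two_hop_paths("X", "M", triples))    # [('hasWife', 'Z', 'isMotherOf')]
```

Meeting in the middle this way avoids enumerating all two-hop walks from h, which is what makes the exhaustive search over the pruned candidate sets tractable.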

To make the notions of explanation and support clearer, Table 2 gives real examples of path explanations and their supports based on CrossE embeddings (Section 5.2 provides details about the implementation). For each type of path, we list a predicted triple with that kind of path explanation and one corresponding support from the embedding-based explanation search results of CrossE. We expect the reliability of a predicted triple to increase when end-users are also provided with explanations and their supports in the form of similar path structures.

5. Experimental Evaluation

We use three popular benchmark datasets: WN18 and FB15k, introduced in (Bordes et al., 2013), and FB15k-237, proposed by (Toutanova and Chen, 2015). They are subsets of either WordNet (Miller, 1995), a large lexical knowledge base of English, or Freebase (Bollacker et al., 2008), a huge knowledge graph describing general facts about the world. The details of these datasets are given in Table 1.

Dataset    #Entities  #Relations  Train Set  Validation Set  Test Set
WN18       40,943     18          141,442    5,000           5,000
FB15k      14,951     1,345       483,142    50,000          59,071
FB15k-237  14,541     237         272,115    17,535          20,466
Table 1. Dataset statistics.
Head entity Relation Tail entity
Type 1 Predicted triple Mel Gibson award nominations Best Director
Explanation Mel Gibson awards won Best Director
Support Vangelis award nominations Best Original Musical
Vangelis awards won Best Original Musical
Type 2 Predicted triple Aretha Franklin influenced Kings of Leon
Explanation Kings of Leon influenced by Aretha Franklin
Support Michael Jackson influenced Lady Gaga
Lady Gaga influenced by Michael Jackson
Type 3 Predicted triple Cayuga County containedby New York
Explanation Auburn capital of Cayuga County
Auburn containedby New York
Support Onondaga County containedby New York
Syracuse capital of Onondaga County
Syracuse containedby New York
Type 4 Predicted triple South Carolina country USA
Explanation Columbia state South Carolina
United States of America contains Columbia
Support Mississippi country USA
Jackson state Mississippi
United States of America contains Jackson
Type 5 Predicted triple World War I entity involved German Empire
Explanation World War I commanders Erich Ludendorff
Erich Ludendorff military commands German Empire
Support Falklands War entity involved United Kingdom
Falklands War commanders Margaret Thatcher
Margaret Thatcher military commands United Kingdom
Type 6 Predicted triple Northwestern University major field of study Computer Science
Explanation Northwestern University specialization Artificial intelligence
Computer Science specialization Artificial intelligence
Support Stockholm University major field of study Philosophy
Stockholm University specialization Political_philosoph
Philosophy specialization Political_philosoph
Table 2. Six similar structures with real examples of explanations and supports for predictions made by CrossE.

We now explain our evaluation on two main tasks: (a) link prediction, and (b) generating explanations for predicted triples.

5.1. Evaluation I: Link Prediction

In this section, we evaluate embeddings on canonical link prediction tasks, which contain two subtasks: tail entity prediction (h, r, ?) and head entity prediction (?, r, t).

5.1.1. Evaluation Metrics

The link prediction evaluation follows the same protocol as previous works, in which all entities in the dataset are candidate predictions. During head entity prediction for (?, r, t), we replace the head with each entity in the dataset and calculate the scores according to the score function (7). We then rank the scores in descending order and regard the rank of the correct entity h as the head entity prediction result. Tail entity prediction is done similarly.

Aggregating the head and tail entity prediction ranks of all test triples, we use three evaluation metrics: Mean Rank (MR), Mean Reciprocal Rank (MRR) and Hit@N. Hit@N is the proportion of test triples whose rank falls within the top N. MRR is the mean of the reciprocal ranks; compared with MR, it is more robust to extremely bad cases. Similar to most recent works, we evaluate CrossE on MRR, Hit@1, Hit@3 and Hit@10. We express both MRR and Hit@N scores as percentages.
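These metrics can be computed directly from the list of per-prediction ranks; the ranks below are invented for illustration:

```python
# Computing MR, MRR and Hit@N from a list of ranks (one rank per
# head/tail prediction of each test triple); the ranks are made up.
ranks = [1, 3, 2, 15, 1, 120, 4, 1]

mr = sum(ranks) / len(ranks)                                  # Mean Rank
mrr = 100 * sum(1.0 / r for r in ranks) / len(ranks)          # MRR, as a percentage
hit_at_10 = 100 * sum(r <= 10 for r in ranks) / len(ranks)    # Hit@10, as a percentage

assert hit_at_10 == 75.0   # 6 of the 8 ranks fall within the top 10
```

The single rank of 120 dominates MR here while barely moving MRR, which is exactly why MRR is the more robust headline number.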

We also apply the filter and raw settings. In the filter setting, we filter out all candidate triples contained in the train, test or validation datasets before ranking, as they are not negative triples. Raw is the setting without filtering.
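The difference between the two settings amounts to masking known-true candidates before counting how many outscore the correct entity. The scores and known-triple set below are invented:

```python
import numpy as np

# Filtered vs. raw tail-entity ranking: candidates that form known true
# triples (other than the test triple itself) are removed before ranking.
scores = np.array([0.9, 0.7, 0.6, 0.8, 0.1])  # score of each candidate tail
correct = 2                                    # index of the true tail entity
known_true = {0, 3}                            # other tails known to be valid

mask = np.ones_like(scores, dtype=bool)
mask[list(known_true - {correct})] = False      # keep the test triple itself

raw_rank = 1 + int(np.sum(scores > scores[correct]))
filtered_rank = 1 + int(np.sum(scores[mask] > scores[correct]))

assert raw_rank == 4 and filtered_rank == 2    # filtering can only help
```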

5.1.2. Implementation Details

E, R and C are initialized from a uniform distribution, as suggested in (Glorot and Bengio, 2010). b is initialized to zero. The positive samples for a training triple (h, r, ?) are generated by retrieving all t with (h, r, t) ∈ T. Negative triples are generated randomly by sampling entities t' such that (h, r, t') ∉ T. Head entity prediction (?, r, t) is transformed into (t, r⁻¹, ?), in which r⁻¹ is the inverse relation of r. We generate an inverse triple (t, r⁻¹, h) for each (h, r, t) during training, as done in (Neelakantan et al., 2015; Lin et al., 2015a).

We implement our model using TensorFlow with the Adam optimizer (Kingma and Ba, 2014), and apply dropout (Srivastava et al., 2014) to the similarity operator in (6). The maximum number of training iterations is the same for all datasets. The hyperparameters of CrossE (number of negative examples, learning rate, embedding dimension d, regularization parameter λ, and batch size) are tuned separately for WN18, FB15k and FB15k-237. The simplified variant of CrossE is trained with the same parameters.

5.1.3. Link Prediction Results

#parameters WN18 FB15k
Unstructured (Bordes et al., 2014a) 38.2 6.3
RESCAL(Nickel et al., 2011) 52.8 44.1
NTN (Socher et al., 2013) 66.1 41.4
SE (Bordes et al., 2011) 80.5 39.8
LFM (Jenatton et al., 2012) 81.6 33.1
TransH(Wang et al., 2014) 86.7 64.4
TransE(Bordes et al., 2013) 89.2 47.1
TransR(Lin et al., 2015b) 92.0 68.4
RTransE (García-Durán et al., 2015) - 76.2
TransD(Ji et al., 2015) 92.2 77.3
CTransR (Lin et al., 2015b) 92.3 70.3
KG2E (He et al., 2015) 93.2 74.0
STransE (Nguyen et al., 2016) 93.4 79.7
DistMult (Yang et al., 2015) 93.6 78.3
TranSparse (Ji et al., 2016) 93.9 79.9
PTransE-MUL (Lin et al., 2015a) - 77.7
PTransE-RNN (Lin et al., 2015a) - 82.2
PTransE-ADD (Lin et al., 2015a) - 84.6
ComplEx (Trouillon et al., 2016) 94.7 84.0
ANALOGY (Liu et al., 2017) 94.7 85.4
HolE (Nickel et al., 2016b) 94.9 73.9
CrossE 95.0 87.5
CrossE (w/o interactions) 87.3 72.7
Table 3. Hit@10 (filter) results of 22 KGEs on WN18 and FB15k. "-" indicates results missing from the original paper. Boldface scores are the best results among all methods. Underlined scores are the better ones between CrossE and its simplified variant without crossover interactions. The #parameters column expresses each model's parameter count in terms of the number of entities and relations, the embedding dimension, the number of hidden nodes of a neural network, and the sparsity degree of matrices.

In Table 3, we show the results of CrossE and 21 baselines with their published Hit@10 (filter) results on WN18 and FB15k, taken from the original papers. (We follow the established practice in the KGE literature of comparing link-prediction performance with published results on the same benchmarks.) This is the most widely applied evaluation metric on the most commonly used datasets in prior works, and we want to compare CrossE with as many baselines as possible. For a fair comparison, models utilizing external information, such as text, are not considered as baselines.

All CrossE results that are significantly different from the second-best results are marked accordingly in the tables. We used a one-sample proportion test at the 5% significance level. (Similar to (Liu et al., 2017), we conducted the proportion tests on the Hit@k scores but not on MRR, as proportion tests cannot be applied to non-proportional scores such as MRR.)

In Table 4, we compare CrossE with seven baseline methods whose MRR, Hit@1, and Hit@3 results are available. The results of TransE, DistMult, and ComplEx are from (Trouillon et al., 2016), RESCAL and HolE from (Nickel et al., 2016b), ANALOGY from (Liu et al., 2017), and R-GCN from (Schlichtkrull et al., 2018).

WN18 FB15k
MRR Hit@ MRR Hit@
filter/raw 1 3 filter/raw 1 3
RESCAL(Nickel et al., 2011) 89.0 / 60.3 84.2 90.4 35.4 /18.9 23.5 40.9
TransE(Bordes et al., 2013) 45.5 / 33.5 8.9 82.3 38.0 / 22.1 23.1 47.2
DistMult(Yang et al., 2015) 82.2 / 53.2 72.8 91.4 65.4 / 24.2 54.6 73.3
HolE (Nickel et al., 2016b) 93.8 / 61.6 93.0 94.5
ComplEx(Trouillon et al., 2016) 94.1 / 58.7 93.6 94.5 69.2 / 24.2 59.9 75.9
ANALOGY(Liu et al., 2017) 94.2/65.7 93.9 94.4 72.5 / 25.3 64.6 78.5
R-GCN (Schlichtkrull et al., 2018) 81.9 / 56.1 69.7 92.9 69.6 / 26.2 60.1 76.0
CrossE 83.0 / 57.0 74.1 93.1 72.8/26.7 63.4 80.2
CrossE (w/o interactions) 46.9 / 39.6 21.7 70.6 46.4 / 25.4 28.4 61.9
Table 4. Link prediction results on WN18 and FB15k.

In Table 5, we separately show the link prediction results of CrossE on FB15k-237, a recently proposed and more challenging dataset, together with as many baselines as we could gather. The results of DistMult, Node+LinkFeat, and Neural LP are from (Yang et al., 2017), and those of R-GCN and R-GCN+ from the original paper (Schlichtkrull et al., 2018). For ComplEx and ANALOGY, we use the code published with (Liu et al., 2017) and determine the best parameters by grid search over the embedding dimension and regularizer weight, with six negative samples as in the ANALOGY paper (Liu et al., 2017); the search covers the same range of values used in the original papers. The best parameters found in this search are then used for both ComplEx and ANALOGY.

MRR MRR Hit@1 Hit@3 Hit@10
(raw) (filter) (filter) (filter) (filter)
DistMult(Yang et al., 2015) - 25.0 - - 40.8
Node+LinkFeat (Toutanova and Chen, 2015) - 23.0 - - 34.7
Neural LP(Yang et al., 2017) - 24.0 - - 36.2
R-GCN (Schlichtkrull et al., 2018) 15.8 24.8 15.3 25.8 41.4
R-GCN+ (Schlichtkrull et al., 2018) 15.6 24.9 15.1 26.4 41.7
ComplEx (Trouillon et al., 2016) 12.0 22.1 13.2 24.4 40.8
ANALOGY (Liu et al., 2017) 11.8 21.9 13.1 24.0 40.5
CrossE 17.7 29.9 21.1 33.1 47.4
CrossE (w/o interactions) 6.4 11.0 6.7 11.7 19.8
Table 5. Link prediction results on FB15k-237.

For WN18 (Table 3), CrossE achieves Hit@10 results that are comparable to the best baselines. On the same dataset (Table 4), CrossE achieves better results than the majority of baselines on MRR, Hit@1, and Hit@3. With only 18 relations, WN18 is a simpler dataset than FB15k. We can see in Table 4 that every method performs well on WN18, and much better than on FB15k: for example, the Hit@3 scores of most methods on WN18 are higher than the best Hit@3 score on FB15k.

For FB15k (Table 3 and Table 4), we see that CrossE achieves state-of-the-art results on the majority of evaluation metrics, including MRR, Hit@3, and Hit@10. These results support that CrossE better encodes the diverse relations in a knowledge graph, since FB15k, with 1,345 relations, is a complex linked dataset and more challenging than WN18. Compared to ANALOGY, which achieves the best results on Hit@1, CrossE performs better on Hit@3 and Hit@10, two metrics that we think are better suited to datasets with diverse types of relations. FB15k contains 1-to-1, 1-to-many, many-to-1, and many-to-many relations, and under the Open World Assumption the number of correct answers for link prediction on 1-to-many, many-to-1, and many-to-many relations may be more than one even under the filter setting.

For FB15k-237 (Table 5), CrossE achieves state-of-the-art results with significant improvements over all baselines on all evaluation metrics. Compared to FB15k, FB15k-237 removes the redundant triples that cause inverse-relation information leakage, which can be easily exploited by simpler approaches (Toutanova and Chen, 2015). Thus, without properly encoding the diverse semantics of the knowledge graph, a KGE cannot achieve good performance on FB15k-237. The significant improvement achieved by CrossE indicates that it is more capable of capturing and utilizing this complex semantics during prediction.

Figure 4. Evaluation results on generating explanations with different KGEs. Figure 4(a) shows Recall and AvgSupport for all three methods with fixed numbers of selected similar entities and relations. Figure 4(b) shows Recall and AvgSupport for CrossE as the number of selected similar relations increases. Figure 4(c) compares AvgSupport across methods as the selection size grows.

Compared to its simplified variant without crossover interactions, CrossE performs much better on all three datasets (Table 3, Table 4, and Table 5). As the only difference between the two models is whether crossover interactions between entities and relations are modeled, the large performance gap shows the importance of modeling crossover interactions, which are common during inference on knowledge graphs with diverse topics.

To show which types of relations CrossE encodes better, Table 6 compares Hit@10 (filter) results on FB15k after mapping relations to the types 1-to-1, 1-to-many, many-to-1, and many-to-many, abbreviated as 1-1, 1-N, N-1, and N-N respectively. The rule for categorizing relation types follows (Bordes et al., 2013).

1-1 1-N N-1 N-N
(head/tail) (head/tail) (head/tail) (head/tail)
Unstructured(Bordes et al., 2014b) 34.5/34.3 2.5/4.2 6.1/1.9 6.6/6.6
SE(Bordes et al., 2011) 35.6/34.9 62.6/14.6 17.2/68.3 37.5/41.3
SME(linear)(Bordes et al., 2014b) 35.1/32.7 53.7/14.9 19.0/61.6 40.3/43.3
SME(Bilinear)(Bordes et al., 2014b) 30.9/28.2 69.6/13.1 19.9/76.0 38.6/41.8
TransE(Bordes et al., 2013) 43.7/43.7 65.7/19.7 18.2/66.7 47.2/50.0
TransH(Wang et al., 2014) 66.8/65.5 87.6/39.8 28.7/83.3 64.5/67.2
TransD(Ji et al., 2015) 86.1/85.4 95.5/50.6 39.8/94.4 78.5/81.2
TransR(Lin et al., 2015b) 78.8/79.2 89.2/37.4 34.1/90.4 69.2/72.1
CTransR(Lin et al., 2015b) 81.5/80.8 89.0/38.6 34.7/90.1 71.2/73.8
CrossE 88.2/87.7 95.7/75.1 64.2/92.3 88.1/90.8
CrossE (w/o interactions) 78.6/81.6 85.1/54.2 45.3/85.8 71.7/76.7
Table 6. Hit@10 on FB15k by mapping to different relation types.

From Table 6, we see that CrossE significantly outperforms all other embedding methods except on tail prediction for N-1 relations. On the more difficult tasks with multiple correct answers, including head prediction for N-1 relations, tail prediction for 1-N relations, and both head and tail prediction for N-N relations, CrossE achieves significant improvements on average. In conclusion, CrossE performs more stably than other methods across different types of relations.
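The categorization rule of (Bordes et al., 2013) computes, for each relation, the average number of tails per head and heads per tail, and labels a side "many" when its average exceeds 1.5. A sketch of that rule (function names are our own):

```python
from collections import defaultdict

def relation_types(triples, threshold=1.5):
    """Categorize relations as 1-1, 1-N, N-1, or N-N following
    Bordes et al. (2013): per relation, compute the average number of
    tails per head (tph) and heads per tail (hpt); an average above
    `threshold` counts as "many"."""
    tails_of = defaultdict(lambda: defaultdict(set))  # r -> h -> {t}
    heads_of = defaultdict(lambda: defaultdict(set))  # r -> t -> {h}
    for h, r, t in triples:
        tails_of[r][h].add(t)
        heads_of[r][t].add(h)
    types = {}
    for r in tails_of:
        tph = sum(len(ts) for ts in tails_of[r].values()) / len(tails_of[r])
        hpt = sum(len(hs) for hs in heads_of[r].values()) / len(heads_of[r])
        label = {(False, False): "1-1", (False, True): "1-N",
                 (True, False): "N-1", (True, True): "N-N"}
        types[r] = label[(hpt > threshold, tph > threshold)]
    return types
```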

5.2. Evaluation II: Generating Explanations

5.2.1. Evaluation

Aggregating all path explanations and their supports from the embedding results, we evaluate the capability of a KGE to generate explanations from two perspectives: (1) the fraction of triples (out of all test triples) for which the KGE can give explanations (Recall), and (2) the average support among the triples for which it can find explanations (AvgSupport). We argue that the higher the AvgSupport for a triple, the more reliable its explanation will be.

Generally, a KGE that can generate better explanations will achieve higher Recall and AvgSupport when selecting the same number of similar entities and relations.
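These two metrics can be computed directly from per-triple support counts; a minimal sketch (the input representation is our own):

```python
def explanation_metrics(support_per_triple):
    """Compute (Recall, AvgSupport) from per-triple support counts.

    support_per_triple: dict mapping each test triple to the number of
    supporting similar structures found for its explanations (0 when no
    explanation was found).
    Recall is the fraction of triples with at least one explanation;
    AvgSupport is the average support over exactly those triples.
    """
    explained = [s for s in support_per_triple.values() if s > 0]
    recall = len(explained) / len(support_per_triple)
    avg_support = sum(explained) / len(explained) if explained else 0.0
    return recall, avg_support
```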

5.2.2. Experimental Details

For the explanation experiment, we use the FB15k dataset. We choose two KGEs as baselines: the popular translation-based embedding method TransE, and the linear-mapping-based embedding method ANALOGY, which achieves some of the best results on link prediction tasks.

The CrossE embeddings used for searching explanations are the same ones used in the link prediction experiment. We re-implement TransE, and the implementation of ANALOGY is from (Liu et al., 2017). During similar entity and relation selection, we use the embeddings of head entities and relations as they appear in specific triples: interaction embeddings for CrossE, and general embeddings for TransE and ANALOGY.

5.2.3. Explanation Results

The results are summarized in Figure 4. In Figure 4(a), we see that when selecting ten similar entities and three similar relations, the three methods differ in both Recall and AvgSupport. ANALOGY achieves the best Recall, but it can give only a few supporting examples for each explanation, as quantified by its AvgSupport. TransE achieves the lowest Recall, but its AvgSupport per triple is about 10 times that of ANALOGY. CrossE achieves the second-best Recall and a much higher AvgSupport than ANALOGY. From the perspective of giving reliable explanations for predicted triples, CrossE therefore outperforms both baselines.

The explanation and similar-structure search are based on the results of similar entity and relation selection. CrossE generates multiple crossover embeddings for entities and relations, so under these interaction embeddings, the similar items selected for an entity or relation differ depending on the triple it is involved in. This makes CrossE more effective at selecting similar items, and thus better at giving explanations. With general embeddings, the selected similar items are always the same, regardless of the triple-specific context. In our opinion, this is why the AvgSupport of CrossE is much higher than that of TransE and ANALOGY.
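The similar-item selection underlying this comparison is a top-k nearest-neighbor search in embedding space; a sketch using cosine similarity (the similarity measure here is illustrative, not necessarily the one used in the paper). With general embeddings, the query for an entity h is always its single embedding; with CrossE, the query is the triple-specific interaction embedding c_r ∘ h, so the neighbors change with the relation:

```python
import numpy as np

def top_k_similar(query_vec, embeddings, k):
    """Return indices of the k rows of `embeddings` most similar to
    `query_vec` under cosine similarity."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query_vec)
    sims = embeddings @ query_vec / np.maximum(norms, 1e-12)
    return list(np.argsort(-sims)[:k])
```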

In Figure 4(b), Recall increases slightly when more similar relations are selected, while AvgSupport increases substantially; the same holds when increasing the number of selected similar entities. Figure 4(c) shows that AvgSupport for CrossE grows much faster than for TransE and ANALOGY. This further demonstrates the effectiveness of CrossE at selecting similar entities and relations.

To determine which types of paths and similar structures are easier for the compared models to find, Figure 5 shows, for each model, the share of AvgSupport contributed by each type of similar structure.

Figure 5. Share of AvgSupport for six similar structure types.

We can see that among all types of similar structures, type 5 is the one for which both TransE and CrossE have the most AvgSupport. In our view, type 5 is the most natural kind of path, in which the two relations along the path point in the same direction as the relation between the head and tail entity; such paths are therefore more likely to be constructed when building a knowledge graph. Although the shares of types 1 and 2 are high for ANALOGY, their absolute AvgSupport values are very low (Figure 4).

In summary, the design of KGE models and their vector-space assumptions affect the types of path explanations they can provide.

From these two evaluation tasks, we conclude that the capabilities of a KGE for link prediction and for explanation are not directly related: a method that performs well on link prediction is not necessarily good at giving explanations. Balancing prediction accuracy against the ability to give explanations is therefore important.

6. Conclusion

In this paper, we described CrossE, a new knowledge graph embedding method. CrossE successfully captures crossover interactions between entities and relations when modeling knowledge graphs and achieves state-of-the-art results on link prediction over complex linked datasets. We believe that improving the reliability of embedding methods is as important as achieving high-accuracy prediction, and this work is a first step toward explaining prediction results. Much work on explanations remains, such as enabling KGEs to give explanations for all predicted triples. In future work, we will focus on improving the capability of KGEs both to predict missing triples and to give more reliable explanations.

This work is funded by NSFC 61673338/61473260, and supported by the Alibaba-Zhejiang University Joint Institute of Frontier Technologies and the SNF Sino-Swiss Science and Technology Cooperation Programme under contract RiC 01-032014.


  • Bollacker et al. (2008) Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. Proceedings of SIGMOD (2008), 1247–1250.
  • Bordes et al. (2014a) Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014a. A semantic matching energy function for learning with multi-relational data - Application to word-sense disambiguation. Machine Learning 94, 2 (2014), 233–259.
  • Bordes et al. (2014b) Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014b. A semantic matching energy function for learning with multi-relational data - Application to word-sense disambiguation. Machine Learning 94, 2 (2014), 233–259.
  • Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. Proceedings of NIPS (2013), 2787–2795.
  • Bordes et al. (2011) Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embeddings of Knowledge Bases. Proceedings of AAAI (2011).
  • Galárraga et al. (2015) Luis Galárraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2015. Fast rule mining in ontological knowledge bases with AMIE+. VLDB J. 24, 6 (2015), 707–730.
  • García-Durán et al. (2015) Alberto García-Durán, Antoine Bordes, and Nicolas Usunier. 2015. Composing Relationships with Translations. Proceedings of EMNLP (2015), 286–290.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. Proceedings of AISTATS (2010), 249–256.
  • He et al. (2015) Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to Represent Knowledge Graphs with Gaussian Embedding. Proceedings of CIKM (2015), 623–632.
  • Heckel et al. (2017) Reinhard Heckel, Michail Vlachos, Thomas P. Parnell, and Celestine Dünner. 2017. Scalable and Interpretable Product Recommendations via Overlapping Co-Clustering. Proceedings of ICDE (2017), 1033–1044.
  • Hinton et al. (1986) G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. 1986. Distributed Representations. Parallel distributed processing: explorations in the microstructure of cognition (1986).
  • Jenatton et al. (2012) Rodolphe Jenatton, Nicolas Le Roux, Antoine Bordes, and Guillaume Obozinski. 2012. A latent factor model for highly multi-relational data. Proceedings of NIPS (2012), 3176–3184.
  • Ji et al. (2015) Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. Proceedings of ACL (2015), 687–696.
  • Ji et al. (2016) Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge Graph Completion with Adaptive Sparse Transfer Matrix. Proceedings of AAAI (2016), 985–991.
  • Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. CoRR abs/1412.6980 (2014).
  • Krompaß et al. (2015) Denis Krompaß, Stephan Baier, and Volker Tresp. 2015. Type-Constrained Representation Learning in Knowledge Graphs. Proceedings of ISWC (2015), 640–655.
  • Lao et al. (2011) Ni Lao, Tom M. Mitchell, and William W. Cohen. 2011. Random Walk Inference and Learning in A Large Scale Knowledge Base. In EMNLP. ACL, 529–539.
  • Lin et al. (2015a) Yankai Lin, Zhiyuan Liu, Huan-Bo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling Relation Paths for Representation Learning of Knowledge Bases. Proceedings of EMNLP (2015), 705–714.
  • Lin et al. (2015b) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning Entity and Relation Embeddings for Knowledge Graph Completion. Proceedings of AAAI (2015), 2181–2187.
  • Liu et al. (2017) Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical Inference for Multi-relational Embeddings. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017. 2168–2178.
  • Miller (1995) George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM 38, 11 (1995), 39–41.
  • Neelakantan et al. (2015) Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional Vector Space Models for Knowledge Base Completion. Proceedings of ACL (2015), 156–166.
  • Nguyen et al. (2016) Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. STransE: a novel embedding model of entities and relationships in knowledge bases. In HLT-NAACL. The Association for Computational Linguistics, 460–466.
  • Nickel et al. (2016a) Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016a. A Review of Relational Machine Learning for Knowledge Graphs. Proc. IEEE 104, 1 (2016), 11–33.
  • Nickel et al. (2016b) Maximilian Nickel, Lorenzo Rosasco, and Tomaso A. Poggio. 2016b. Holographic Embeddings of Knowledge Graphs. Proceedings of AAAI (2016), 1955–1961.
  • Nickel et al. (2011) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A Three-Way Model for Collective Learning on Multi-Relational Data. Proceedings of ICML (2011), 809–816.
  • Ristoski and Paulheim (2016) Petar Ristoski and Heiko Paulheim. 2016. RDF2Vec: RDF Graph Embeddings for Data Mining. Proceedings of ISWC (2016), 498–514.
  • Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling Relational Data with Graph Convolutional Networks. Proceedings of ESWC (2018).
  • Shi and Weninger (2017) Baoxu Shi and Tim Weninger. 2017. ProjE: Embedding Projection for Knowledge Graph Completion. Proceedings of AAAI (2017), 1236–1242.
  • Socher et al. (2013) Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. Proceedings of NIPS (2013), 926–934.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15 (2014), 1929–1958.
  • Suchanek et al. (2007) Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. Proceedings of WWW (2007), 697–706.
  • Szumlanski and Gomez (2010) Sean R. Szumlanski and Fernando Gomez. 2010. Automatically acquiring a semantic network of related concepts. (2010), 19–28.
  • Toutanova and Chen (2015) Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. 57–66.
  • Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex Embeddings for Simple Link Prediction. Proceedings of ICML (2016), 2071–2080.
  • Wang et al. (2015) Quan Wang, Bin Wang, and Li Guo. 2015. Knowledge Base Completion Using Embeddings and Rules. Proceedings of IJCAI (2015), 1859–1866.
  • Wang et al. (2018) Xiang Wang, Xiangnan He, Fuli Feng, Liqiang Nie, and Tat-Seng Chua. 2018. TEM: Tree-enhanced Embedding Model for Explainable Recommendation. In Proceedings of WWW. 1543–1552.
  • Wang et al. (2014) Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. Proceedings of AAAI (2014), 1112–1119.
  • Xiao et al. (2016) Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. TransG : A Generative Model for Knowledge Graph Embedding. Proceedings of ACL (2016).
  • Xie et al. (2016) Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation Learning of Knowledge Graphs with Hierarchical Types. Proceedings of IJCAI (2016), 2965–2971.
  • Yang et al. (2015) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. Proceedings of ICLR (2015).
  • Yang et al. (2017) Fan Yang, Zhilin Yang, and William W. Cohen. 2017. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. In NIPS. 2316–2325.
  • Yin et al. (2016) Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural Generative Question Answering. In Proceedings of IJCAI.
  • Zhang (2017) Wen Zhang. 2017. Knowledge Graph Embedding with Diversity of Structures. Proceedings of WWW Companion (2017), 747–753.