SimplE Embedding for Link Prediction in Knowledge Graphs

02/13/2018 · Seyed Mehran Kazemi et al. · The University of British Columbia

The aim of knowledge graphs is to gather knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs are far from complete. To address the incompleteness of the knowledge graphs, link prediction approaches have been developed which make probabilistic predictions about new links in a knowledge graph given the existing links. Tensor factorization approaches have proven promising for such link prediction problems. In this paper, we develop a simple tensor factorization model called SimplE, through a slight modification of the Polyadic Decomposition model from 1927. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of expert knowledge in terms of logical rules can be incorporated into these embeddings through weight tying. We prove SimplE is fully-expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques.


1 Introduction

During the past two decades, several knowledge graphs (KGs) containing (perhaps probabilistic) facts about the world have been constructed. These KGs have applications in several fields including search, question answering, natural language processing, recommendation systems, etc. Due to the enormous number of facts that could be asserted about our world and the difficulty in accessing and storing all these facts, KGs are incomplete. However, it is possible to predict new links in a KG based on the existing ones.

Link prediction and several other related problems aiming at reasoning with entities and relationships are studied under the umbrella of statistical relational learning (SRL) Getoor and Taskar (2007); Nickel et al. (2016a); De Raedt et al. (2016). The problem of link prediction for KGs is also known as knowledge graph completion. A KG can be represented as a set of (head, relation, tail) triples, sometimes written as $r(h, t)$ or $\langle h, r, t \rangle$. The problem of KG completion can be viewed as predicting new triples based on the existing ones.

Tensor factorization approaches have proved to be an effective SRL approach for KG completion Nickel et al. (2012); Bordes et al. (2013b); Trouillon et al. (2016); Nguyen et al. (2016). These approaches consider embeddings for each entity and each relation. To predict whether a triple holds, they use a function which takes the embeddings for the head and tail entities and the relation as input and outputs a number indicating the predicted probability. Details and discussions of these approaches can be found in several recent surveys Nguyen (2017); Wang et al. (2017).

One of the first tensor factorization approaches is the canonical Polyadic (CP) decomposition Hitchcock (1927). This approach learns one embedding vector for each relation and two embedding vectors for each entity, one to be used when the entity is the head and one to be used when the entity is the tail. The head embedding of an entity is learned independently of (and is unrelated to) its tail embedding. This independence has caused CP to perform poorly for KG completion Trouillon et al. (2017). In this paper, we develop a tensor factorization approach based on CP that addresses the independence among the two embedding vectors of the entities. Due to the simplicity of our model, we call it SimplE (Simple Embedding).

We show that SimplE: 1- can be considered a bilinear model, 2- is fully expressive, 3- is capable of encoding background knowledge into its embeddings through parameter sharing (aka weight tying), and 4- performs very well empirically despite (or maybe because of) its simplicity. We also discuss several disadvantages of other existing approaches. We prove that many existing translational approaches (see e.g., Bordes et al. (2013b); Ji et al. (2015); Wang et al. (2014); Nguyen et al. (2016)) are not fully expressive and we identify severe restrictions on what they can represent. We also show that the function used in ComplEx Trouillon et al. (2016, 2017), a state-of-the-art approach for link prediction, involves redundant computations.

2 Background and Notation

We represent vectors with lowercase letters and matrices with uppercase letters. Let $v, w, x \in \mathbb{R}^d$ be vectors of length $d$. We define $\langle v, w, x \rangle \doteq \sum_{j=1}^{d} v_j w_j x_j$, where $v_j$, $w_j$, and $x_j$ represent the $j$-th element of $v$, $w$, and $x$ respectively. That is, $\langle v, w, x \rangle = (v \odot w) \cdot x$, where $\odot$ represents element-wise (Hadamard) multiplication and $\cdot$ represents dot product. $I_d$ represents an identity matrix of size $d \times d$. $[v; w; x]$ represents the concatenation of vectors $v$, $w$, and $x$.
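
A minimal NumPy sketch of this triple product (with arbitrary example values) shows that the explicit sum and the $(v \odot w) \cdot x$ form agree:

```python
import numpy as np

v = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 4.0, 1.0])
x = np.array([2.0, 1.0, -1.0])

# <v, w, x> as an explicit sum of element-wise products
triple_sum = np.sum(v * w * x)

# Equivalent form: (v ⊙ w) · x
triple_dot = np.dot(v * w, x)

assert np.isclose(triple_sum, triple_dot)
print(triple_sum)
```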

Let $\mathcal{E}$ and $\mathcal{R}$ represent the set of entities and relations respectively. A triple is represented as $(h, r, t)$, where $h \in \mathcal{E}$ is the head, $r \in \mathcal{R}$ is the relation, and $t \in \mathcal{E}$ is the tail of the triple. Let $\zeta$ represent the set of all triples that are true in a world, and $\zeta'$ represent the ones that are false. A knowledge graph $\mathcal{KG}$ is a subset of $\zeta$. A relation $r$ is reflexive on a set $\Delta \subseteq \mathcal{E}$ of entities if $(e, r, e) \in \zeta$ for all entities $e \in \Delta$. A relation $r$ is symmetric on $\Delta$ if $(e_1, r, e_2) \in \zeta \Leftrightarrow (e_2, r, e_1) \in \zeta$ for all pairs of entities $e_1, e_2 \in \Delta$, and is anti-symmetric if $(e_1, r, e_2) \in \zeta \Rightarrow (e_2, r, e_1) \in \zeta'$. A relation $r$ is transitive on $\Delta$ if $(e_1, r, e_2) \in \zeta \wedge (e_2, r, e_3) \in \zeta \Rightarrow (e_1, r, e_3) \in \zeta$ for all $e_1, e_2, e_3 \in \Delta$. The inverse of a relation $r$, denoted $r^{-1}$, is a relation such that for any two entities $e_1$ and $e_2$, $(e_1, r, e_2) \in \zeta \Leftrightarrow (e_2, r^{-1}, e_1) \in \zeta$.

An embedding is a function from an entity or a relation to one or more vectors or matrices of numbers. A tensor factorization model defines two things: 1- the embedding functions for entities and relations, 2- a function taking the embeddings for $h$, $r$, and $t$ as input and generating a prediction of whether $(h, r, t)$ is in $\zeta$ or not. The values of the embeddings are learned using the triples in a $\mathcal{KG}$. A tensor factorization model is fully expressive if, given any ground truth (full assignment of truth values to all triples), there exists an assignment of values to the embeddings of the entities and relations that accurately separates the correct triples from the incorrect ones.

3 Related Work

Translational Approaches define additive functions over embeddings. In many translational approaches, the embedding for each entity $e$ is a single vector $v_e \in \mathbb{R}^d$ and the embedding for each relation $r$ is a vector $v_r \in \mathbb{R}^d$ and two matrices $P_r, Q_r \in \mathbb{R}^{d \times d}$. The dissimilarity function for a triple $(h, r, t)$ is defined as $\|P_r v_h + v_r - Q_r v_t\|_i$ (i.e. encouraging $P_r v_h + v_r \approx Q_r v_t$), where $\|v\|_i$ represents norm $i$ of vector $v$. Translational approaches having this dissimilarity function usually differ on the restrictions they impose on $P_r$ and $Q_r$. In TransE Bordes et al. (2013b), $P_r$ and $Q_r$ are identity matrices. In TransR Lin et al. (2015b), $P_r = Q_r$. In STransE Nguyen et al. (2016), no restrictions are imposed on the matrices. FTransE Feng et al. (2016) slightly changes the dissimilarity function, defining it as $\|P_r v_h + v_r - \alpha Q_r v_t\|_i$ for a value of $\alpha$ that minimizes the norm for each triple. In the rest of the paper, we let FSTransE represent the FTransE model where no restrictions are imposed over $P_r$ and $Q_r$.

Multiplicative Approaches define product-based functions over embeddings. DistMult Yang et al. (2015), one of the simplest multiplicative approaches, considers the embeddings for each entity $e$ and each relation $r$ to be $v_e \in \mathbb{R}^d$ and $v_r \in \mathbb{R}^d$ respectively and defines its similarity function for a triple $(h, r, t)$ as $\langle v_h, v_r, v_t \rangle$. Since DistMult does not distinguish between head and tail entities, it can only model symmetric relations. ComplEx Trouillon et al. (2016) extends DistMult by considering complex-valued instead of real-valued vectors for entities and relations. For each entity $e$, let $re(v_e)$ and $im(v_e)$ represent the real and imaginary parts of the embedding for $e$. For each relation $r$, let $re(v_r)$ and $im(v_r)$ represent the real and imaginary parts of the embedding for $r$. Then the similarity function of ComplEx for a triple $(h, r, t)$ is defined as $Re(\langle v_h, v_r, \overline{v_t} \rangle)$, where $\overline{v_t}$ is the complex conjugate of $v_t$ and $Re(\cdot)$ takes the real part of a complex number. One can easily verify that the function used by ComplEx can be expanded and written as $\langle re(v_h), re(v_r), re(v_t) \rangle + \langle re(v_h), im(v_r), im(v_t) \rangle + \langle im(v_h), re(v_r), im(v_t) \rangle - \langle im(v_h), im(v_r), re(v_t) \rangle$. In RESCAL Nickel et al. (2011), the embedding vector for each entity $e$ is $v_e \in \mathbb{R}^d$ and for each relation $r$ is a matrix $V_r \in \mathbb{R}^{d \times d}$, and the similarity function for a triple $(h, r, t)$ is $vec(v_h \otimes v_t) \cdot vec(V_r)$, where $\otimes$ represents the outer product of two vectors and $vec(\cdot)$ vectorizes the input matrix. HolE Nickel et al. (2016b) is a multiplicative model that is isomorphic to ComplEx Hayashi and Shimbo (2017).
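
To make these scoring functions concrete, the following NumPy sketch (with arbitrary illustrative embeddings) computes the DistMult score and the ComplEx score in both its complex-valued and expanded real-valued forms:

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

# DistMult: one real vector per entity and per relation
v_h, v_r, v_t = rng.normal(size=(3, d))
distmult_score = np.sum(v_h * v_r * v_t)  # <v_h, v_r, v_t>

# ComplEx: complex-valued vectors; score is Re(<v_h, v_r, conj(v_t)>)
re_h, im_h, re_r, im_r, re_t, im_t = rng.normal(size=(6, d))
c_h, c_r, c_t = re_h + 1j * im_h, re_r + 1j * im_r, re_t + 1j * im_t
complex_score = np.real(np.sum(c_h * c_r * np.conj(c_t)))

# Expanded real-valued form of the same ComplEx score (four triple products)
expanded = (np.sum(re_h * re_r * re_t) + np.sum(re_h * im_r * im_t)
            + np.sum(im_h * re_r * im_t) - np.sum(im_h * im_r * re_t))
assert np.isclose(complex_score, expanded)
```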

Deep Learning Approaches generally use a neural network that learns how the head, relation, and tail embeddings interact. E-MLP Socher et al. (2013) considers the embedding for each entity $e$ to be a vector $v_e \in \mathbb{R}^d$, and for each relation $r$ to be a matrix $M_r$ and a vector $v_r$. To make a prediction about a triple $(h, r, t)$, E-MLP feeds $[v_h; v_t]$ into a two-layer neural network whose weights for the first layer are the matrix $M_r$ and for the second layer are $v_r$. ER-MLP Dong et al. (2014) considers the embeddings for both entities and relations to be single vectors and feeds $[v_h; v_r; v_t]$ into a two-layer neural network. In Santoro et al. (2017), once the entity vectors are provided by a convolutional neural network and the relation vector is provided by a long short-term memory network, for each triple the vectors are concatenated similarly to ER-MLP and are fed into a four-layer neural network. Neural tensor networks (NTN) Socher et al. (2013) combine E-MLP with several bilinear parts (see Subsection 5.4 for a definition of bilinear models).

4 SimplE: A Simple Yet Fully Expressive Model

In canonical Polyadic (CP) decomposition Hitchcock (1927), the embedding for each entity $e$ has two vectors $h_e, t_e \in \mathbb{R}^d$, and each relation $r$ has a single vector $v_r \in \mathbb{R}^d$. $h_e$ captures $e$'s behaviour as the head of a relation and $t_e$ captures $e$'s behaviour as the tail of a relation. The similarity function for a triple $(e_i, r, e_j)$ is $\langle h_{e_i}, v_r, t_{e_j} \rangle$. In CP, the two embedding vectors for entities are learned independently of each other: observing $(e_i, r, e_j)$ only updates $h_{e_i}$ and $t_{e_j}$, not $t_{e_i}$ and $h_{e_j}$.

Example 1.

Let likes be a relation indicating whether a person likes a movie and acted be a relation indicating who acted in which movie. Which actors play in a movie is expected to affect who likes the movie. In CP, however, observations about likes only update one of the two embedding vectors of a movie, while observations about acted only update the other. Therefore, what is being learned about movies through observations about acted does not affect the predictions about likes and vice versa.

SimplE takes advantage of the inverses of relations to address the independence of the two vectors for each entity in CP. While inverses of relations have been used for other purposes (see e.g., Lao and Cohen (2010); Lin et al. (2015a); Das et al. (2017)), using them to address the independence of the entity vectors in CP is a novel contribution.

Model Definition: SimplE considers two vectors $h_e, t_e \in \mathbb{R}^d$ as the embedding of each entity $e$ (similar to CP), and two vectors $v_r, v_{r^{-1}} \in \mathbb{R}^d$ for each relation $r$. The similarity function of SimplE for a triple $(e_i, r, e_j)$ is defined as $\frac{1}{2}\left(\langle h_{e_i}, v_r, t_{e_j} \rangle + \langle h_{e_j}, v_{r^{-1}}, t_{e_i} \rangle\right)$, i.e. the average of the CP scores for $(e_i, r, e_j)$ and $(e_j, r^{-1}, e_i)$. In our experiments, we also consider a different variant, which we call SimplE-ignr. During training, for each correct (incorrect) triple $(e_i, r, e_j)$, SimplE-ignr updates the embeddings such that each of the two scores $\langle h_{e_i}, v_r, t_{e_j} \rangle$ and $\langle h_{e_j}, v_{r^{-1}}, t_{e_i} \rangle$ becomes larger (smaller). During testing, SimplE-ignr ignores $v_{r^{-1}}$ and defines the similarity function to be $\langle h_{e_i}, v_r, t_{e_j} \rangle$.
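
The scoring function above is straightforward to implement. The sketch below is a minimal NumPy illustration (not the authors' released code; the embedding values and dimensions are arbitrary) computing the SimplE and SimplE-ignr scores for a single triple:

```python
import numpy as np

d, num_entities, num_relations = 8, 5, 3
rng = np.random.default_rng(1)

# Two vectors per entity (head role and tail role), two per relation (r and its inverse)
H = rng.normal(size=(num_entities, d))       # h_e
T = rng.normal(size=(num_entities, d))       # t_e
R = rng.normal(size=(num_relations, d))      # v_r
R_inv = rng.normal(size=(num_relations, d))  # v_{r^-1}

def simple_score(i, r, j):
    """Average of the two CP scores: for (e_i, r, e_j) and (e_j, r^-1, e_i)."""
    forward = np.sum(H[i] * R[r] * T[j])
    backward = np.sum(H[j] * R_inv[r] * T[i])
    return 0.5 * (forward + backward)

def simple_ignr_score(i, r, j):
    """SimplE-ignr at test time: only the forward CP score is used."""
    return np.sum(H[i] * R[r] * T[j])

print(simple_score(0, 1, 2), simple_ignr_score(0, 1, 2))
```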

Learning SimplE Models:

To learn a SimplE model, we use stochastic gradient descent with mini-batches. In each learning iteration, we take a batch of positive triples from the $\mathcal{KG}$, and for each positive triple in the batch we generate negative triples by corrupting the positive triple. We use Bordes et al. (2013b)'s procedure to corrupt positive triples. The procedure is as follows. For a positive triple $(h, r, t)$, we randomly decide to corrupt the head or the tail. If the head is selected, we replace $h$ in the triple with an entity $h'$ randomly selected from $\mathcal{E} \setminus \{h\}$ and generate the corrupted triple $(h', r, t)$. If the tail is selected, we replace $t$ in the triple with an entity $t'$ randomly selected from $\mathcal{E} \setminus \{t\}$ and generate the corrupted triple $(h, r, t')$. We generate a labelled batch $LB$ by labelling positive triples as $+1$ and negatives as $-1$. Once we have a labelled batch, following Trouillon et al. (2016) we optimize the regularized negative log-likelihood of the batch: $\min_{\theta} \sum_{((h, r, t), l) \in LB} softplus(-l \cdot \phi(h, r, t)) + \lambda \|\theta\|_2^2$, where $\theta$ represents the parameters of the model (the parameters in the embeddings), $l$ represents the label of a triple, $\phi(h, r, t)$ represents the similarity score for triple $(h, r, t)$, $\lambda$ is the regularization hyper-parameter, and $softplus(x) = \log(1 + \exp(x))$. While several previous works (e.g., TransE, TransR, STransE, etc.) consider a margin-based loss function, Trouillon and Nickel (2017) show that the margin-based loss function is more prone to overfitting compared to log-likelihood.
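
A compact sketch of the objective for one labelled batch is given below; this is a minimal NumPy illustration rather than the authors' TensorFlow implementation, and the parameter values, batch construction, and hyper-parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_entities, num_relations, lam = 8, 50, 4, 0.01

# SimplE parameters: two vectors per entity, two per relation
H, T = rng.normal(scale=0.1, size=(2, num_entities, d))
R, Rinv = rng.normal(scale=0.1, size=(2, num_relations, d))

def score(h, r, t):
    return 0.5 * (np.sum(H[h] * R[r] * T[t]) + np.sum(H[t] * Rinv[r] * T[h]))

def corrupt(h, r, t):
    """Bordes et al.-style corruption: randomly replace the head or the tail."""
    if rng.random() < 0.5:
        return int(rng.integers(num_entities)), r, t
    return h, r, int(rng.integers(num_entities))

def batch_loss(positives):
    """Regularized negative log-likelihood of a labelled batch."""
    labelled = []
    for triple in positives:
        labelled.append((triple, 1.0))             # positive triple, label +1
        labelled.append((corrupt(*triple), -1.0))  # corrupted triple, label -1
    softplus = lambda x: np.log1p(np.exp(x))
    nll = sum(softplus(-l * score(*tr)) for tr, l in labelled)
    reg = lam * sum(np.sum(p ** 2) for p in (H, T, R, Rinv))
    return nll + reg

positives = [(int(rng.integers(num_entities)), int(rng.integers(num_relations)),
              int(rng.integers(num_entities))) for _ in range(32)]
print(batch_loss(positives))  # value to be minimized, e.g. with SGD or AdaGrad
```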

5 Theoretical Analyses

In this section, we provide some theoretical analyses of SimplE and other existing approaches.

5.1 Full Expressiveness

The following proposition establishes the full expressivity of SimplE.

Figure 1: The 0/1 entity and relation vectors used in the proof of Proposition 1.
Proposition 1.

For any ground truth over entities $\mathcal{E}$ and relations $\mathcal{R}$ containing $\gamma$ true facts, there exists a SimplE model with embedding vectors of size $\min(|\mathcal{E}| \cdot |\mathcal{R}|, \gamma + 1)$ that represents that ground truth.

Proof.

First, we prove the $|\mathcal{E}| \cdot |\mathcal{R}|$ bound. With embedding vectors of size $|\mathcal{E}| \cdot |\mathcal{R}|$, for each entity $e_i$ we let the $n$-th element of $h_{e_i}$ be $1$ if ($n$ mod $|\mathcal{E}|) = i$ and $0$ otherwise, and for each relation $r_j$ we let the $n$-th element of $v_{r_j}$ be $1$ if ($n$ div $|\mathcal{E}|) = j$ and $0$ otherwise (see Fig 1). Then for each $e_i$ and $r_j$, the element-wise product of $h_{e_i}$ and $v_{r_j}$ is $0$ everywhere except at the $(j \cdot |\mathcal{E}| + i)$-th element. So for each entity $e_k$, we set the $(j \cdot |\mathcal{E}| + i)$-th element of $t_{e_k}$ to be $1$ if $(e_i, r_j, e_k)$ holds and $-1$ otherwise.

Now we prove the $\gamma + 1$ bound by induction on the number of true facts. Let $\gamma$ be zero (base of the induction). We can have embedding vectors of size $1$ for each entity and relation, setting the value for entities to $1$ and for relations to $-1$. Then the score is negative for every pair of entities and every relation. So there exist embedding vectors of size $\gamma + 1 = 1$ that represent this ground truth. Let us assume that for any ground truth with $\gamma = n - 1$ true facts, there exists an assignment of values to embedding vectors of size $n$ that represents that ground truth (assumption of the induction). We must prove that for any ground truth with $\gamma = n$ true facts, there exists an assignment of values to embedding vectors of size $n + 1$ that represents it. Let $(e_i, r, e_j)$ be one of the $n$ true facts. Consider a modified ground truth which is identical to the ground truth with $n$ true facts, except that $(e_i, r, e_j)$ is assigned false. The modified ground truth has $n - 1$ true facts and, based on the assumption of the induction, we can represent it using embedding vectors of size $n$. Let $h_e$, $t_e$ and $v_r$ (for all entities $e$ and relations $r$) be the embedding vectors that represent the modified ground truth. We add an element to the end of all embedding vectors and set it to $0$. This increases the vector sizes to $n + 1$ but does not change any scores. Then we set the last element of $h_{e_i}$ to $1$, of $v_r$ to $1$, and of $t_{e_j}$ to a positive value large enough to make the score of $(e_i, r, e_j)$ positive. This ensures that $(e_i, r, e_j)$ is predicted true by the new vectors and no other score is affected. ∎
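
The first construction in the proof can be checked numerically. The sketch below is an illustrative check on a tiny random ground truth (not part of the paper): it builds the $|\mathcal{E}| \cdot |\mathcal{R}|$-dimensional vectors for the forward (CP) term exactly as described and verifies that the sign of $\langle h_{e_i}, v_{r_j}, t_{e_k} \rangle$ matches the ground truth for every triple:

```python
import numpy as np

num_entities, num_relations = 3, 2
dim = num_entities * num_relations
rng = np.random.default_rng(0)

# Random ground truth: truth[i, j, k] says whether (e_i, r_j, e_k) holds
truth = rng.random((num_entities, num_relations, num_entities)) < 0.5

# h_{e_i}: 1 where (n mod |E|) == i;  v_{r_j}: 1 where (n div |E|) == j
n = np.arange(dim)
H = np.array([(n % num_entities == i).astype(float) for i in range(num_entities)])
V = np.array([(n // num_entities == j).astype(float) for j in range(num_relations)])

# t_{e_k}: element (j*|E| + i) is +1 if (e_i, r_j, e_k) holds, -1 otherwise
T = np.zeros((num_entities, dim))
for k in range(num_entities):
    for j in range(num_relations):
        for i in range(num_entities):
            T[k, j * num_entities + i] = 1.0 if truth[i, j, k] else -1.0

# The sign of the CP score recovers the ground truth exactly
for i in range(num_entities):
    for j in range(num_relations):
        for k in range(num_entities):
            assert (np.sum(H[i] * V[j] * T[k]) > 0) == truth[i, j, k]
```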

DistMult is not fully expressive as it forces relations to be symmetric. It has been shown in Trouillon et al. (2017) that ComplEx is also fully expressive, with a bound on the size of its embeddings. According to the universal approximation theorem Cybenko (1989); Hornik (1991), under certain conditions, neural networks are universal approximators of continuous functions over compact sets. Therefore, we would expect there to be a representation based on neural networks that can approximate any ground truth, but the number of hidden units might have to grow with the number of triples. Wang et al. (2018) prove that TransE is not fully expressive. Proposition 2 proves that not only TransE but also many other translational approaches are not fully expressive. The proposition also identifies severe restrictions on what relations these approaches can represent.

Proposition 2.

FSTransE is not fully expressive and has the following restrictions: 1- if a relation $r$ is reflexive on $\Delta \subseteq \mathcal{E}$, $r$ must also be symmetric on $\Delta$, 2- if $r$ is reflexive on $\Delta$, $r$ must also be transitive on $\Delta$, and 3- if entity $e_1$ has relation $r$ with every entity in $\Delta \subseteq \mathcal{E}$ and entity $e_2$ has relation $r$ with one of the entities in $\Delta$, then $e_2$ must have the relation $r$ with every entity in $\Delta$.

Proof.

For any entity $e$ and relation $r$, let $p_e = P_r v_e$ and $q_e = Q_r v_e$. For a triple $(e_1, r, e_2)$ to hold, we should ideally have $p_{e_1} + v_r = \alpha q_{e_2}$ for some $\alpha$. We assume $e_1$, $e_2$, and $e_3$ are entities in $\Delta$.

For the first restriction, a relation $r$ being reflexive on $\Delta$ implies $p_{e_1} + v_r = \alpha_1 q_{e_1}$ and $p_{e_2} + v_r = \alpha_2 q_{e_2}$. Suppose $(e_1, r, e_2)$ holds as well. Then we know $p_{e_1} + v_r = \alpha_3 q_{e_2}$. Therefore, $p_{e_2} + v_r = \alpha_2 q_{e_2} = \frac{\alpha_2}{\alpha_3}(p_{e_1} + v_r) = \frac{\alpha_1 \alpha_2}{\alpha_3} q_{e_1} = \alpha_4 q_{e_1}$, where $\alpha_4 = \frac{\alpha_1 \alpha_2}{\alpha_3}$. Therefore, $(e_2, r, e_1)$ must hold.

For the second restriction, a relation $r$ being reflexive on $\Delta$ implies $p_{e_1} + v_r = \alpha_1 q_{e_1}$, $p_{e_2} + v_r = \alpha_2 q_{e_2}$, and $p_{e_3} + v_r = \alpha_3 q_{e_3}$. Suppose $(e_1, r, e_2)$ and $(e_2, r, e_3)$ hold. Then we know $p_{e_1} + v_r = \alpha_4 q_{e_2}$ and $p_{e_2} + v_r = \alpha_5 q_{e_3}$. We can conclude $p_{e_1} + v_r = \alpha_4 q_{e_2} = \frac{\alpha_4}{\alpha_2}(p_{e_2} + v_r) = \frac{\alpha_4 \alpha_5}{\alpha_2} q_{e_3} = \alpha_6 q_{e_3}$, where $\alpha_6 = \frac{\alpha_4 \alpha_5}{\alpha_2}$. The above equality proves that $(e_1, r, e_3)$ must hold.

For the third restriction, let $e_1$ have relation $r$ with every entity in $\Delta$ and let $e_2$ have relation $r$ with $e_3 \in \Delta$. We know $p_{e_1} + v_r = \alpha_1 q_{e_3}$, $p_{e_2} + v_r = \alpha_2 q_{e_3}$, and $p_{e_1} + v_r = \alpha_i q_{e_i}$ for every $e_i \in \Delta$. We can conclude $p_{e_2} + v_r = \alpha_2 q_{e_3} = \frac{\alpha_2}{\alpha_1}(p_{e_1} + v_r) = \frac{\alpha_2 \alpha_i}{\alpha_1} q_{e_i}$. Therefore, $(e_2, r, e_i)$ must hold for every $e_i \in \Delta$. ∎

Corollary 1.

Other variants of translational approaches such as TransE, FTransE, STransE, TransH Wang et al. (2014), and TransR Lin et al. (2015b) also have the restrictions mentioned in Proposition 2.

5.2 Incorporating Background Knowledge into the Embeddings

In SimplE, each element of the embedding vector of an entity can be considered a feature of the entity, and the corresponding element of a relation can be considered a measure of how important that feature is to the relation. Such interpretability allows the embeddings learned through SimplE for an entity (or relation) to be potentially transferred to other domains. It also allows for incorporating observed features of entities into the embeddings by fixing one of the elements of the embedding vector to the observed value. Nickel et al. (2014) show that incorporating such features helps reduce the size of the embeddings.

Recently, incorporating background knowledge into tensor factorization approaches has been the focus of several studies. Towards this goal, many existing approaches rely on post-processing steps or add additional terms to the loss function to penalize predictions that violate the background knowledge Rocktäschel et al. (2014); Wang et al. (2015); Wei et al. (2015); Guo et al. (2016); Ding et al. (2018). Minervini et al. (2017) show how background knowledge in terms of equivalence and inversion can be incorporated into several tensor factorization models through parameter tying (although their incorporation of inversion into DistMult is not correct as it has side effects). Incorporating background knowledge by parameter tying has the advantage of guaranteeing that the predictions follow the background knowledge for all embeddings. In this section, we show how three types of background knowledge, namely symmetry, anti-symmetry, and inversion, can be incorporated into the embeddings of SimplE by tying the parameters (we ignore the equivalence between two relations as it is trivial). Note that such background knowledge can be exerted on some relations selectively and not on others; this is different from, e.g., DistMult, which enforces symmetry on all relations.

Proposition 3.

Let $r$ be a relation such that for any two entities $e_i$ and $e_j$ we have $(e_i, r, e_j) \in \zeta \Leftrightarrow (e_j, r, e_i) \in \zeta$ (i.e. $r$ is symmetric). This property of $r$ can be encoded into SimplE by tying the parameters $v_{r^{-1}}$ to $v_r$.

Proof.

If $(e_i, r, e_j) \in \zeta$, then a SimplE model makes $\langle h_{e_i}, v_r, t_{e_j} \rangle$ and $\langle h_{e_j}, v_{r^{-1}}, t_{e_i} \rangle$ positive. By tying the parameters $v_{r^{-1}}$ to $v_r$, we can conclude that $\langle h_{e_j}, v_r, t_{e_i} \rangle$ and $\langle h_{e_i}, v_{r^{-1}}, t_{e_j} \rangle$ also become positive. Therefore, the SimplE model predicts $(e_j, r, e_i) \in \zeta$. ∎

Proposition 4.

Let $r$ be a relation such that for any two entities $e_i$ and $e_j$ we have $(e_i, r, e_j) \in \zeta \Rightarrow (e_j, r, e_i) \in \zeta'$ (i.e. $r$ is anti-symmetric). This property of $r$ can be encoded into SimplE by tying the parameters $v_{r^{-1}}$ to the negative of $v_r$.

Proof.

If $(e_i, r, e_j) \in \zeta$, then a SimplE model makes $\langle h_{e_i}, v_r, t_{e_j} \rangle$ and $\langle h_{e_j}, v_{r^{-1}}, t_{e_i} \rangle$ positive. By tying the parameters $v_{r^{-1}}$ to the negative of $v_r$, we can conclude that $\langle h_{e_j}, v_r, t_{e_i} \rangle$ and $\langle h_{e_i}, v_{r^{-1}}, t_{e_j} \rangle$ become negative. Therefore, the SimplE model predicts $(e_j, r, e_i) \in \zeta'$. ∎

Proposition 5.

Let $r_1$ and $r_2$ be two relations such that for any two entities $e_i$ and $e_j$ we have $(e_i, r_1, e_j) \in \zeta \Leftrightarrow (e_j, r_2, e_i) \in \zeta$ (i.e. $r_2$ is the inverse of $r_1$). This property of $r_1$ and $r_2$ can be encoded into SimplE by tying the parameters $v_{r_2^{-1}}$ to $v_{r_1}$ and $v_{r_2}$ to $v_{r_1^{-1}}$.

Proof.

If $(e_i, r_1, e_j) \in \zeta$, then a SimplE model makes $\langle h_{e_i}, v_{r_1}, t_{e_j} \rangle$ and $\langle h_{e_j}, v_{r_1^{-1}}, t_{e_i} \rangle$ positive. By tying the parameters $v_{r_2^{-1}}$ to $v_{r_1}$ and $v_{r_2}$ to $v_{r_1^{-1}}$, we can conclude that $\langle h_{e_j}, v_{r_2}, t_{e_i} \rangle$ and $\langle h_{e_i}, v_{r_2^{-1}}, t_{e_j} \rangle$ also become positive. Therefore, the SimplE model predicts $(e_j, r_2, e_i) \in \zeta$. ∎
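
In an implementation, these three propositions translate into simple parameter-sharing rules applied before training. The sketch below is illustrative only (the relation names and data structures are made up for the example); it enforces symmetry, anti-symmetry, and inversion by construction and checks each property on the SimplE score:

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)

def simple_score(h_head, v_r, t_tail, h_tail, v_r_inv, t_head):
    return 0.5 * (np.sum(h_head * v_r * t_tail) + np.sum(h_tail * v_r_inv * t_head))

# Free parameters for three hypothetical relations
v_marriedTo = rng.normal(size=d)   # to be made symmetric
v_parentOf = rng.normal(size=d)    # to be made anti-symmetric
v_hasPart = rng.normal(size=d)     # hasPart / partOf form an inverse pair
v_hasPart_inv = rng.normal(size=d)

v_marriedTo_inv = v_marriedTo                      # Proposition 3: v_{r^-1} = v_r
v_parentOf_inv = -v_parentOf                       # Proposition 4: v_{r^-1} = -v_r
v_partOf, v_partOf_inv = v_hasPart_inv, v_hasPart  # Proposition 5: tie the inverse pair

h_a, t_a, h_b, t_b = rng.normal(size=(4, d))

# Symmetry: swapping head and tail leaves the score unchanged
s_ab = simple_score(h_a, v_marriedTo, t_b, h_b, v_marriedTo_inv, t_a)
s_ba = simple_score(h_b, v_marriedTo, t_a, h_a, v_marriedTo_inv, t_b)
assert np.isclose(s_ab, s_ba)

# Anti-symmetry: swapping head and tail negates the score
p_ab = simple_score(h_a, v_parentOf, t_b, h_b, v_parentOf_inv, t_a)
p_ba = simple_score(h_b, v_parentOf, t_a, h_a, v_parentOf_inv, t_b)
assert np.isclose(p_ab, -p_ba)

# Inversion: (a, hasPart, b) scores the same as (b, partOf, a)
i_ab = simple_score(h_a, v_hasPart, t_b, h_b, v_hasPart_inv, t_a)
i_ba = simple_score(h_b, v_partOf, t_a, h_a, v_partOf_inv, t_b)
assert np.isclose(i_ab, i_ba)
```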

5.3 Time Complexity and Parameter Growth

As described in Bordes et al. (2013a), to scale to the size of the current KGs and keep up with their growth, a relational model must have linear time and memory complexity. Furthermore, one of the important challenges in designing tensor factorization models is the trade-off between expressivity and model complexity. Models with many parameters usually overfit and give poor performance. While the time complexity of TransE is $O(d)$, where $d$ is the size of the embedding vectors, adding the projections as in STransE (through the two relation matrices) increases the time complexity to $O(d^2)$. Besides time complexity, the number of parameters to be learned from data grows quadratically with $d$. A quadratic time complexity and parameter growth may cause two issues: 1- scalability problems, 2- overfitting. The same issues exist for models such as RESCAL and NTNs that have quadratic or higher time complexities and parameter growth. DistMult and ComplEx have linear time complexities and the number of their parameters grows linearly with $d$.

The time complexity of both SimplE-ignr and SimplE is $O(d)$, i.e. linear in the size of the vector embeddings. SimplE-ignr requires one multiplication between three vectors for each triple. This number is two for SimplE and four for ComplEx. Thus, with the same number of parameters, SimplE-ignr and SimplE reduce the computations by a factor of 4 and 2 respectively compared to ComplEx.

5.4 Family of Bilinear Models

Bilinear models correspond to the family of models where the embedding for each entity $e$ is a vector $v_e \in \mathbb{R}^d$, the embedding for each relation $r$ is a matrix $M_r \in \mathbb{R}^{d \times d}$ (with certain restrictions), and the similarity function for a triple $(h, r, t)$ is defined as $v_h^{\top} M_r v_t$. These models have shown remarkable performance for link prediction in knowledge graphs Nickel et al. (2016a). DistMult, ComplEx, and RESCAL are known to belong to the family of bilinear models. We show that SimplE (and CP) also belong to this family.

DistMult can be considered a bilinear model which restricts the $M_r$ matrices to be diagonal as in Fig. 2(a). For ComplEx, if we consider the embedding of each entity $e$ to be the single vector $[re(v_e); im(v_e)]$, then it can be considered a bilinear model with its $M_r$ matrices constrained according to Fig. 2(b). RESCAL can be considered a bilinear model which imposes no constraints on the $M_r$ matrices. Considering the embedding of each entity $e$ to be the single vector $[h_e; t_e]$, CP can be viewed as a bilinear model with its $M_r$ matrices constrained as in Fig 2(c). For a triple $(e_i, r, e_j)$, multiplying $[h_{e_i}; t_{e_i}]^{\top}$ by $M_r$ results in a vector whose first half is zero and whose second half corresponds to an element-wise product of $h_{e_i}$ with the parameters in $M_r$. Multiplying this vector by $[h_{e_j}; t_{e_j}]$ corresponds to ignoring $h_{e_j}$ (since the first half of the vector is zeros) and taking the dot-product of the second half with $t_{e_j}$. SimplE can be viewed as a bilinear model similar to CP except that the $M_r$ matrices are constrained as in Fig 2(d). The extra parameters added to the $M_r$ matrices compared to CP correspond to the parameters in the inverses of the relations.

The constraint over the $M_r$ matrices in SimplE is very similar to the constraint in DistMult. In both SimplE and DistMult, $v_h^{\top} M_r$ can be considered an element-wise product of parameters, except that the $M_r$s in SimplE swap the first and second halves of the resulting vector. Compared to ComplEx, SimplE removes the parameters on the main diagonal of the $M_r$s. Note that several other restrictions on the $M_r$ matrices are equivalent to SimplE, e.g., restricting the matrices to be zero everywhere except on the counterdiagonal. Viewing SimplE as a single-vector-per-entity model makes it easily integrable (or compatible) with other embedding models (in knowledge graph completion, computer vision and natural language processing) such as Santoro et al. (2017); Zhang et al. (2017); Schlichtkrull et al. (2018).

Figure 2: The constraints over the $M_r$ matrices for the bilinear models (a) DistMult, (b) ComplEx, (c) CP, and (d) SimplE. The lines represent where the parameters are; all other elements of the matrices are constrained to be zero. In ComplEx, the parameters represented by the dashed line are tied to the parameters represented by the solid line, and the parameters represented by the dotted line are tied to the negatives of the parameters represented by the dotted-and-dashed line.
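
The block structure in Fig. 2(d) can be written out explicitly. The sketch below is an illustrative construction with random values (the $\frac{1}{2}$ averaging factor is folded into the blocks); it builds the $2d \times 2d$ matrix $M_r$ for SimplE and checks that $[h_{e_i}; t_{e_i}]^{\top} M_r [h_{e_j}; t_{e_j}]$ equals the SimplE score:

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)
h_i, t_i, h_j, t_j = rng.normal(size=(4, d))
v_r, v_r_inv = rng.normal(size=(2, d))

# SimplE as a bilinear model: zero diagonal blocks, diagonal off-diagonal blocks
M_r = np.block([
    [np.zeros((d, d)),        0.5 * np.diag(v_r)],
    [0.5 * np.diag(v_r_inv),  np.zeros((d, d))],
])

e_i = np.concatenate([h_i, t_i])  # single-vector-per-entity view
e_j = np.concatenate([h_j, t_j])

bilinear_score = e_i @ M_r @ e_j
simple_score = 0.5 * (np.sum(h_i * v_r * t_j) + np.sum(h_j * v_r_inv * t_i))
assert np.isclose(bilinear_score, simple_score)
```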

5.5 Redundancy in ComplEx

As argued earlier, with the same number of parameters, the number of computations in ComplEx is 4x and 2x more than SimplE-ignr and SimplE respectively. Here we show that a portion of the computations performed by ComplEx to make predictions is redundant. Consider a ComplEx model with embedding vectors of size 1 (for ease of exposition). Suppose the embedding vectors for $h$, $r$, and $t$ are $a_1 + b_1 i$, $a_2 + b_2 i$, and $a_3 + b_3 i$ respectively. Then the probability of $(h, r, t)$ being correct according to ComplEx is proportional to the sum of the following four terms: $a_1 a_2 a_3$, $a_1 b_2 b_3$, $b_1 a_2 b_3$, and $-b_1 b_2 a_3$. It can be verified that for any assignment of (non-zero) values to the $a$s and $b$s, at least one of the above terms is negative. This means that for a correct triple, ComplEx uses three terms to overestimate its score and then uses a term to cancel the overestimation.

The following example shows how this redundancy in ComplEx may affect its interpretability:

Example 2.

Consider a ComplEx model with embeddings of size 1. Consider entities $e_1$, $e_2$ and $e_3$ and a relation $r$, where the embeddings of $e_2$ and $e_3$ differ only in their imaginary parts. According to ComplEx, the score for the triple $(e_1, r, e_2)$ can be positive, suggesting $e_1$ probably has relation $r$ with $e_2$, while the score for the triple $(e_1, r, e_3)$ is negative, suggesting $e_1$ probably does not have relation $r$ with $e_3$. Since the only difference between $e_2$ and $e_3$ is the value of the imaginary part, it is difficult to associate a meaning to these numbers.
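
A concrete instance of this behaviour can be checked directly; the embedding values below are chosen by us for illustration and are not taken from the paper:

```python
import numpy as np

def complex_score(v_h, v_r, v_t):
    """ComplEx score with embeddings of size 1: Re(v_h * v_r * conj(v_t))."""
    return np.real(v_h * v_r * np.conj(v_t))

e1, r = 1 + 1j, 2 + 1j
e2, e3 = 1 + 1j, 1 - 1j   # identical real parts, different imaginary parts

print(complex_score(e1, r, e2))  #  4.0 -> (e1, r, e2) predicted true
print(complex_score(e1, r, e3))  # -2.0 -> (e1, r, e3) predicted false
```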

6 Experiments and Results

Datasets: We conducted experiments on two standard benchmarks: WN18, a subset of Wordnet Miller (1995), and FB15k, a subset of Freebase Bollacker et al. (2008). We used the same train/valid/test sets as in Bordes et al. (2013b). WN18 contains 40,943 entities, 18 relations, and 141,442 train, 5,000 validation, and 5,000 test triples. FB15k contains 14,951 entities, 1,345 relations, and 483,142 train, 50,000 validation, and 59,071 test triples.

Baselines: We compare SimplE with several existing tensor factorization approaches. Our baselines include canonical Polyadic (CP) decomposition, TransE, TransR, DistMult, NTN, STransE, ER-MLP, and ComplEx. Given that we use the same data splits and objective function as ComplEx, we report the results of CP, TransE, DistMult, and ComplEx from Trouillon et al. (2016). We report the results of TransR and NTN from Nguyen (2017), and ER-MLP from Nickel et al. (2016b) for further comparison.

Evaluation Metrics: To measure and compare the performance of different models, for each test triple $(h, r, t)$ we compute the score of triples $(h, r, x)$ for all $x \in \mathcal{E}$ and calculate the rank of the triple having $t$, and we compute the score of triples $(x, r, t)$ for all $x \in \mathcal{E}$ and calculate the rank of the triple having $h$. Then we compute the mean reciprocal rank (MRR) of these rankings as the mean of the inverses of the rankings: $MRR = \frac{1}{2 \cdot |test|} \sum_{(h, r, t) \in test} \left( \frac{1}{rank_h} + \frac{1}{rank_t} \right)$, where $test$ represents the set of test triples. MRR is a more robust measure than mean rank, since a single bad ranking can largely influence mean rank.

Bordes et al. (2013b) identified an issue with the above procedure for calculating the MRR (hereafter referred to as raw MRR). For a test triple $(h, r, t)$, since there can be several entities $x \in \mathcal{E}$ for which $(h, r, x)$ holds, measuring the quality of a model based on its ranking for $(h, r, t)$ may be flawed. That is because two models may both rank the test triple second, when the first model ranks a correct triple (e.g., from the train or validation set) first and the second model ranks an incorrect triple first. Both models will get the same score for this test triple when the first model should get a higher score. To address this issue, Bordes et al. (2013b) proposed a modification to raw MRR. For each test triple $(h, r, t)$, instead of finding the rank of this triple among triples $(h, r, x)$ for all $x \in \mathcal{E}$ (or $(x, r, t)$ for all $x \in \mathcal{E}$), they proposed to calculate the rank only among triples $(h, r, x)$ such that $x = t$ or $(h, r, x)$ does not appear in the train, validation, or test set. Following Bordes et al. (2013b), we call this measure filtered MRR. We also report hit@k measures. The hit@k for a model is computed as the percentage of test triples whose ranking (computed as described earlier) is less than or equal to $k$.
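
For completeness, the following sketch (an illustrative re-implementation, not the authors' evaluation code) computes the raw or filtered rank of the tail for one test triple given an arbitrary scoring function, together with MRR and hit@k over a list of ranks:

```python
import numpy as np

def tail_rank(score_fn, h, r, t, entities, known_triples, filtered=True):
    """Rank of the true tail t among all candidate tails for (h, r, ?)."""
    target = score_fn(h, r, t)
    rank = 1
    for x in entities:
        if x == t:
            continue
        if filtered and (h, r, x) in known_triples:  # skip other known-true triples
            continue
        if score_fn(h, r, x) > target:
            rank += 1
    return rank

def mrr(ranks):
    return float(np.mean([1.0 / rank for rank in ranks]))

def hit_at_k(ranks, k):
    return float(np.mean([1.0 if rank <= k else 0.0 for rank in ranks]))

# Toy usage with a made-up scoring function over 5 entities
entities = list(range(5))
known = {(0, 0, 1), (0, 0, 3)}
score_fn = lambda h, r, t: -abs(h + r - t)  # arbitrary illustrative scorer
r1 = tail_rank(score_fn, 0, 0, 1, entities, known)
print(r1, mrr([r1]), hit_at_k([r1], 3))
```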

Implementation:

We implemented SimplE in TensorFlow Abadi et al. (2016). We tuned our hyper-parameters over the validation set, using the same search grid on the embedding size and $\lambda$ as Trouillon et al. (2016) to make our results directly comparable to theirs. We fixed the maximum number of iterations and the batch size, set a separate learning rate for WN18 and for FB15k, and used AdaGrad to update the learning rate after each batch. Following Trouillon et al. (2016), we generated one negative example per positive example for WN18 and several negative examples per positive example for FB15k. We computed the filtered MRR of our model over the validation set periodically during training and selected the iteration that resulted in the best validation filtered MRR. The best embedding size and $\lambda$ values were selected in this way for SimplE-ignr and SimplE on each dataset.

                 WN18                            FB15k
                 MRR           Hit@              MRR           Hit@
Model            Filter   Raw   1    3    10     Filter   Raw   1    3    10
CP
TransE
TransR
DistMult
NTN
STransE
ER-MLP
ComplEx
SimplE-ignr
SimplE
Table 1: Filtered and raw MRR and Hit@1, Hit@3, and Hit@10 on WN18 and FB15k. Best results are in bold.

6.1 Entity Prediction Results

Table 1 shows the results of our experiments. It can be seen that both SimplE-ignr and SimplE perform well compared to the existing baselines on both datasets. On WN18, SimplE-ignr and SimplE perform as well as ComplEx, a state-of-the-art tensor factorization model. On FB15k, SimplE outperforms the existing baselines and gives state-of-the-art results among tensor factorization approaches. SimplE (and SimplE-ignr) work especially well on this dataset in terms of filtered MRR and hit@1, so SimplE tends to do well at having its first prediction be correct.

The table shows that models with many parameters (e.g., NTN and STransE) do not perform well on these datasets, as they probably overfit. Translational approaches generally have inferior performance compared to other approaches, partly due to the representation restrictions mentioned in Proposition 2. As an example, for the friendship relation in FB15k, if an entity is friends with 20 other entities and another entity is friends with only one of those 20, then according to Proposition 2 translational approaches force the latter entity to be friends with the other 19 entities as well (the same goes for, e.g., the netflix genre relation in FB15k and the has part relation in WN18). The table also shows that bilinear approaches tend to have better performance compared to translational and deep learning approaches. Even DistMult, the simplest bilinear approach, outperforms many translational and deep learning approaches despite not being fully expressive. We believe the simplicity of the embeddings and the scoring function is a key property for the success of SimplE.

6.2 Incorporating Background Knowledge

When background knowledge is available, we might expect a knowledge graph not to include information that is redundant because it is implied by the background knowledge; methods that do not use the background knowledge can then never learn it. In Section 5.2, we showed how background knowledge that can be formulated in terms of three types of rules can be incorporated into SimplE embeddings. To test this empirically, we conducted an experiment on WN18 in which we incorporated several such rules into the embeddings as outlined in Propositions 3, 4, and 5. The rules can be found in Table 2. As can be seen in Table 2, most of the rules are of the form $(e_1, r_1, e_2) \in \zeta \Leftrightarrow (e_2, r_2, e_1) \in \zeta$. For (possibly identical) relations $r_1$ and $r_2$ participating in such a rule, if both $(e_1, r_1, e_2)$ and $(e_2, r_2, e_1)$ are in the training set, one of them is redundant because it can be inferred from the other. We removed redundant triples from the training set by randomly removing one of the two triples in the training set that could be inferred from the other one based on the background rules. Removing redundant triples substantially reduced the number of triples in the training set. Note that this experiment provides an upper bound on how much background knowledge can improve the performance of a SimplE model.

We trained SimplE-ignr and SimplE (with parameters tied according to the rules) on this new training set with the best hyper-parameters found in the previous experiment. We refer to these two models as SimplE-ignr-bk and SimplE-bk. We also trained SimplE-ignr and SimplE models on this dataset without incorporating the rules into the embeddings. As a sanity check, we also trained a ComplEx model over this new dataset. The filtered MRRs of SimplE-ignr-bk and SimplE-bk were substantially higher than those of SimplE-ignr, SimplE, and ComplEx trained without background knowledge, and the same held for the hit@1, hit@3, and hit@10 measures. The obtained results validate that background knowledge can be effectively incorporated into SimplE embeddings to improve its performance.

Rule Number Rule
1
2
3
4
5
6
7
8
Table 2: Background Knowledge Used in Section 6.2.

7 Conclusion

We proposed a simple, interpretable, and fully expressive bilinear model for knowledge graph completion. We showed that our model, called SimplE, performs very well empirically and has several interesting properties. For instance, three types of background knowledge can be incorporated into SimplE by tying the embeddings. In the future, SimplE could be improved or may help improve relational learning in several ways, including: 1- building ensembles of SimplE models as Kadlec et al. (2017) do for DistMult, 2- adding SimplE to the relation-level ensembles of Wang et al. (2018), 3- explicitly modelling the analogical structures of relations as in Liu et al. (2018), 4- using Dettmers et al. (2018)'s 1-N scoring approach to generate many negative triples for a positive triple (Trouillon et al. (2016) show that generating more negative triples improves accuracy), 5- combining SimplE with logic-based approaches (e.g., with Kazemi and Poole (2018)) to improve property prediction, 6- combining SimplE with (or using SimplE as a sub-component in) techniques from other categories of relational learning as Rocktäschel and Riedel (2017) do with ComplEx, and 7- incorporating other types of background knowledge (e.g., entailment) into SimplE embeddings.

References

  • Abadi et al. (2016) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In ACM SIGMOD, pages 1247–1250. ACM, 2008.
  • Bordes et al. (2013a) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Irreflexive and hierarchical relations as translations. arXiv preprint arXiv:1304.7158, 2013.
  • Bordes et al. (2013b) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pages 2787–2795, 2013.
  • Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2:183–192, 1989.
  • Das et al. (2017) Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. NIPS Workshop on AKBC, 2017.
  • De Raedt et al. (2016) Luc De Raedt, Kristian Kersting, Sriraam Natarajan, and David Poole. Statistical relational artificial intelligence: Logic, probability, and computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(2):1–189, 2016.
  • Dettmers et al. (2018) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In AAAI, 2018.
  • Ding et al. (2018) Boyang Ding, Quan Wang, Bin Wang, and Li Guo. Improving knowledge graph embedding using simple constraints. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018.
  • Dong et al. (2014) Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In ACM SIGKDD, pages 601–610. ACM, 2014.
  • Feng et al. (2016) Jun Feng, Minlie Huang, Mingdong Wang, Mantong Zhou, Yu Hao, and Xiaoyan Zhu. Knowledge graph embedding by flexible translation. In KR, pages 557–560, 2016.
  • Getoor and Taskar (2007) Lise Getoor and Ben Taskar. Introduction to statistical relational learning. MIT press, 2007.
  • Guo et al. (2016) Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. Jointly embedding knowledge graphs and logical rules. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 192–202, 2016.
  • Hayashi and Shimbo (2017) Katsuhiko Hayashi and Masashi Shimbo. On the equivalence of holographic and complex embeddings for link prediction. arXiv preprint arXiv:1702.05563, 2017.
  • Hitchcock (1927) Frank L Hitchcock. The expression of a tensor or a polyadic as a sum of products. Studies in Applied Mathematics, 6(1-4):164–189, 1927.
  • Hornik (1991) Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
  • Ji et al. (2015) Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. Knowledge graph embedding via dynamic mapping matrix. In ACL (1), pages 687–696, 2015.
  • Kadlec et al. (2017) Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. Knowledge base completion: Baselines strike back. arXiv preprint arXiv:1705.10744, 2017.
  • Kazemi and Poole (2018) Seyed Mehran Kazemi and David Poole. Relnn: A deep neural model for relational learning. In AAAI, 2018.
  • Lao and Cohen (2010) Ni Lao and William W Cohen. Relational retrieval using a combination of path-constrained random walks. Machine learning, 81(1):53–67, 2010.
  • Lin et al. (2015a) Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. Modeling relation paths for representation learning of knowledge bases. EMNLP, 2015.
  • Lin et al. (2015b) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In AAAI, pages 2181–2187, 2015.
  • Liu et al. (2018) Hanxiao Liu, Yuexin Wu, and Yiming Yang. Analogical inference for multi-relational embeddings. AAAI, 2018.
  • Miller (1995) George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41, 1995.
  • Minervini et al. (2017) Pasquale Minervini, Luca Costabello, Emir Muñoz, Vít Nováček, and Pierre-Yves Vandenbussche. Regularizing knowledge graph embeddings via equivalence and inversion axioms. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 668–683. Springer, 2017.
  • Nguyen et al. (2016) Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. Stranse: a novel embedding model of entities and relationships in knowledge bases. In NAACL-HLT, 2016.
  • Nguyen (2017) Dat Quoc Nguyen. An overview of embedding models of entities and relationships for knowledge base completion. arXiv preprint arXiv:1703.08098, 2017.
  • Nickel et al. (2011) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In ICML, volume 11, pages 809–816, 2011.
  • Nickel et al. (2012) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing yago: scalable machine learning for linked data. In World Wide Web, pages 271–280. ACM, 2012.
  • Nickel et al. (2014) Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Reducing the rank in relational factorization models by including observable patterns. In NIPS, pages 1179–1187, 2014.
  • Nickel et al. (2016a) Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
  • Nickel et al. (2016b) Maximilian Nickel, Lorenzo Rosasco, Tomaso A Poggio, et al. Holographic embeddings of knowledge graphs. In AAAI, pages 1955–1961, 2016.
  • Rocktäschel and Riedel (2017) Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In NIPS, pages 3791–3803, 2017.
  • Rocktäschel et al. (2014) Tim Rocktäschel, Matko Bošnjak, Sameer Singh, and Sebastian Riedel. Low-dimensional embeddings of logic. In Proceedings of the ACL 2014 Workshop on Semantic Parsing, pages 45–49, 2014.
  • Santoro et al. (2017) Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. A simple neural network module for relational reasoning. In NIPS, 2017.
  • Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer, 2018.
  • Socher et al. (2013) Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In NIPS, 2013.
  • Trouillon and Nickel (2017) Théo Trouillon and Maximilian Nickel. Complex and holographic embeddings of knowledge graphs: a comparison. arXiv preprint arXiv:1707.01475, 2017.
  • Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In ICML, pages 2071–2080, 2016.
  • Trouillon et al. (2017) Théo Trouillon, Christopher R Dance, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. arXiv preprint arXiv:1702.06879, 2017.
  • Wang et al. (2014) Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In AAAI, pages 1112–1119, 2014.
  • Wang et al. (2015) Quan Wang, Bin Wang, Li Guo, et al. Knowledge base completion using embeddings and rules. In International Joint Conference on Artificial Intelligence, pages 1859–1866, 2015.
  • Wang et al. (2017) Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743, 2017.
  • Wang et al. (2018) Yanjie Wang, Rainer Gemulla, and Hui Li. On multi-relational link prediction with bilinear models. AAAI, 2018.
  • Wei et al. (2015) Zhuoyu Wei, Jun Zhao, Kang Liu, Zhenyu Qi, Zhengya Sun, and Guanhua Tian. Large-scale knowledge base completion: Inferring via grounding network sampling over selected instances. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1331–1340. ACM, 2015.
  • Yang et al. (2015) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. ICLR, 2015.
  • Zhang et al. (2017) Hanwang Zhang, Zawlin Kyaw, Shih-Fu Chang, and Tat-Seng Chua. Visual translation embedding network for visual relation detection. In CVPR, volume 1, page 5, 2017.