
Canonical Tensor Decomposition for Knowledge Base Completion

by Timothée Lacroix, et al.

The problem of Knowledge Base Completion can be framed as a 3rd-order binary tensor completion problem. In this light, the Canonical Tensor Decomposition (CP) (Hitchcock, 1927) seems like a natural solution; however, current implementations of CP on standard Knowledge Base Completion benchmarks are lagging behind their competitors. In this work, we attempt to understand the limits of CP for knowledge base completion. First, we motivate and test a novel regularizer, based on tensor nuclear p-norms. Then, we present a reformulation of the problem that makes it invariant to arbitrary choices in the inclusion of predicates or their reciprocals in the dataset. These two methods combined allow us to beat the current state of the art on several datasets with a CP decomposition, and obtain even better results using the more advanced ComplEx model.




1 Introduction

In knowledge base completion, the learner is given triples (subject, predicate, object) of facts about the world, and has to infer new triples that are likely but not yet known to be true. This problem has attracted a lot of attention (Nickel et al., 2016a; Nguyen, 2017) both as an example application of large-scale tensor factorization, and as a benchmark of learning representations of relational data.

The standard completion task is link prediction, which consists in answering queries (subject, predicate, ?) or (?, predicate, object). In that context, the canonical decomposition of tensors (also called CANDECOMP/PARAFAC or CP) (Hitchcock, 1927) is known to perform poorly compared to more specialized methods. For instance, DistMult (Yang et al., 2014), a particular case of CP which shares the factors for the subject and object modes, was recently shown to have state-of-the-art results (Kadlec et al., 2017). This result is surprising because DistMult learns a tensor that is symmetric in the subject and object modes, while the datasets contain mostly non-symmetric predicates.

The goal of this paper is to study whether and how CP can perform as well as its competitors. To that end, we evaluate three possibilities.

First, as Kadlec et al. (2017) showed that performances for these tasks are sensitive to the loss function and optimization parameters, we re-evaluate CP with a broader parameter search and a multiclass log-loss.

Second, since the best performing approaches are less expressive than CP, we evaluate whether regularization helps. On this subject, we show that the standard regularization used in knowledge base completion does not correspond to regularization with a tensor norm. We then propose to use tensor nuclear p-norms (Friedland & Lim, 2018), with the goal of designing more principled regularizers.

Third, we propose a different formulation of the objective, in which we model predicates and their inverses separately: for each predicate P, we create an inverse predicate P⁻¹ and add a triple (o, P⁻¹, s) for each training triple (s, P, o). At test time, queries of the form (?, P, o) are answered as (o, P⁻¹, ?). Similar formulations were previously used by Shen et al. (2016) and Joulin et al. (2017), but for different models for which there was no clear alternative, so the impact of this reformulation has never been evaluated.

To assess whether the results we obtain are specific to CP, we also carry out the same experiments with a state-of-the-art model, ComplEx (Trouillon et al., 2016). ComplEx has the same expressivity as CP in the sense that it can represent any tensor, but it implements a specific form of parameter sharing. We perform all our experiments on common benchmark datasets of link prediction in knowledge bases.

Our results first confirm that within a reasonable time budget, the performances of both CP and ComplEx are highly dependent on optimization parameters. With systematic parameter searches, we obtain better results for ComplEx than what was previously reported, confirming its status as a state-of-the-art model on all datasets. For CP, the results are still well below those of its competitors.

Learning and predicting with the inverse predicates, however, changes the picture entirely. First, with both CP and ComplEx, we obtain significant gains in performance on all the datasets. More precisely, we obtain state-of-the-art results with CP, matching those of ComplEx. For instance, on the benchmark dataset FB15K (Bordes et al., 2013), the mean reciprocal ranks of vanilla CP and vanilla ComplEx differ substantially, but grow to the same value for both approaches when modeling the inverse predicates.

Finally, the new regularizer we propose, based on the nuclear p-norm, does not dramatically help CP, which leads us to believe that a careful choice of regularization is not crucial for these CP models. Yet, for both CP and ComplEx with inverse predicates, it yields small but significant improvements on the more difficult datasets.

2 Tensor Factorization of Knowledge Bases

Figure 1:

(a) On the left, the link between the score of a triple (i, j, k) and the tensor estimated via CP. (b) In the middle, the two types of fiber losses that we will consider. (c) On the right, our semantically invariant reformulation: the first-mode fibers become third-mode fibers of the reciprocal half of the tensor.

We describe in this section the formal framework we consider for knowledge base completion and more generally link prediction in relational data, the learning criteria, as well as the approaches that we will discuss.

2.1 Link Prediction in Relational Data

We consider relational data that comes in the form of triples (subject, predicate, object), where the subject and the object are from the same set of entities. In knowledge bases, these triples represent facts about entities of the world, such as the fact that a given city is the capital of a given country. A training set contains triples of indices representing facts that are known to hold. The validation and test sets contain queries of the form (i, j, ?) and (?, j, k), created from triples that are known to hold but held out from the training set. To give orders of magnitude, the largest datasets we experiment on, FB15K and YAGO3-10, contain from tens of thousands to over a hundred thousand entities, and from a few dozen to over a thousand predicates.

2.2 Tensor Decomposition for Link Prediction

Relational data can be represented as a {0,1}-valued third-order tensor X ∈ {0,1}^{N×P×N}, where N is the total number of entities and P the number of predicates, with X_{i,j,k} = 1 if the relation (i, j, k) is known to hold. In the rest of the paper, the three modes will be called the subject mode, the predicate mode and the object mode respectively. Tensor factorization algorithms can thus be used to infer a predicted tensor X̂ that approximates X in a sense that we describe in the next subsection. Validation/test queries (i, j, ?) are answered by ordering entities k by decreasing values of X̂_{i,j,k}, whereas queries (?, j, k) are answered by ordering entities i by decreasing values of X̂_{i,j,k}.
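As a concrete illustration of this setup, here is a minimal NumPy sketch of scoring and query answering under a CP-factored predicted tensor; the function names (`cp_score`, `rank_objects`) and the factor shapes are our own choices, not from the paper.

```python
import numpy as np

def cp_score(U, V, W, i, j, k):
    # Score of the triple (i, j, k) under a CP decomposition with
    # subject factors U, predicate factors V and object factors W:
    # X_hat[i, j, k] = sum_r U[i, r] * V[j, r] * W[k, r].
    return float(np.sum(U[i] * V[j] * W[k]))

def rank_objects(U, V, W, i, j):
    # Answer the query (i, j, ?): order all entities k by decreasing
    # predicted score X_hat[i, j, k].
    scores = (U[i] * V[j]) @ W.T
    return np.argsort(-scores)
```

A real implementation would batch the queries; the ranking logic is unchanged.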

Several approaches have considered link prediction as a low-rank tensor decomposition problem. These models then differ only by structural constraints on the learned tensor. Three models of interest are:


The canonical decomposition of tensors, also called CANDECOMP/PARAFAC (Hitchcock, 1927), represents a tensor X̂ as a sum of R rank-one tensors (with ⊗ the tensor product), where u_r, w_r ∈ R^N and v_r ∈ R^P:

X̂ = Σ_{r=1}^{R} u_r ⊗ v_r ⊗ w_r.

A representation of this decomposition, and the score of a specific triple, is given in Figure 1 (a). Given X̂, the smallest R for which this decomposition holds is called the canonical rank of X̂.


In the more specific context of link prediction, it has been suggested in Bordes et al. (2011); Nickel et al. (2011) that since both the subject and object modes represent the same entities, they should share the same factors. DistMult (Yang et al., 2014) is a version of CP with this additional constraint. It represents a tensor as a sum of R rank-one tensors with shared entity factors:

X̂ = Σ_{r=1}^{R} u_r ⊗ v_r ⊗ u_r,  with u_r ∈ R^N and v_r ∈ R^P.


By contrast with the first models that proposed to share the subject and object mode factors, DistMult yields a tensor that is symmetric in the subject and object modes. The assumption that the data tensor can be properly approximated by a symmetric tensor is not satisfied in many practical cases of knowledge base completion (e.g., a triple (s, P, o) may hold while (o, P, s) does not). ComplEx (Trouillon et al., 2016) proposes an alternative in which the subject and object modes share the parameters of the factors, but are complex conjugates of each other. More precisely, this approach represents a real-valued tensor as the real part of a sum of complex-valued rank-one tensors:

X̂ = Re( Σ_{r=1}^{R} u_r ⊗ v_r ⊗ ū_r ),  with u_r ∈ C^N and v_r ∈ C^P,

where ū_r is the complex conjugate of u_r. This decomposition can represent any real tensor (Trouillon et al., 2016).
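To make the symmetry discussion concrete, the following sketch (our own NumPy code, with illustrative shapes) scores triples under DistMult and ComplEx: DistMult is symmetric in the subject and object arguments, while the conjugation in ComplEx breaks that symmetry.

```python
import numpy as np

def distmult_score(E, R, i, j, k):
    # DistMult shares one real factor matrix E across the subject and
    # object modes, so the score is symmetric in i and k.
    return float(np.sum(E[i] * R[j] * E[k]))

def complex_score(E, R, i, j, k):
    # ComplEx also shares E across the two entity modes, but conjugates
    # the object embedding, which breaks the symmetry.
    return float(np.real(np.sum(E[i] * R[j] * np.conj(E[k]))))
```

With complex embeddings, `complex_score(E, R, i, j, k)` and `complex_score(E, R, k, j, i)` generally differ, which is what allows ComplEx to fit non-symmetric predicates.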

The good performances of DistMult on notoriously non-symmetric datasets such as FB15K or WN18 are surprising. First, let us note that for the symmetry to become an issue, one would have to evaluate queries (i, j, ?) while also trying to answer correctly queries of the form (?, j, i) for a non-symmetric predicate j. The rankings for these two queries would be identical, and thus, we can expect issues with relation pairs such as hypernym and hyponym. In FB15K, those types of problematic queries make up only a small fraction of the test set and thus have a small impact. On WN18, however, they make up a large share of the test set. We describe in Appendix 8.1 a simple strategy for DistMult to obtain a high filtered MRR on the hierarchical predicates of WN18 despite its symmetry assumption.

2.3 Training

Previous work suggested ranking losses (Bordes et al., 2013), binary logistic regression (Trouillon et al., 2016) or a sampled multiclass log-loss (Kadlec et al., 2017). Motivated by the solid results in Joulin et al. (2017), by our own experimental results, and by a satisfactory speed of about two minutes per epoch on FB15K, we decided to use the full multiclass log-loss.

Given a training triple (i, j, k) and a predicted tensor X̂, the instantaneous multiclass log-loss is the sum of two partial losses, one over the object fiber and one over the subject fiber:

ℓ_{i,j,k}(X̂) = [ -X̂_{i,j,k} + log( Σ_{k'} exp(X̂_{i,j,k'}) ) ] + [ -X̂_{i,j,k} + log( Σ_{i'} exp(X̂_{i',j,k}) ) ].   (1)

These two partial losses are represented in Figure 1 (b). For CP, the final tensor is computed by finding a minimizer of a regularized empirical risk formulation, in which the factors are weighted in a data-dependent manner that we describe below:

min_{U,V,W} Σ_{(i,j,k) ∈ S} ℓ_{i,j,k}(X̂(U, V, W)) + λ ( ‖u_i‖_2^2 + ‖v_j‖_2^2 + ‖w_k‖_2^2 ),

where X̂(U, V, W) denotes the CP tensor with factors U, V, W, and S is the training set. For DistMult and ComplEx, the learning objective is similar, up to the appropriate parameter sharing and computation of the tensor.
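The instantaneous loss above can be transcribed directly; the sketch below (our own NumPy code) works on a dense predicted tensor for clarity, whereas a practical implementation would work on factors and minibatches.

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log(sum(exp(x))).
    m = np.max(x)
    return float(m + np.log(np.sum(np.exp(x - m))))

def multiclass_log_loss(X_hat, i, j, k):
    # Sum of the two fiber losses of equation (1): predict the object
    # among all entities, and the subject among all entities.
    object_loss = -X_hat[i, j, k] + logsumexp(X_hat[i, j, :])
    subject_loss = -X_hat[i, j, k] + logsumexp(X_hat[:, j, k])
    return object_loss + subject_loss
```

Each partial loss is non-negative (the log-sum-exp dominates the target score), and the total decreases as the score of the true triple grows relative to its fibers.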

As discussed in Section 3.2, these data-dependent weights may improve performances when some rows/columns are sampled more often than others. They appear naturally in optimization with stochastic gradient descent when the regularizer is applied only to the parameters that are involved in the computation of the instantaneous loss. For instance, in the case of the logistic loss with negative sampling used by Trouillon et al. (2016), denoting by p_i^{(m)} the marginal probability (over the training set) that index i appears in mode m of a data triple, these weights are increasing functions of the marginal probabilities, with a form that depends on the negative sampling scheme.

We focus on redefining the loss (1) and the regularizer (2.3).

3 Related Work

We discuss here in more detail the work that has been done on link prediction in relational data and on regularizers for tensor completion.

3.1 Link Prediction in Relational Data

There has been extensive research on link prediction in relational data, especially in knowledge bases, and we review here only the prior work that is most relevant to this paper. While some approaches explicitly use the graph structure during inference (Lao et al., 2011), we focus here on representation learning and tensor factorization methods, which are the state-of-the-art on the benchmark datasets we use. We also restrict the discussion to approaches that only use relational information, even though some approaches have been proposed to leverage additional types (Krompass et al., 2015; Ma et al., 2017) or external word embeddings (Toutanova & Chen, 2015).

We can divide the first type of approaches into two broad categories. First, two-way approaches score a triple depending only on bigram interaction terms of the form subject-object, subject-predicate, and predicate-object. Even though they are tensor approximation algorithms of limited expressivity, two-way models based on translations, such as TransE (Bordes et al., 2013), or on bag-of-word representations (Joulin et al., 2017) have proved competitive on many benchmarks. Yet, methods using three-way multiplicative interactions, as described in the previous section, show the strongest performances (Bordes et al., 2011; Garcia-Duran et al., 2016; Nickel et al., 2016b; Trouillon et al., 2016). Compared to general-purpose tensor factorization methods such as CP, a common feature of these approaches is to share parameters between the object and subject modes (Nickel et al., 2011), an idea that has been widely accepted except for the two-way model of Joulin et al. (2017). DistMult (Yang et al., 2014) is the extreme case of this parameter sharing, in which the predicted tensor is symmetric in the subject and object modes.

3.2 Regularization for Matrix Completion

Norm-based regularization has been extensively studied in the context of matrix completion. The trace norm (or nuclear norm) has been proposed as a convex relaxation of the rank (Srebro et al., 2005) for matrix completion in the setting of rating prediction, with strong theoretical guarantees (Candès & Recht, 2009). While efficient algorithms to solve the convex problems have been proposed (see e.g. Cai et al., 2010; Jaggi et al., 2010), the practice is still to use the matrix equivalent of the nonconvex formulation (2.3). For the trace norm (nuclear 2-norm), in the matrix case, the regularizer simply becomes the squared ℓ2-norms of the factors and lends itself to alternating methods or SGD optimization (Rennie & Srebro, 2005; Koren et al., 2009). When the samples are not taken uniformly at random from a matrix, some other norms are preferable to the usual nuclear norm. The weighted trace norm reweights elements of the factors based on the marginal row and column sampling probabilities, which can improve sample complexity bounds when sampling is non-uniform (Foygel et al., 2011; Negahban & Wainwright, 2012). Direct SGD implementations on the nonconvex formulation implicitly take this reweighting rule into account and were used by the winners of the Netflix challenge (see Srebro & Salakhutdinov, 2010, Section 5).
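The variational form of the matrix trace norm mentioned above can be checked numerically: factoring X through its SVD with balanced factors attains (‖U‖_F² + ‖V‖_F²)/2 = ‖X‖_*. A small NumPy sketch (helper names are ours):

```python
import numpy as np

def trace_norm(X):
    # Trace norm (nuclear 2-norm): the sum of singular values of X.
    return float(np.sum(np.linalg.svd(X, compute_uv=False)))

def balanced_factors(X):
    # Balanced factorization X = U @ V.T built from the SVD; it attains
    # the variational form: (||U||_F^2 + ||V||_F^2) / 2 = trace_norm(X).
    A, s, Bt = np.linalg.svd(X, full_matrices=False)
    S = np.diag(np.sqrt(s))
    return A @ S, Bt.T @ S
```

Splitting the singular values evenly between the two factors is exactly the balancing that the minimizers of the variational form satisfy.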

3.3 Tensor Completion and Decompositions

There is a large body of literature on low-rank tensor decompositions (see Kolda & Bader, 2009, for a comprehensive review). Closely related to our work is the canonical decomposition of tensors (also called CANDECOMP/PARAFAC or CP) (Hitchcock, 1927), which solves a problem similar to (14) without the regularization (i.e., with λ = 0), and usually with the square loss.

Several norm-based regularizations for tensors have been proposed. Some are based on unfolding a tensor along each of its modes to obtain matricizations, and either regularize by the sum of trace norms of the matricizations (Tomioka et al., 2010) or write the original tensor as a sum of tensors, regularizing the k-th matricization of the k-th summand with the trace norm (Wimalawarne et al., 2014). However, in the large-scale setting, even rank-1 approximations of matricizations involve too many parameters to be tractable.

Recently, the tensor trace norm (nuclear 2-norm) was proposed as a regularizer for tensor completion by Yuan & Zhang (2016), and an algorithm based on the generalized conditional gradient has been developed by Cheng et al. (2016). This algorithm requires, in an inner loop, to compute a (constrained) rank-1 tensor that has the largest dot-product with the gradient of the data-fitting term (gradient w.r.t. the tensor argument). This algorithm is efficient in our setup only with the square error loss (instead of the multiclass log-loss), because the gradient is then a low-rank plus sparse tensor when the argument is low-rank. However, on large-scale knowledge bases, the state of the art is to use a binary log-loss or a multiclass log-loss (Trouillon et al., 2016; Kadlec et al., 2017); in that case, the gradient is not adequately structured, thereby causing the approach of Cheng et al. (2016) to be too computationally costly.

4 Nuclear p-Norm Regularization

As discussed in Section 3, norm-based regularizers have proved useful for matrices. We aim to reproduce these successes with tensor norms. We use the nuclear p-norms defined by Friedland & Lim (2018). As shown in Equation (2.3), the community has so far favored a regularizer based on the squared Frobenius norms of the factors (Yang et al., 2014; Trouillon et al., 2016). We first show that the unweighted version of this regularizer is not a tensor norm. Then, we propose a variational form of the nuclear p-norm to replace the usual regularization at no additional computational cost when used with SGD. Finally, we discuss a weighting scheme analogous to the weighted trace norm proposed in Srebro & Salakhutdinov (2010).

4.1 From Matrix Trace-Norm to Tensor Nuclear Norms

To simplify notation, let us introduce the set of CP decompositions of a tensor X of rank at most R:

C(X; R) = { (U, V, W) : X = Σ_{r=1}^{R} u_r ⊗ v_r ⊗ w_r }.

We will study the family of regularizers

Ω_p(U, V, W) = (1/3) Σ_{r=1}^{R} ( ‖u_r‖_p^p + ‖v_r‖_p^p + ‖w_r‖_p^p ).

Note that with p = 2, we recover the familiar squared Frobenius norm regularizer used in (2.3). Similar to showing that the squared Frobenius norm is a variational form of the trace norm on matrices (i.e., its minimizers over the factorizations of a fixed matrix realize the trace norm), we start with a technical lemma that links our regularizer to a function of the spectrum of our decompositions.

Lemma 1. For any tensor X and rank R such that C(X; R) is non-empty,

min_{(U,V,W) ∈ C(X;R)} Ω_p(U, V, W) = min_{(U,V,W) ∈ C(X;R)} Σ_{r=1}^{R} ( ‖u_r‖_p ‖v_r‖_p ‖w_r‖_p )^{p/3}.

Moreover, the minimizers of the left-hand side satisfy ‖u_r‖_p = ‖v_r‖_p = ‖w_r‖_p for all r.

Proof. See Appendix 8.2. ∎

This lemma motivates the introduction of the set of p-norm normalized tensor decompositions:

C̄_p(X; R) = { (σ, U, V, W) : X = Σ_{r=1}^{R} σ_r u_r ⊗ v_r ⊗ w_r, σ_r ≥ 0, ‖u_r‖_p = ‖v_r‖_p = ‖w_r‖_p = 1 }.

Lemma 1 shows that Ω_p behaves as an ℓ_{p/3} penalty over the CP spectrum σ for tensors of order 3. We recover the nuclear norm for matrices when p = 2, since the analogous penalty for order-2 tensors is an ℓ_{p/2} penalty over the spectrum.

Using Lemma 1, we have:

min_{(U,V,W) ∈ C(X;R)} (1/3) Σ_{r=1}^{R} ( ‖u_r‖_2^2 + ‖v_r‖_2^2 + ‖w_r‖_2^2 ) = min_{(σ,U,V,W) ∈ C̄_2(X;R)} Σ_{r=1}^{R} σ_r^{2/3}.

We show that the sub-level sets of the term on the right are not convex, which implies that this regularizer is not the variational form of a tensor norm, and hence is not the tensor analog of the matrix trace norm.

Proposition 1.

The function over third-order tensors defined as

X ↦ min_{(σ,U,V,W) ∈ C̄_2(X;R)} Σ_{r=1}^{R} σ_r^{2/3}

is not convex.

Proof. See Appendix 8.2. ∎

Remark 1.

Cheng et al. (2016, Appendix D) already showed that regularizing with the squared Frobenius norms of the factors is not related to the trace norm for tensors of order 3 and above, but their observation is that the regularizer is not positively homogeneous, i.e., it does not scale linearly with the tensor it penalizes. Our result in Proposition 1 is stronger, in that we show that this regularizer is not a norm even after the rescaling (11) that makes it homogeneous.

The nuclear p-norm of a tensor X, for p ≥ 1, is defined in Friedland & Lim (2018) as

‖X‖_{p,*} = min { Σ_{r=1}^{R} σ_r : (σ, U, V, W) ∈ C̄_p(X; R), R ∈ N }.

Given an estimated upper bound R on the optimal rank, the original problem (2.3) can then be re-written as a non-convex problem using the equivalence in Lemma 1:

min_{U,V,W} Σ_{(i,j,k) ∈ S} ℓ_{i,j,k}(X̂(U, V, W)) + (λ/3) Σ_{r=1}^{R} ( ‖u_r‖_p^3 + ‖v_r‖_p^3 + ‖w_r‖_p^3 ).

This variational form suggests to use p = 3, as a means to make the regularizer separable in each coefficient, given that then ‖u_r‖_3^3 = Σ_i |u_{ir}|^3.
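For p = 3, the per-example penalty is thus just a sum of cubed absolute values of the involved embeddings. Below is a sketch of this N3 term for a single training triple, assuming the 1/3 scaling of the variational form (the function name and signature are ours):

```python
import numpy as np

def n3_penalty(U, V, W, i, j, k, lam):
    # Per-example N3 term: cubes of the absolute values of the only
    # embedding rows involved in scoring the triple (i, j, k),
    # scaled by lam / 3 as in the variational form.
    return lam / 3.0 * float(
        np.sum(np.abs(U[i]) ** 3)
        + np.sum(np.abs(V[j]) ** 3)
        + np.sum(np.abs(W[k]) ** 3)
    )
```

Because only the rows u_i, v_j, w_k appear, adding this term to a per-sample loss costs no more than the usual weighted Frobenius regularizer under SGD.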

4.2 Weighted Nuclear 3-Norm

Similar to the weighted trace norm for matrices, the weighted nuclear 3-norm can easily be implemented by keeping only the regularization terms corresponding to the sampled triples, as discussed in Section 3.2. This leads to a formulation of the form

min_{U,V,W} Σ_{(i,j,k) ∈ S} ℓ_{i,j,k}(X̂(U, V, W)) + (λ/3) ( ‖u_i‖_3^3 + ‖v_j‖_3^3 + ‖w_k‖_3^3 ).   (15)

For an example (i, j, k), only the parameters involved in the computation of X̂_{i,j,k} are regularized. The computational complexity is thus the same as for the currently used weighted Frobenius norm regularizer. With p^{(1)}_i (resp. p^{(2)}_j, p^{(3)}_k) the marginal probability of sampling subject i (resp. predicate j, object k), the weighting implied by this regularization scheme reweights the penalty on each row of the factors by the corresponding marginal probability.

We justify this weighting only by analogy with the matrix case discussed by Srebro & Salakhutdinov (2010): it makes the weighted nuclear 3-norm of the all-ones tensor independent of its dimensions under uniform sampling (since the nuclear p-norm of the all-ones N×N×N tensor grows as N^{3/p}).

Comparatively, for the weighted version of the nuclear 2-norm analyzed in Yuan & Zhang (2016), the nuclear 2-norm of the all-ones tensor scales as N^{3/2}. This would imply a formulation of the form (16), in which the reweighting is applied globally to the factors rather than per sample.

Contrary to formulation (15), the optimization of formulation (16) with minibatch SGD leads to an update of every coefficient of the factors for each minibatch considered. Depending on the implementation and the size of the factors, there can be a large difference in speed between the updates of the two weighted nuclear norms. In our implementation, this difference for CP is substantial on FB15K, in favor of the nuclear 3-norm.

5 A New CP Objective

Since our evaluation objective is to rank either the left-hand side or the right-hand side of the predicates in our dataset, what we are trying to achieve is to model both predicates and their reciprocals. This suggests appending to our input the reciprocal of each predicate, thus factorizing the tensor [X; X̃] rather than X, where [·;·] denotes mode-2 concatenation and X̃_{i,j,k} = X_{k,j,i} is the reciprocal half of the tensor. After that, we only need to model the object fibers of this new tensor. We represent this transformation in Figure 1 (c). This reformulation has an important side effect: it makes our algorithm invariant to the arbitrary choice of including a predicate or its reciprocal in the dataset. This property was introduced as "Semantic Invariance" in Bailly et al. (2015). Another way of achieving this invariance would be to find the flipping of predicates that leads to the smallest model. In the case of a CP decomposition, we would try to find the flipping that leads to the lowest tensor rank. This seems hopeless, given the NP-hardness of computing the tensor rank.
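The data augmentation itself is a one-liner per triple. A sketch, assuming predicates are integer ids and a reciprocal receives the id shifted by the number of predicates (the naming conventions are ours):

```python
def add_reciprocals(triples, n_predicates):
    # For each training triple (s, p, o), add the reciprocal triple
    # (o, p + n_predicates, s): predicate ids in
    # [n_predicates, 2 * n_predicates) denote reciprocals.
    out = list(triples)
    for s, p, o in triples:
        out.append((o, p + n_predicates, s))
    return out
```

The augmented training set is exactly twice as large, and left-hand-side queries on predicate p become right-hand-side queries on predicate p + n_predicates.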

More precisely, the instantaneous loss of a training triple (i, j, k) becomes:

ℓ_{i,j,k}(X̂) = [ -X̂_{i,j,k} + log( Σ_{k'} exp(X̂_{i,j,k'}) ) ] + [ -X̂_{k,j+P,i} + log( Σ_{i'} exp(X̂_{k,j+P,i'}) ) ].   (17)

At test time, we use the original predicates to rank possible right-hand sides for queries (i, j, ?), and the reciprocal predicates to rank possible left-hand sides for queries (?, j, k).

Using CP to factor the tensor described in (17), we beat the previous state of the art on many benchmarks, as shown in Table 2. This reformulation seems to help even the ComplEx decomposition, for which parameters are already shared between the entity embeddings of the first and third modes.

Dataset N P Train Valid Test
WN18 k k k k
WN18RR k k k k
FB15K k 1k k k k
FB15K-237 k k k k
YAGO3-10 k M k k
Table 1: Dataset statistics.

6 Experiments

We conducted all experiments on a Quadro GP100 GPU. The code is available at

Model WN18 WN18RR FB15K FB15K-237 YAGO3-10
MRR H@10 MRR H@10 MRR H@10 MRR H@10 MRR H@10


CP - - - - - -
Best Published




Table 2: Some results are taken as the best of Dettmers et al. (2017) and Kadlec et al. (2017), others as the best of Dettmers et al. (2017) and Trouillon et al. (2016). We give the origin of each result of the Best Published row in the appendix.

6.1 Datasets and Experimental Setup

WN18 and FB15K are popular benchmarks in the Knowledge Base Completion community. The former comes from the WordNet database, was introduced in Bordes et al. (2014), and describes relations between words; the most frequent types of relations are highly hierarchical (e.g., hypernym, hyponym). The latter is a subsampling of Freebase, introduced in Bordes et al. (2013). It contains predicates with different characteristics (ranging from one-to-one relations such as capital_of to many-to-many relations such as actor_in_film).

Toutanova & Chen (2015) and Dettmers et al. (2017) identified train-to-test leakage in both of these datasets, in the form of test triples whose reciprocal versions are present in the train set. They thus created two modified datasets: FB15K-237 and WN18RR. These datasets are harder to fit, so we expect regularization to have more impact. Dettmers et al. (2017) also introduced the dataset YAGO3-10, which is larger in scale and does not suffer from this leakage. All dataset statistics are shown in Table 1.

In all our experiments, we distinguish two settings: Reciprocal, in which we use the loss described in equation (17), and Standard, which uses the loss in equation (1). We compare our implementations of CP and ComplEx with the best published results, then the performances in the two settings, and finally the contribution of the regularizer in the Reciprocal setting. In the Reciprocal setting, we compare the weighted nuclear 3-norm (N3) against the regularizer described in (2.3) (FRO). In preliminary experiments, the weighted nuclear 2-norm described in (16) did not perform better than N3 and was slightly slower. We used Adagrad (Duchi et al., 2011) as our optimizer rather than Adam (Kingma & Ba, 2014), which Kadlec et al. (2017) favored, because preliminary experiments with Adam did not show improvements.

We ran the same grid for all algorithms and regularizers on the FB15K, FB15K-237, WN18 and WN18RR datasets, with a large fixed rank for ComplEx and for CP. Our grid consisted of two learning rates, two batch-sizes, and a range of regularization coefficients. On YAGO3-10, we limited our models to a smaller rank and used larger batch-sizes; the rest of the grid was identical. We used the train/valid/test splits provided with these datasets and measured the filtered Mean Reciprocal Rank (MRR) and Hits@10 (Bordes et al., 2013). We used the filtered MRR on the validation set for early stopping and report the corresponding test metrics. In this setting, an epoch for ComplEx on FB15K takes a few minutes, depending on the batch-size. We trained long enough to ensure convergence; reported performances were typically reached early in training.
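For reference, the filtered ranking metric used here removes the other known true answers before computing the rank of the target entity. A minimal sketch for one query (the function name is ours; real implementations vectorize this over the whole test set):

```python
def filtered_reciprocal_rank(scores, target, other_true):
    # Filtered rank of the target entity for one query: entities that
    # are known true answers (other than the target) are ignored, and
    # the reciprocal rank 1 / rank is returned.
    rank = 1
    for e, s in enumerate(scores):
        if e != target and e not in other_true and s > scores[target]:
            rank += 1
    return 1.0 / rank
```

The filtered MRR is the average of this quantity over all test queries.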

All our results are reported in Table 2 and will be discussed in the next subsections. Besides our implementations of CP and ComplEx, we include the results of ConvE and DistMult in the baselines: the former because Dettmers et al. (2017) includes performances on the WN18RR and YAGO3-10 benchmarks, the latter because of its good performances on FB15K and the extensive experiments on WN18 and FB15K reported in Kadlec et al. (2017). The performances of DistMult on FB15K-237, WN18RR and YAGO3-10 may be slightly underestimated, since our baseline CP results are better. To avoid clutter, we did not include in our table of results algorithms that make use of external data such as types (Krompass et al., 2015), external word embeddings (Toutanova & Chen, 2015), or path queries used as regularizers (Guu et al., 2015). The published results corresponding to these methods are subsumed in the "Best Published" line of Table 2, which is taken, for every single metric and dataset, as the best published result we were able to find.

6.2 Reimplementation of the Baselines

The performances of our reimplementations of CP and ComplEx appear in the middle rows of Table 2 (Standard setting). We only kept the results for the nuclear 3-norm, which did not seem to differ from those with the Frobenius norm. Our results are slightly better than their published counterparts in filtered MRR on FB15K, for both CP and ComplEx. This might be explained in part by the fact that in the Standard setting (2.3) we use a multiclass log-loss, whereas Trouillon et al. (2016) used the binomial negative log-likelihood. Another reason for this increase can be the larger rank that we chose compared to previously published results; the more extensive search for optimization/regularization parameters and the use of the nuclear 3-norm instead of the usual regularization are also most likely part of the explanation.

6.3 Standard vs Reciprocal

In this section, we compare the effect of reformulation (17), that is, the middle and bottom rows of Table 2. The largest differences are obtained for CP, which becomes a state-of-the-art contender on WN18 and FB15K once reciprocals are modeled. For ComplEx, we notice a weaker but consistent improvement from our reformulation, with the biggest improvements observed on FB15K and YAGO3-10. Following the analysis in Bordes et al. (2013), we show in Table 3 the average filtered MRR as a function of the degrees of the predicates. We compute the average in- and out-degrees on the training set, and separate the predicates into four categories, 1-1, 1-m, m-1 and m-m, with a cut-off on the average degree. We include reciprocal predicates in these statistics. That is, a predicate with a low average in-degree and a high average out-degree will count as a 1-m when we predict its right-hand side, and as an m-1 when we predict its left-hand side. Most of our improvements come from the 1-m and m-m categories, for both ComplEx and CP.

1-1 m-1 1-m m-m
CP Standard
CP Reciprocal
ComplEx Standard
ComplEx Reciprocal
Table 3: Average MRR per relation type on FB15K.
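The categorization of predicates by average degree can be sketched as follows (our own helper; we use the conventional 1.5 cut-off of Bordes et al. (2013) as an assumed default):

```python
def relation_category(heads_per_tail, tails_per_head, cutoff=1.5):
    # Classify a predicate as 1-1, 1-m, m-1 or m-m from its average
    # number of subjects per object (heads_per_tail) and objects per
    # subject (tails_per_head), with a cut-off on the average degree.
    left = "1" if heads_per_tail <= cutoff else "m"
    right = "1" if tails_per_head <= cutoff else "m"
    return left + "-" + right
```

A reciprocal predicate simply swaps the two degree statistics, which is why a 1-m predicate counts as m-1 when predicting its left-hand side.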

6.4 Frobenius vs Nuclear

We now focus on the effect of our norm-based N3 regularizer, compared to the Frobenius norm regularizer favored by the community. Comparing the four last rows of Table 2, we notice a small but consistent performance gain across datasets. The biggest improvements appear on the harder datasets: WN18RR, FB15K-237 and YAGO3-10. We checked the significance of that gain on WN18RR with a signed-rank test on the rank pairs for CP.

6.5 Effect of Optimization Parameters

During these experiments, we noticed a heavy influence of the optimization hyper-parameters on the final results. This influence can account for several points of filtered MRR and is illustrated in Figure 2.

7 Conclusion and Discussion

The main contribution of this paper is to isolate and systematically explore the effect of different factors for large-scale knowledge base completion. While the impact of optimization parameters was well known already, neither the effect of the formulation (adding reciprocals doubles the mean reciprocal rank on FB15K for CP) nor the impact of the regularization was properly assessed. The conclusion is that the CP model performs nearly as well as the competitors when each model is evaluated in its optimal configuration. We believe this observation is important to assess and prioritize directions for further research on the topic.

In addition, our proposal to use nuclear p-norms as regularizers for tensor factorization in general is of independent interest.

The results we present leave several questions open. Notably, whereas we give definite evidence that CP itself can perform extremely well on these datasets as long as the problem is formulated correctly, we do not have a strong theoretical justification as to why the differences in performances are so significant.

Figure 2: Effect of the batch-size on FB15K in the Standard (top) and Reciprocal (bottom) settings, all other parameters being equal. The difference is large even after many epochs, and the effect is inverted in the two settings, making it hard to choose the batch-size a priori.


The authors thank Armand Joulin and Maximilian Nickel for valuable discussions.


  • Bailly et al. (2015) Bailly, R., Bordes, A., and Usunier, N. Semantically Invariant Tensor Factorization. 2015.
  • Bordes et al. (2011) Bordes, A., Weston, J., Collobert, R., and Bengio, Y. Learning structured embeddings of knowledge bases. In Conference on Artificial Intelligence, 2011.
  • Bordes et al. (2013) Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 2787–2795. Curran Associates, Inc., 2013.
  • Bordes et al. (2014) Bordes, A., Glorot, X., Weston, J., and Bengio, Y. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259, 2014.
  • Cai et al. (2010) Cai, J.-F., Candès, E. J., and Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
  • Candès & Recht (2009) Candès, E. J. and Recht, B. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717, 2009.
  • Cheng et al. (2016) Cheng, H., Yu, Y., Zhang, X., Xing, E., and Schuurmans, D. Scalable and sound low-rank tensor learning. In Artificial Intelligence and Statistics, pp. 1114–1123, 2016.
  • Dettmers et al. (2017) Dettmers, T., Minervini, P., Stenetorp, P., and Riedel, S. Convolutional 2d knowledge graph embeddings. arXiv preprint arXiv:1707.01476, 2017.
  • Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
  • Foygel et al. (2011) Foygel, R., Shamir, O., Srebro, N., and Salakhutdinov, R. R. Learning with the weighted trace-norm under arbitrary sampling distributions. In Advances in Neural Information Processing Systems, pp. 2133–2141, 2011.
  • Friedland & Lim (2018) Friedland, S. and Lim, L.-H. Nuclear norm of higher-order tensors. Mathematics of Computation, 87(311):1255–1281, 2018.
  • Garcia-Duran et al. (2016) Garcia-Duran, A., Bordes, A., Usunier, N., and Grandvalet, Y. Combining two and three-way embedding models for link prediction in knowledge bases. Journal of Artificial Intelligence Research, 55:715–742, 2016.
  • Guu et al. (2015) Guu, K., Miller, J., and Liang, P. Traversing knowledge graphs in vector space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 318–327, 2015.
  • Hitchcock (1927) Hitchcock, F. L. The expression of a tensor or a polyadic as a sum of products. Studies in Applied Mathematics, 6(1-4):164–189, 1927.
  • Jaggi et al. (2010) Jaggi, M., Sulovský, M., et al. A simple algorithm for nuclear norm regularized problems. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 471–478, 2010.
  • Joulin et al. (2017) Joulin, A., Grave, E., Bojanowski, P., Nickel, M., and Mikolov, T. Fast linear model for knowledge graph embeddings. arXiv preprint arXiv:1710.10881, 2017.
  • Kadlec et al. (2017) Kadlec, R., Bajgar, O., and Kleindienst, J. Knowledge Base Completion: Baselines Strike Back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 69–74, 2017.
  • Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kolda & Bader (2009) Kolda, T. G. and Bader, B. W. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009.
  • Koren et al. (2009) Koren, Y., Bell, R., and Volinsky, C. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
  • Krompass et al. (2015) Krompass, D., Baier, S., and Tresp, V. Type-constrained representation learning in knowledge graphs. In International Semantic Web Conference, pp. 640–655. Springer, 2015.
  • Lao et al. (2011) Lao, N., Mitchell, T., and Cohen, W. W. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 529–539. Association for Computational Linguistics, 2011.
  • Ma et al. (2017) Ma, S., Ding, J., Jia, W., Wang, K., and Guo, M. TransT: type-based multiple embedding representations for knowledge graph completion. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 717–733. Springer, 2017.
  • Negahban & Wainwright (2012) Negahban, S. and Wainwright, M. J. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13(May):1665–1697, 2012.
  • Nguyen (2017) Nguyen, D. Q. An overview of embedding models of entities and relationships for knowledge base completion. arXiv preprint arXiv:1703.08098, 2017.
  • Nickel et al. (2011) Nickel, M., Tresp, V., and Kriegel, H.-P. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 809–816, 2011.
  • Nickel et al. (2016a) Nickel, M., Murphy, K., Tresp, V., and Gabrilovich, E. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE, 104(1):11–33, 2016a.
  • Nickel et al. (2016b) Nickel, M., Rosasco, L., Poggio, T. A., et al. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016b.
  • Rennie & Srebro (2005) Rennie, J. D. and Srebro, N. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd international conference on Machine learning, pp. 713–719. ACM, 2005.
  • Shen et al. (2016) Shen, Y., Huang, P.-S., Chang, M.-W., and Gao, J. Implicit ReasoNets: modeling large-scale structured relationships with shared memory. 2016.
  • Srebro & Salakhutdinov (2010) Srebro, N. and Salakhutdinov, R. R. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, pp. 2056–2064, 2010.
  • Srebro et al. (2005) Srebro, N., Rennie, J., and Jaakkola, T. S. Maximum-margin matrix factorization. In Advances in neural information processing systems, pp. 1329–1336, 2005.
  • Tomioka et al. (2010) Tomioka, R., Hayashi, K., and Kashima, H. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
  • Toutanova & Chen (2015) Toutanova, K. and Chen, D. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57–66, 2015.
  • Trouillon et al. (2016) Trouillon, T., Welbl, J., Riedel, S., Gaussier, E., and Bouchard, G. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pp. 2071–2080, 2016.
  • Wimalawarne et al. (2014) Wimalawarne, K., Sugiyama, M., and Tomioka, R. Multitask learning meets tensor factorization: task imputation via convex optimization. In Advances in Neural Information Processing Systems, pp. 2825–2833, 2014.
  • Yang et al. (2014) Yang, B., Yih, W.-t., He, X., Gao, J., and Deng, L. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
  • Yuan & Zhang (2016) Yuan, M. and Zhang, C.-H. On tensor completion via nuclear norm minimization. Foundations of Computational Mathematics, 16(4):1031–1068, 2016.

8 Appendix

8.1 DistMult on hierarchical predicates

Suppose we are trying to embed a single hierarchical predicate whose graph is a complete n-ary tree of depth d. This tree has n^d leaves, one root and (n^d - 1)/(n - 1) - 1 internal nodes. We set aside any other modeling issues and focus on the symmetry assumption in DistMult.

Leaves and the root only appear on one side of the queries and hence are unaffected by the symmetry assumption. We now focus on an internal node e, with n children c_1, ..., c_n and one ancestor a. Since DistMult scores are symmetric, the queries (e, r, ?) and (?, r, e) induce the same ranking over candidate entities. Assuming n > 1, the MRR associated with this node will be higher if this shared ranking is (c_1, ..., c_n, a). Indeed, the filtered rank of the n queries (e, r, c_i) will then be 1, while the filtered rank of the query (?, r, e), whose answer is a, will be n + 1.

Counting the number of queries at each filtered rank, we see that those of rank 1 far outweigh those of rank n + 1 in the final filtered MRR. For each of the internal nodes, n queries lead to a rank of 1 and only one to a rank of n + 1. The root contributes n queries with a rank of 1, and the leaves n^d queries with a rank of 1.

Writing q_1 for the number of queries with filtered rank 1 and q_2 for the number with filtered rank n + 1, our final filtered MRR is:

    MRR = (q_1 + q_2 / (n + 1)) / (q_1 + q_2),

which is close to 1 whenever q_1 is much larger than q_2.
Hence for big hierarchies such as hyponym or hypernym in WN, we expect the filtered MRR of DistMult to be high even though its modeling assumptions are incorrect.
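To make the counting argument of this section concrete, the resulting filtered MRR can be computed directly from the query counts. The sketch below uses our own simplified accounting: a complete n-ary tree of depth d, where each non-root internal node contributes n queries of filtered rank 1 and one query of rank n + 1, and all root and leaf queries have rank 1:

```python
def expected_filtered_mrr(n, d):
    """Filtered MRR of a symmetric model on a complete n-ary tree of depth d,
    under the simplified accounting described above (illustrative, not the
    paper's exact derivation)."""
    internal = (n**d - 1) // (n - 1) - 1   # non-root internal nodes
    leaves = n**d
    rank_one = internal * n + n + leaves   # child, root and leaf queries
    rank_n_plus_one = internal             # one "ancestor" query per internal node
    total = rank_one + rank_n_plus_one
    return (rank_one + rank_n_plus_one / (n + 1)) / total

# Even for a modest hierarchy, the MRR stays high despite the wrong
# symmetry assumption.
mrr = expected_filtered_mrr(n=10, d=3)
```

With n = 10 and d = 3, the rank-1 queries outnumber the rank-11 ones by roughly twenty to one, so the filtered MRR lands above 0.95.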

8.2 Proofs

Lemma 2.

For any family of vectors (u_r, v_r, w_r)_r, the tensor Σ_r u_r ⊗ v_r ⊗ w_r is unchanged by the rescalings (λ_r u_r, μ_r v_r, ν_r w_r) with λ_r μ_r ν_r = 1, and

    min over {λ_r μ_r ν_r = 1} of (1/3) Σ_r (‖λ_r u_r‖³ + ‖μ_r v_r‖³ + ‖ν_r w_r‖³) = Σ_r ‖u_r‖ ‖v_r‖ ‖w_r‖.

Moreover, the minimizers of the left-hand side satisfy:

    ‖λ_r u_r‖ = ‖μ_r v_r‖ = ‖ν_r w_r‖ for all r.

Proof. First, we characterize the minima. The problem is separable across r:

    min over {λ_r μ_r ν_r = 1} of (1/3) Σ_r (λ_r³ ‖u_r‖³ + μ_r³ ‖v_r‖³ + ν_r³ ‖w_r‖³) = Σ_r min over {λμν = 1} of (1/3) (λ³ ‖u_r‖³ + μ³ ‖v_r‖³ + ν³ ‖w_r‖³).

We study a single summand, for a fixed r:

    min over {λμν = 1} of (1/3) (λ³ ‖u_r‖³ + μ³ ‖v_r‖³ + ν³ ‖w_r‖³).

Using constrained optimization techniques (or the arithmetic-geometric mean inequality), we obtain that this minimum is attained for:

    λ ‖u_r‖ = μ ‖v_r‖ = ν ‖w_r‖ = (‖u_r‖ ‖v_r‖ ‖w_r‖)^(1/3),

and has value ‖u_r‖ ‖v_r‖ ‖w_r‖, which completes the proof. ∎
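The key step here is the AM-GM inequality: for non-negative a, b, c, (a³ + b³ + c³)/3 ≥ abc, with equality iff a = b = c, and rescaling the three factors of a rank-one term (which leaves their tensor product unchanged) always lets one reach the equality case. A quick numerical check of this balancing property (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal(5) for _ in range(3))
a, b, c = np.linalg.norm(u), np.linalg.norm(v), np.linalg.norm(w)

# Rescaling (lam*u, mu*v, nu*w) with lam*mu*nu = 1 leaves the rank-one
# tensor unchanged but changes the cubic penalty
# (1/3)(||lam*u||^3 + ||mu*v||^3 + ||nu*w||^3).
# The balanced choice equalizes the three rescaled norms:
g = (a * b * c) ** (1.0 / 3.0)
lam, mu, nu = g / a, g / b, g / c            # lam * mu * nu == 1
balanced = ((lam * a) ** 3 + (mu * b) ** 3 + (nu * c) ** 3) / 3.0

# By AM-GM, no other admissible rescaling (l, m, 1/(l*m)) does better:
no_better = all(
    ((l * a) ** 3 + (m * b) ** 3 + (c / (l * m)) ** 3) / 3.0 >= balanced - 1e-9
    for l, m in rng.uniform(0.2, 5.0, size=(1000, 2))
)
```

The balanced value equals the product of the three norms, which is exactly the right-hand side of the lemma.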

Proposition 2.

The weighted nuclear 3-norm, viewed as a function over third-order tensors, is not convex.

Proof. We first study elements of R^(n×n×1), tensors of order 3 identified with matrices of size n×n.

Let M = (A + B)/2 be the mean of two such matrices A and B. Identifying M with a tensor and taking a decomposition Σ_r u_r ⊗ v_r ⊗ w_r that achieves the value of the function at M, the matrix M can be written as M = Σ_r u_r v_r^T. This comes from the fact that each w_r is a normalized vector of dimension 1, so its only entry is equal to 1. Writing σ for the vector of singular values of M, bounding the corresponding traces by Cauchy-Schwarz lower-bounds the value of the function at M by a norm of σ, and this bound coincides with the average of the values at A and B only for σ with at most one non-zero coordinate. Since A and B can be chosen so that M is of rank 2, σ has at least 2 non-zero coordinates, hence the value at M is strictly larger than the average of the values at A and B, which contradicts convexity. This proof can naturally be extended to tensors of any size. ∎

8.3 Best Published results

We report in Table 4 the references for each of the results in Table 2 in the article.

Dataset Metric Result Reference
WN18 MRR Trouillon et al. (2016)
H@10 Ma et al. (2017)
WN18RR MRR Dettmers et al. (2017)
H@10 Dettmers et al. (2017)
FB15K MRR Kadlec et al. (2017)
H@10 Shen et al. (2016)
FB15K-237 MRR Dettmers et al. (2017)
H@10 Dettmers et al. (2017)
YAGO3-10 MRR Dettmers et al. (2017)
H@10 Dettmers et al. (2017)
Table 4: References for the Best Published row in Table 2