1 Introduction
Knowledge graphs are collections of factual triplets, where each triplet $(h, r, t)$ represents a relation $r$ between a head entity $h$ and a tail entity $t$. Examples of real-world knowledge graphs include Freebase (Bollacker et al., 2008), Yago (Suchanek et al., 2007), and WordNet (Miller, 1995). Knowledge graphs are potentially useful to a variety of applications such as question answering (Hao et al., 2017), information retrieval (Xiong et al., 2017), recommender systems (Zhang et al., 2016), and natural language processing (Yang & Mitchell, 2017). Research on knowledge graphs is attracting growing interest in both academia and industry.
Since knowledge graphs are usually incomplete, a fundamental problem for knowledge graphs is predicting the missing links. Recently, extensive studies have been done on learning low-dimensional representations of entities and relations for missing link prediction (a.k.a. knowledge graph embedding) (Bordes et al., 2013; Trouillon et al., 2016; Dettmers et al., 2017). These methods have been shown to be scalable and effective. The general intuition behind them is to model and infer the connectivity patterns in knowledge graphs according to the observed knowledge facts. For example, some relations are symmetric (e.g., marriage) while others are antisymmetric (e.g., filiation); some relations are the inverse of other relations (e.g., hypernym and hyponym); and some relations may be composed of others (e.g., my mother's husband is my father). It is critical to find ways to model and infer these patterns, i.e., symmetry/antisymmetry, inversion, and composition, from the observed facts in order to predict missing links.
Indeed, many existing approaches have tried to implicitly or explicitly model one or a few of the above relation patterns (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015b; Yang et al., 2014; Trouillon et al., 2016). For example, the TransE model (Bordes et al., 2013), which represents relations as translations, aims to model the inversion and composition patterns; the DistMult model (Yang et al., 2014), which models the three-way interactions between head entities, relations, and tail entities, aims to model the symmetry pattern. However, none of the existing models is capable of modeling and inferring all of the above patterns. Therefore, we are looking for an approach that is able to model and infer all three types of relation patterns.
In this paper, we propose such an approach called RotatE for knowledge graph embedding. Our motivation comes from Euler's identity $e^{i\theta} = \cos\theta + i\sin\theta$, which indicates that a unitary complex number can be regarded as a rotation in the complex plane. Specifically, the RotatE model maps the entities and relations to the complex vector space and defines each relation as a rotation from the source entity to the target entity. Given a triplet $(h, r, t)$, we expect that $\mathbf{t} = \mathbf{h} \circ \mathbf{r}$, where $\mathbf{h}, \mathbf{r}, \mathbf{t} \in \mathbb{C}^k$ are the embeddings, the modulus $|r_i| = 1$, and $\circ$ denotes the Hadamard (element-wise) product. Specifically, for each dimension in the complex space, we expect that:
$t_i = h_i r_i$, where $h_i, t_i, r_i \in \mathbb{C}$ and $|r_i| = 1$.  (1)
It turns out that such a simple operation can effectively model all three relation patterns: symmetry/antisymmetry, inversion, and composition. For example, a relation $r$ is symmetric if and only if each element of its embedding $\mathbf{r}$, i.e., $r_i$, satisfies $r_i = \pm 1$; two relations $r_1$ and $r_2$ are inverse if and only if their embeddings are conjugates: $\mathbf{r}_2 = \bar{\mathbf{r}}_1$; a relation $r_3$ is a combination of two other relations $r_1$ and $r_2$ if and only if $\mathbf{r}_3 = \mathbf{r}_1 \circ \mathbf{r}_2$ (i.e., $\boldsymbol{\theta}_3 = \boldsymbol{\theta}_1 + \boldsymbol{\theta}_2$). Moreover, the RotatE model scales to large knowledge graphs as it remains linear in both time and memory.
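To make the rotation operation concrete, here is a minimal numerical sketch (ours, with toy embedding values; not the paper's implementation):

```python
import numpy as np

# Toy 3-dimensional complex embeddings (illustrative values, not learned).
h = np.array([1 + 1j, 2 + 0j, 0 + 1j])           # head entity
theta = np.array([np.pi / 2, np.pi, np.pi / 4])  # rotation phases of relation r
r = np.exp(1j * theta)                           # unit-modulus relation embedding

t = h * r  # RotatE expects the tail to be the rotated head: t = h o r

# Each |r_i| = 1, so the rotation preserves the modulus of every coordinate
# and only changes the phases of the entity embedding.
assert np.allclose(np.abs(r), 1.0)
assert np.allclose(np.abs(t), np.abs(h))
```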
To effectively optimize RotatE, we further propose a novel self-adversarial negative sampling technique, which generates negative samples according to the current entity and relation embeddings. The proposed technique is very general and can be applied to many existing knowledge graph embedding models. We evaluate RotatE on four large knowledge graph benchmark datasets: FB15k (Bordes et al., 2013), WN18 (Bordes et al., 2013), FB15k-237 (Toutanova & Chen, 2015), and WN18RR (Dettmers et al., 2017). Experimental results show that the RotatE model significantly outperforms existing state-of-the-art approaches. In addition, RotatE also outperforms state-of-the-art models on Countries (Bouchard et al., 2015), a benchmark explicitly designed for composition pattern inference and modeling. To the best of our knowledge, RotatE is the first model that achieves state-of-the-art performance on all of these benchmarks. The code for our paper is available online: https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding.
2 Related Work
Model  Score Function
SE (Bordes et al., 2011)  $-\|W_{r,1}\mathbf{h} - W_{r,2}\mathbf{t}\|$
TransE (Bordes et al., 2013)  $-\|\mathbf{h} + \mathbf{r} - \mathbf{t}\|$
TransX  $-\|g_{r,1}(\mathbf{h}) + \mathbf{r} - g_{r,2}(\mathbf{t})\|$
DistMult (Yang et al., 2014)  $\langle \mathbf{r}, \mathbf{h}, \mathbf{t} \rangle$
ComplEx (Trouillon et al., 2016)  $\mathrm{Re}(\langle \mathbf{r}, \mathbf{h}, \bar{\mathbf{t}} \rangle)$
HolE (Nickel et al., 2016)  $\langle \mathbf{r}, \mathbf{h} \otimes \mathbf{t} \rangle$
ConvE (Dettmers et al., 2017)  $\langle \sigma(\mathrm{vec}(\sigma([\bar{\mathbf{h}}, \bar{\mathbf{r}}] \ast \omega))W), \mathbf{t} \rangle$
RotatE  $-\|\mathbf{h} \circ \mathbf{r} - \mathbf{t}\|$  (The norm of a complex vector $\mathbf{v}$ is defined as $\|\mathbf{v}\|_1 = \sum_i |v_i|$. We use the L1-norm for all distance-based models in this paper and drop the subscript of $\|\cdot\|_1$ for brevity.)
$\sigma$ denotes an activation function and $\ast$ denotes 2D convolution; $\bar{\cdot}$ denotes the conjugate for complex vectors, and 2D reshaping for real vectors in the ConvE model. TransX represents a wide range of TransE's variants, such as TransH (Wang et al., 2014), TransR (Lin et al., 2015b), and STransE (Nguyen et al., 2016), where $g_{r,i}(\cdot)$ denotes a matrix multiplication with respect to relation $r$.
Predicting missing links with knowledge graph embedding (KGE) methods has been extensively investigated in recent years. The general methodology is to define a score function for the triplets. Formally, let $\mathcal{E}$ denote the set of entities and $\mathcal{R}$ denote the set of relations; then a knowledge graph is a collection of factual triplets $(h, r, t)$, where $h, t \in \mathcal{E}$ and $r \in \mathcal{R}$. Since entity embeddings are usually represented as vectors, the score function usually takes the form $f_r(\mathbf{h}, \mathbf{t})$, where $\mathbf{h}$ and $\mathbf{t}$ are the head and tail entity embeddings. The score function $f_r(\mathbf{h}, \mathbf{t})$ measures the salience of a candidate triplet $(h, r, t)$. The goal of the optimization is usually to score true triplets higher than corrupted false triplets $(h', r, t)$ or $(h, r, t')$. Table 1 summarizes the score functions of previous state-of-the-art methods as well as the model proposed in this paper. These models generally capture only a portion of the relation patterns. For example, TransE represents each relation as a bijection between source entities and target entities, and thus implicitly models the inversion and composition of relations, but it cannot model symmetric relations; ComplEx extends DistMult by introducing complex embeddings so as to better model asymmetric relations, but it cannot infer the composition pattern. The proposed RotatE model leverages the advantages of both.
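For concreteness, a few of the score functions above can be sketched in NumPy (toy vectors stand in for learned embeddings; this is our shorthand, not a reference implementation):

```python
import numpy as np

def score_transe(h, r, t):
    # TransE: -||h + r - t||_1, relations as translations in R^k.
    return -np.sum(np.abs(h + r - t))

def score_distmult(h, r, t):
    # DistMult: <r, h, t> = sum_i r_i * h_i * t_i (symmetric in h and t).
    return np.sum(r * h * t)

def score_complex(h, r, t):
    # ComplEx: Re(<r, h, conj(t)>) with complex-valued embeddings.
    return np.real(np.sum(r * h * np.conj(t)))

def score_rotate(h, r, t):
    # RotatE: -||h o r - t||_1 with |r_i| = 1, so r encodes rotations.
    return -np.sum(np.abs(h * r - t))

# DistMult cannot distinguish (h, r, t) from (t, r, h):
h, r, t = np.random.rand(4), np.random.rand(4), np.random.rand(4)
assert np.isclose(score_distmult(h, r, t), score_distmult(t, r, h))
```

The last assertion illustrates the symmetry limitation of DistMult that motivates complex-valued models.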
A relevant and concurrent work to our work is the TorusE (Ebisu & Ichise, 2018) model, which defines knowledge graph embedding as translations on a compact Lie group. The TorusE model can be regarded as a special case of RotatE, where the modulus of embeddings are set fixed; our RotatE is defined on the entire complex space, which has much more representation capacity. Our experiments show that this is very critical for modeling and inferring the composition patterns. Moreover, TorusE focuses on the problem of regularization in TransE while this paper focuses on modeling and inferring multiple types of relation patterns.
There are also a large body of relational approaches for modeling the relational patterns on knowledge graphs (Lao et al., 2011; Neelakantan et al., 2015; Das et al., 2016; Rocktäschel & Riedel, 2017; Yang et al., 2017). However, these approaches mainly focus on explicitly modeling the relational paths while our proposed RotatE model implicitly learns the relation patterns, which is not only much more scalable but also provides meaningful embeddings for both entities and relations.
Another related problem is how to effectively draw negative samples for training knowledge graph embeddings. This problem has been explicitly studied by Cai & Wang (2017), who proposed a generative adversarial learning framework to draw negative samples. However, such a framework requires simultaneously training the embedding model and a discrete negative sample generator, which is difficult to optimize and computationally expensive. We propose a self-adversarial sampling scheme that relies only on the current model. It does not require any additional optimization component, which makes it much more efficient.
3 RotatE: Relational Rotation in Complex Vector Space
In this section, we introduce our proposed RotatE model. We first introduce three important relation patterns that are widely studied in the literature of link prediction on knowledge graphs. Afterwards, we introduce our proposed RotatE model, which defines relations as rotations in complex vector space. We also show that the RotatE model is able to model and infer all three relation patterns.
Model  Symmetry  Antisymmetry  Inversion  Composition
SE  ✗  ✗  ✗  ✗  
TransE  ✗  ✓  ✓  ✓  
TransX  ✓  ✓  ✗  ✗  
DistMult  ✓  ✗  ✗  ✗  
ComplEx  ✓  ✓  ✓  ✗  
RotatE  ✓  ✓  ✓  ✓ 
3.1 Modeling and Inferring Relation Patterns
The key to link prediction in knowledge graphs is to infer the connection patterns, e.g., relation patterns, from observed facts. According to the existing literature (Trouillon et al., 2016; Toutanova & Chen, 2015; Guu et al., 2015; Lin et al., 2015a), three types of relation patterns are very important and widespread in knowledge graphs: symmetry, inversion, and composition. We give their formal definitions here:
Definition 1.
A relation $r$ is symmetric (antisymmetric) if $\forall x, y$: $r(x, y) \Rightarrow r(y, x)$ ($r(x, y) \Rightarrow \neg r(y, x)$).
A clause with such a form is a symmetry (antisymmetry) pattern.
Definition 2.
Relation $r_1$ is inverse to relation $r_2$ if $\forall x, y$: $r_2(x, y) \Rightarrow r_1(y, x)$.
A clause with such a form is an inversion pattern.
Definition 3.
Relation $r_1$ is composed of relation $r_2$ and relation $r_3$ if $\forall x, y, z$: $r_2(x, y) \wedge r_3(y, z) \Rightarrow r_1(x, z)$.
A clause with such a form is a composition pattern.
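The three definitions can be checked mechanically against an observed triple set; a small sketch with made-up entities and relations:

```python
def is_symmetric(triples, r):
    """r(x, y) observed implies r(y, x) is observed too."""
    facts = {(h, t) for h, rel, t in triples if rel == r}
    return all((t, h) in facts for h, t in facts)

def is_inverse(triples, r1, r2):
    """r2(x, y) implies r1(y, x)."""
    f1 = {(h, t) for h, rel, t in triples if rel == r1}
    f2 = {(h, t) for h, rel, t in triples if rel == r2}
    return all((t, h) in f1 for h, t in f2)

def is_composition(triples, r1, r2, r3):
    """r2(x, y) and r3(y, z) imply r1(x, z)."""
    f1 = {(h, t) for h, rel, t in triples if rel == r1}
    f2 = {(h, t) for h, rel, t in triples if rel == r2}
    f3 = {(h, t) for h, rel, t in triples if rel == r3}
    return all((x, z) in f1
               for x, y in f2 for y2, z in f3 if y == y2)

# Toy graph mirroring the examples in the introduction.
toy = [("a", "married_to", "b"), ("b", "married_to", "a"),
       ("a", "hypernym", "c"), ("c", "hyponym", "a"),
       ("a", "mother", "m"), ("m", "spouse", "f"), ("a", "father", "f")]
assert is_symmetric(toy, "married_to")
assert is_inverse(toy, "hyponym", "hypernym")          # hypernym(x,y) => hyponym(y,x)
assert is_composition(toy, "father", "mother", "spouse")  # mother o spouse => father
```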
According to the definitions of the above three types of relation patterns, we provide an analysis of existing models on their abilities to model and infer these patterns. Specifically, we analyze TransE, TransX, DistMult, and ComplEx (see the discussion in Appendix A). We did not include HolE and ConvE in the analysis, since HolE is equivalent to ComplEx (Hayashi & Shimbo, 2017), and ConvE is a black box involving two-layer neural networks and convolution operations, which are hard to analyze. The results are summarized in Table 2. We can see that no existing approach is capable of modeling all three relation patterns.
3.2 Modeling Relations as Rotations in Complex Vector Space
In this part, we introduce our proposed model, which is able to model and infer all three types of relation patterns. Inspired by Euler's identity, we map the head and tail entities to complex embeddings, i.e., $\mathbf{h}, \mathbf{t} \in \mathbb{C}^k$; then we define the functional mapping induced by each relation $r$ as an element-wise rotation from the head entity $\mathbf{h}$ to the tail entity $\mathbf{t}$. In other words, given a triplet $(h, r, t)$, we expect that:
$\mathbf{t} = \mathbf{h} \circ \mathbf{r}$,  (2)
where $\circ$ is the Hadamard (element-wise) product. Specifically, for each element in the embeddings, we have $t_i = h_i r_i$. Here, we constrain the modulus of each element of $\mathbf{r} \in \mathbb{C}^k$, i.e., $|r_i|$, to be $1$. By doing this, $r_i$ is of the form $e^{i\theta_{r,i}}$, which corresponds to a counterclockwise rotation by $\theta_{r,i}$ radians about the origin of the complex plane, and only affects the phases of the entity embeddings in the complex vector space. We refer to the proposed model as RotatE due to its rotational nature. According to the above definition, for each triplet $(h, r, t)$, we define the distance function of RotatE as:
$d_r(\mathbf{h}, \mathbf{t}) = \|\mathbf{h} \circ \mathbf{r} - \mathbf{t}\|$  (3)
By defining each relation as a rotation in the complex vector space, RotatE can model and infer all three types of relation patterns introduced above. Formally, we have the following results (all proofs are relegated to the appendix):
Lemma 1.
RotatE can infer the symmetry/antisymmetry pattern. (See proof in Appendix B)
Lemma 2.
RotatE can infer the inversion pattern. (See proof in Appendix C)
Lemma 3.
RotatE can infer the composition pattern. (See proof in Appendix D)
These results are also summarized in Table 2. We can see that RotatE is the only model that can model and infer all three types of relation patterns.
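These lemmas can be sanity-checked numerically with hand-picked phase vectors; a sketch under toy values (an illustration, not a proof):

```python
import numpy as np

def rotate(h, theta):
    # Apply a relation with phase vector theta: t = h o e^{i theta}.
    return h * np.exp(1j * theta)

h = np.array([0.3 + 0.4j, -1.0 + 0.5j])

# Symmetry: phases 0 or pi (i.e., r_i = +-1) give r(r(h)) = h.
theta_sym = np.array([np.pi, 0.0])
assert np.allclose(rotate(rotate(h, theta_sym), theta_sym), h)

# Inversion: the conjugate relation (negated phases) undoes the rotation.
theta = np.array([0.7, -1.2])
assert np.allclose(rotate(rotate(h, theta), -theta), h)

# Composition: applying theta1 then theta2 equals applying theta1 + theta2.
theta1, theta2 = np.array([0.5, 2.0]), np.array([1.1, -0.4])
assert np.allclose(rotate(rotate(h, theta1), theta2),
                   rotate(h, theta1 + theta2))
```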
Connection to TransE.
From Table 2, we can see that TransE is able to infer and model all the relation patterns except symmetry. The reason is that in TransE, any symmetric relation must be represented by a $\mathbf{0}$ translation vector, which pushes entities linked by symmetric relations close to each other in the embedding space. RotatE solves this problem and is able to model and infer the symmetry pattern: an arbitrary vector $\mathbf{r}$ whose elements satisfy $r_i = \pm 1$ can represent a symmetric relation, so entities linked by symmetric relations remain distinguishable, and different symmetric relations can be represented by different embedding vectors. Figure 1 provides illustrations of TransE and RotatE with 1-dimensional embeddings and shows how RotatE models a symmetric relation.
3.3 Optimization
Negative sampling has been proven quite effective for learning both knowledge graph embeddings (Trouillon et al., 2016) and word embeddings (Mikolov et al., 2013). Here we use a loss function similar to the negative sampling loss (Mikolov et al., 2013) for effectively optimizing distance-based models:
$L = -\log \sigma(\gamma - d_r(\mathbf{h}, \mathbf{t})) - \sum_{i=1}^{n} \frac{1}{n} \log \sigma(d_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma)$  (4)
where $\gamma$ is a fixed margin, $\sigma$ is the sigmoid function, and $(h'_i, r, t'_i)$ is the $i$-th negative triplet.
We also propose a new approach for drawing negative samples. The loss above samples negative triplets uniformly. Such uniform negative sampling suffers from inefficiency: as training goes on, many samples are obviously false and thus provide no meaningful information. Therefore, we propose an approach called self-adversarial negative sampling, which samples negative triplets according to the current embedding model. Specifically, we sample negative triplets from the following distribution:
$p(h'_j, r, t'_j \mid \{(h_i, r_i, t_i)\}) = \dfrac{\exp \alpha f_r(\mathbf{h}'_j, \mathbf{t}'_j)}{\sum_i \exp \alpha f_r(\mathbf{h}'_i, \mathbf{t}'_i)}$  (5)
where $\alpha$ is the temperature of sampling. Moreover, since the sampling procedure may be costly, we treat the above probability as the weight of the negative sample instead of resampling. Therefore, the final negative sampling loss with self-adversarial training takes the following form:
$L = -\log \sigma(\gamma - d_r(\mathbf{h}, \mathbf{t})) - \sum_{i=1}^{n} p(h'_i, r, t'_i) \log \sigma(d_r(\mathbf{h}'_i, \mathbf{t}'_i) - \gamma)$  (6)
In the experiments, we will compare different approaches for negative sampling.
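The self-adversarial loss described above can be sketched as follows (our NumPy sketch, assuming $f_r = -d_r$; the margin and temperature values are arbitrary, and in the paper no gradient flows through the weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_adversarial_loss(d_pos, d_neg, gamma=6.0, alpha=1.0):
    """Negative sampling loss with self-adversarial weights.

    d_pos: distance of the true triplet (scalar).
    d_neg: distances of the n negative triplets (array).
    Negatives that currently score high (small distance) get larger weight.
    """
    # Weights follow softmax(alpha * f_r) with f_r = -d_r (stabilized softmax).
    logits = -alpha * d_neg
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    pos_term = -np.log(sigmoid(gamma - d_pos))
    neg_term = -np.sum(w * np.log(sigmoid(d_neg - gamma)))
    return pos_term + neg_term

loss = self_adversarial_loss(d_pos=1.0, d_neg=np.array([2.0, 8.0, 9.5]))
assert loss > 0
```

A batch with a hard negative (small distance) yields a larger loss than one with only easy negatives, which is exactly the re-weighting effect the technique aims for.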
4 Experiments
4.1 Experimental Setting
Dataset  #entity  #relation  #training  #validation  #test 
FB15k  14,951  1,345  483,142  50,000  59,071 
WN18  40,943  18  141,442  5,000  5,000 
FB15k237  14,541  237  272,115  17,535  20,466 
WN18RR  40,943  11  86,835  3,034  3,134 
We evaluate our proposed model on four widely used knowledge graphs. The statistics of these knowledge graphs are summarized in Table 3.

FB15k (Bordes et al., 2013) is a subset of Freebase (Bollacker et al., 2008), a large-scale knowledge graph containing general knowledge facts. Toutanova & Chen (2015) showed that almost 81% of the test triplets $(x, r, y)$ can be inferred via a directly linked triplet $(x, r', y)$ or $(y, r', x)$. Therefore, the key to link prediction on FB15k is to model and infer the symmetry/antisymmetry and inversion patterns.
WN18 (Bordes et al., 2013) is a subset of WordNet (Miller, 1995), a knowledge base featuring lexical relations between words. The main relation patterns are symmetry/antisymmetry and inversion.
FB15k-237 (Toutanova & Chen, 2015) is a subset of FB15k, where inverse relations are deleted. Therefore, the key to link prediction on FB15k-237 boils down to modeling and inferring the symmetry/antisymmetry and composition patterns.

WN18RR (Dettmers et al., 2017) is a subset of WN18. The inverse relations are deleted, and the main relation patterns are symmetry/antisymmetry and composition.
Hyperparameter Settings.
We use Adam (Kingma & Ba, 2014) as the optimizer and fine-tune the hyperparameters on the validation dataset. The grid search covers the embedding dimension $k$, the batch size $b$, the self-adversarial sampling temperature $\alpha$, and the fixed margin $\gamma$. (Following Trouillon et al. (2016), we treat a complex number the same as a real number with regard to the embedding dimension; if the same number of dimensions is used for both the real and imaginary parts of a complex embedding as for a real embedding, the complex embedding has twice as many parameters as the embedding in the real space.) Both the real and imaginary parts of the entity embeddings are uniformly initialized, and the phases of the relation embeddings are uniformly initialized between $0$ and $2\pi$. No regularization is used, since we find that the fixed margin $\gamma$ prevents our model from overfitting.
Evaluation Settings.
We evaluate the performance of link prediction in the filtered setting: we rank test triplets against all candidate triplets not appearing in the training, validation, or test set, where candidates are generated by corrupting subjects or objects: $(h', r, t)$ or $(h, r, t')$. Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at N (H@N) are standard evaluation measures for these datasets and are reported in our experiments.
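The filtered ranking protocol can be sketched as follows (the score table and tiny graph are hypothetical; MRR and Hits@N follow the standard definitions):

```python
import numpy as np

def filtered_rank(score_fn, h, r, t, entities, known_triples):
    """Rank the true tail t against corrupted tails (h, r, t'),
    filtering out corruptions that are themselves known true triples."""
    true_score = score_fn(h, r, t)
    rank = 1
    for e in entities:
        if e == t or (h, r, e) in known_triples:
            continue  # filtered setting: skip other true answers
        if score_fn(h, r, e) > true_score:
            rank += 1
    return rank

def mrr_and_hits(ranks, n=10):
    ranks = np.asarray(ranks, dtype=float)
    return (1.0 / ranks).mean(), (ranks <= n).mean()

# Toy example with a hypothetical score table.
scores = {("a", "r", "b"): 0.9, ("a", "r", "c"): 0.8, ("a", "r", "d"): 0.1}
score_fn = lambda h, r, t: scores[(h, r, t)]
known = {("a", "r", "c")}  # another true answer, filtered out
rank = filtered_rank(score_fn, "a", "r", "b", ["b", "c", "d"], known)
assert rank == 1
```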
Baseline.
Apart from RotatE, we propose a variant of RotatE as a baseline, where the modulus of each entity embedding is also constrained, i.e., $|h_i| = |t_i| = C$, and the distance function is thus $d_r(\mathbf{h}, \mathbf{t}) = 2C\,\|\sin\frac{\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t}{2}\|_1$ (see Equation 17 in Appendix F for a detailed derivation). In this way, we can investigate how RotatE works without modulus information, using only phase information. We refer to this baseline as pRotatE. It is straightforward to see that pRotatE can also model and infer all three relation patterns.
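A sketch of the phase-only distance used by pRotatE (assuming, as a hedge, the form $d_r(\mathbf{h}, \mathbf{t}) = 2C\|\sin\frac{\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t}{2}\|_1$ with a shared modulus $C$; phase vectors here are toy values):

```python
import numpy as np

def protate_distance(theta_h, theta_r, theta_t, C=1.0):
    # pRotatE keeps only phases; all entity moduli are fixed to C.
    # The sine compares phases on the circle, so the distance is
    # invariant to 2*pi shifts.
    return 2.0 * C * np.sum(np.abs(np.sin((theta_h + theta_r - theta_t) / 2.0)))

theta_h = np.array([0.2, 1.0])
theta_r = np.array([0.3, -0.5])
# A perfectly matching tail (phases add exactly) has zero distance.
assert np.isclose(protate_distance(theta_h, theta_r, theta_h + theta_r), 0.0)
# Shifting the tail phases by 2*pi changes nothing.
assert np.isclose(
    protate_distance(theta_h, theta_r, theta_h + theta_r + 2 * np.pi), 0.0)
```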
FB15k  WN18  
MR  MRR  H@1  H@3  H@10  MR  MRR  H@1  H@3  H@10  
TransE    .463  .297  .578  .749    .495  .113  .888  .943 
DistMult  42  .798      .893  655  .797      .946 
HolE    .524  .402  .613  .739    .938  .930  .945  .949 
ComplEx    .692  .599  .759  .840    .941  .936  .945  .947 
ConvE  51  .657  .558  .723  .831  374  .943  .935  .946  .956 
pRotatE  43  .799  .750  .829  .884  254  .947  .942  .950  .957 
RotatE  40  .797  .746  .830  .884  309  .949  .944  .952  .959 
FB15k237  WN18RR  
MR  MRR  H@1  H@3  H@10  MR  MRR  H@1  H@3  H@10  
TransE  357  .294      .465  3384  .226      .501 
DistMult  254  .241  .155  .263  .419  5110  .43  .39  .44  .49 
ComplEx  339  .247  .158  .275  .428  5261  .44  .41  .46  .51 
ConvE  244  .325  .237  .356  .501  4187  .43  .40  .44  .52 
pRotatE  178  .328  .230  .365  .524  2923  .462  .417  .479  .552 
RotatE  177  .338  .241  .375  .533  3340  .476  .428  .492  .571 
4.2 Main Results
We compare RotatE to several stateoftheart models, including TransE (Bordes et al., 2013), DistMult (Yang et al., 2014), ComplEx (Trouillon et al., 2016), HolE (Nickel et al., 2016), and ConvE (Dettmers et al., 2017), as well as our baseline model pRotatE, to empirically show the importance of modeling and inferring the relation patterns for the task of predicting missing links.
Table 4 summarizes our results on FB15k and WN18. We can see that RotatE outperforms all the state-of-the-art models, and that the performance of pRotatE and RotatE is similar on these two datasets. Table 5 summarizes our results on FB15k-237 and WN18RR, where the improvement is much more significant. The difference between RotatE and pRotatE is much larger on FB15k-237 and WN18RR, where there are many composition patterns. This indicates that the modulus information is very important for modeling and inferring the composition pattern.
Moreover, the performance of these models on different datasets is consistent with our analysis on the three relation patterns (Table 2):

On FB15k, the main relation patterns are symmetry/antisymmetry and inversion. We can see that ComplEx performs well while TransE does not, since ComplEx can infer both the symmetry/antisymmetry and inversion patterns while TransE cannot infer the symmetry pattern. Surprisingly, DistMult achieves good performance on this dataset even though it cannot model the antisymmetry and inversion patterns. The reason is that for most relations in FB15k, the types of head entities and tail entities are different. Although DistMult gives the same score to a true triplet $(h, r, t)$ and its opposite triplet $(t, r, h)$, the opposite triplet is usually impossible to be valid, since the entity type of $t$ does not match the head entity type of $r$. For example, DistMult assigns the same score to (Obama, nationality, USA) and (USA, nationality, Obama), but the latter can simply be predicted as false since USA cannot be the head entity of the relation nationality.

On WN18, the main relation patterns are also symmetry/antisymmetry and inversion. As expected, ComplEx still performs very well on this dataset. However, unlike on FB15k, the performance of DistMult decreases significantly on WN18. The reason is that DistMult cannot model the antisymmetry and inversion patterns, and almost all the entities in WN18 are words belonging to the same entity type, so WN18 does not offer the type-mismatch escape that FB15k does.

On FB15k-237, the main relation pattern is composition. We can see that TransE performs very well while ComplEx does not. The reason is that, as discussed before, TransE is able to infer the composition pattern while ComplEx cannot.

On WN18RR, one of the main relation patterns is the symmetry pattern, since almost every word has a symmetric relation in WN18RR. TransE does not perform well on this dataset since it is not able to model symmetric relations.
Countries (AUC-PR)  

DistMult  ComplEx  ConvE  RotatE  
S1  
S2  
S3 
4.3 Inferring Relation Patterns on Countries DataSet
We also evaluate our model on the Countries dataset (Bouchard et al., 2015; Nickel et al., 2016), which is carefully designed to explicitly test the capability of link prediction models for modeling and inferring the composition pattern. It contains 2 relations and 272 entities (244 countries, 5 regions, and 23 subregions). Unlike link prediction on general knowledge graphs, the queries in Countries are of the form locatedIn(c, ?), and the answer is one of the five regions. The Countries dataset has 3 tasks, each requiring inferring a composition pattern of increasing length and difficulty. For example, task S2 requires inferring a relatively simple composition pattern:
$\mathrm{neighborOf}(c_1, c_2) \wedge \mathrm{locatedIn}(c_2, r) \Rightarrow \mathrm{locatedIn}(c_1, r)$,
while task S3 requires inferring the most complex composition pattern:
$\mathrm{neighborOf}(c_1, c_2) \wedge \mathrm{locatedIn}(c_2, s) \wedge \mathrm{locatedIn}(s, r) \Rightarrow \mathrm{locatedIn}(c_1, r)$.
In Table 6, we report the results with respect to the AUC-PR metric, which is commonly used in the literature. We can see that RotatE outperforms all the previous models, and its performance is significantly better than that of other methods on S3, the hardest task.
4.4 Implicit Relation Pattern Inference
In this section, we verify whether the relation patterns are implicitly captured by the RotatE relation embeddings. We ignore the specific positions in the relation embedding and plot the histogram of the phase of each element in the relation embedding, i.e., $\{\theta_{r,i}\}_{i=1}^{k}$.
The symmetry pattern requires symmetric relations to satisfy $\mathbf{r} \circ \mathbf{r} = \mathbf{1}$, whose solution is $r_i = \pm 1$. We investigate the relation embeddings from a RotatE model trained on WN18. Figure 2(a) gives the histogram of the embedding phases of a symmetric relation. We find that the embedding phases are either $0$ ($r_i = 1$) or $\pi$ ($r_i = -1$), which indicates that the RotatE model does infer and model the symmetry pattern. Figure 2(b) is the histogram for a general relation, which shows that the embedding of a general relation does not exhibit such a pattern.
The inversion pattern requires the embeddings of a pair of inverse relations to be conjugates. We use the same RotatE model trained on WN18 for this analysis. Figure 2(c) illustrates the element-wise addition of the embedding phases of a relation and its inverse relation. All the summed embedding phases are $0$ or $2\pi$, which indicates that $\mathbf{r}_1 \circ \mathbf{r}_2 = \mathbf{1}$, i.e., $\mathbf{r}_2 = \bar{\mathbf{r}}_1$. This case shows that the inversion pattern is also inferred and modeled by the RotatE model.
The composition pattern requires the embedding phases of the composed relation to be the sum of those of the other two relations. Since there is no significant composition pattern in WN18, we study the inference of composition patterns on FB15k-237, where a RotatE model is trained. Figures 2(d)-2(g) illustrate such a case, where $\boldsymbol{\theta}_3 = \boldsymbol{\theta}_1 + \boldsymbol{\theta}_2$ or $\boldsymbol{\theta}_3 = \boldsymbol{\theta}_1 + \boldsymbol{\theta}_2 - 2\pi$.
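The phase-histogram analysis boils down to reading off element-wise phases of relation embeddings; a sketch with synthetic embeddings standing in for trained ones:

```python
import numpy as np

def phases(r):
    # Phase of each complex element, mapped into [0, 2*pi).
    return np.angle(r) % (2 * np.pi)

rng = np.random.default_rng(0)

# Synthetic "symmetric" relation embedding: elements are +-1 (phases 0 or pi).
r_sym = rng.choice([1.0, -1.0], size=1000).astype(complex)
p = phases(r_sym)
assert np.all(np.isclose(p, 0.0) | np.isclose(p, np.pi))

# Composition check: r3 = r1 o r2 has phases theta1 + theta2 (mod 2*pi).
theta1 = rng.uniform(0, 2 * np.pi, size=1000)
theta2 = rng.uniform(0, 2 * np.pi, size=1000)
r3 = np.exp(1j * theta1) * np.exp(1j * theta2)
assert np.allclose(r3, np.exp(1j * (theta1 + theta2)))
```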
More results of implicitly inferring basic patterns are presented in the appendix.
4.5 Comparing different negative sampling techniques
In this part, we compare different negative sampling techniques, including uniform sampling, our proposed self-adversarial technique, and the KBGAN model (Cai & Wang, 2017), which optimizes a generative adversarial network to generate negative samples. We re-implement a TransE model with the margin-based ranking criterion used in Cai & Wang (2017), and evaluate its performance on FB15k-237, WN18RR, and WN18 with self-adversarial negative sampling. Table 7 summarizes our results. We can see that self-adversarial sampling is the most effective negative sampling technique.
FB15k237  WN18RR  WN18  
MRR  H@10  MRR  H@10  MRR  H@10  
uniform  .242  .422  .186  .459  .433  .915 
KBGAN (Cai & Wang, 2017)  .278  .453  .210  .479  .705  .949 
self-adversarial  .298  .475  .223  .510  .736  .947 
FB15k  FB15k-237  Countries (AUC-ROC)  
MRR  H@10  MRR  H@10  S1  S2  S3  
TransE  .735  .871  .332  .531  
ComplEx  .780  .890  .319  .509  
RotatE  .797  .884  .338  .533 
4.6 Further Experiments on TransE and ComplEx
One may argue that the contribution of RotatE comes from the self-adversarial negative sampling technique. In this part, we conduct further experiments on TransE and ComplEx in the same setting as RotatE to make a fair comparison among the three models. Table 8 shows the results of TransE and ComplEx trained with the self-adversarial negative sampling technique on FB15k and FB15k-237, where a large number of relations are available. In addition, we evaluate the three models on the Countries dataset, which explicitly requires inferring the composition pattern. We also provide a detailed ablation study on TransE and RotatE in the appendix.
From Table 8, we can see results similar to those in Tables 4 and 5. The RotatE model achieves the best performance on both FB15k and FB15k-237, as it is able to model all three relation patterns. The TransE model does not work well on FB15k, which requires modeling the symmetry pattern; the ComplEx model does not work well on FB15k-237, which requires modeling the composition pattern. The results on the Countries dataset are slightly different: the TransE model marginally outperforms RotatE on the S3 task. The reason is that the Countries dataset has no symmetric relations between different regions, and all three tasks only require inferring the region of a given country; therefore, TransE does not suffer from its inability to model symmetric relations. For ComplEx, we can see that it does not perform well on Countries since it cannot infer the composition pattern.
Relation Category  1to1  1toN  Nto1  NtoN  1to1  1toN  Nto1  NtoN 
Tasks  Prediction Head (Hits@10)  Prediction Tail (Hits@10)  
TransE  .437  .657  .182  .472  .437  .197  .667  .500 
TransH (bern)  .668  .876  .287  .645  .655  .398  .833  .672 
KG2E_KL (bern)  .923  .946  .660  .696  .926  .679  .944  .734 
TransE  .894  .972  .567  .880  .879  .671  .964  .910 
ComplEx  .939  .969  .692  .893  .938  .823  .952  .910 
RotatE  .922  .967  .602  .893  .923  .713  .961  .922 
Tasks  Prediction Head (MRR)  Prediction Tail (MRR)  
TransE  .701  .912  .424  .737  .701  .561  .894  .761 
ComplEx  .832  .914  .543  .787  .826  .661  .869  .800 
RotatE  .878  .934  .465  .803  .872  .611  .909  .832 
4.7 Experimental results on FB15k by relation category
We also investigate the performance of RotatE on different relation categories: one-to-one, one-to-many, many-to-one, and many-to-many relations. (Following Wang et al. (2014), for each relation $r$ we compute the average number of tails per head ($tph$) and the average number of heads per tail ($hpt$). If $tph < 1.5$ and $hpt < 1.5$, $r$ is treated as one-to-one; if $tph \geq 1.5$ and $hpt \geq 1.5$, $r$ is treated as many-to-many; if $tph \geq 1.5$ and $hpt < 1.5$, $r$ is treated as one-to-many; otherwise, $r$ is treated as many-to-one.) The results of RotatE on different relation categories of FB15k are summarized in Table 9. We also compare with an additional approach, KG2E_KL (He et al., 2015), a probabilistic framework for knowledge graph embedding that models the uncertainties of entities and relations in knowledge graphs on top of a TransE-style model. The statistics of the different relation categories are summarized in Table 10 in the appendix.
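The categorization rule from the footnote can be written directly (the 1.5 threshold follows Wang et al. (2014); the triples here are toy data):

```python
from collections import defaultdict

def relation_category(triples, r, threshold=1.5):
    """Classify relation r by average tails-per-head (tph)
    and heads-per-tail (hpt)."""
    tails = defaultdict(set)
    heads = defaultdict(set)
    for h, rel, t in triples:
        if rel == r:
            tails[h].add(t)
            heads[t].add(h)
    tph = sum(len(s) for s in tails.values()) / len(tails)
    hpt = sum(len(s) for s in heads.values()) / len(heads)
    if tph < threshold and hpt < threshold:
        return "1-to-1"
    if tph >= threshold and hpt < threshold:
        return "1-to-N"
    if tph < threshold and hpt >= threshold:
        return "N-to-1"
    return "N-to-N"

# Three heads pointing at one tail: tph = 1.0, hpt = 3.0.
toy = [("u", "part_of", "x"), ("v", "part_of", "x"), ("w", "part_of", "x")]
assert relation_category(toy, "part_of") == "N-to-1"
```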
We can see that besides one-to-one relations, the RotatE model also performs quite well on non-injective relations, especially many-to-many relations. We also notice that the probabilistic framework KG2E_KL (bern) (He et al., 2015) is quite powerful: it consistently outperforms its corresponding knowledge graph embedding model, showing the importance of modeling uncertainties in knowledge graphs. We leave modeling the uncertainties in knowledge graphs with RotatE as future work.
5 Conclusion
We have proposed a new knowledge graph embedding method called RotatE, which represents entities as complex vectors and relations as rotations in complex vector space. In addition, we propose a novel selfadversarial negative sampling technique for efficiently and effectively training the RotatE model. Our experimental results show that the RotatE model outperforms all existing stateoftheart models on four largescale benchmarks. Moreover, RotatE also achieves stateoftheart results on a benchmark that is explicitly designed for composition pattern inference and modeling. A deep investigation into RotatE relation embeddings shows that the three relation patterns are implicitly represented in the relation embeddings. In the future, we plan to evaluate the RotatE model on more datasets and leverage a probabilistic framework to model the uncertainties of entities and relations.
References
 Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247–1250. ACM, 2008.
 Bordes et al. (2011) Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. Learning structured embeddings of knowledge bases. In AAAI, volume 6, pp. 6, 2011.
 Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pp. 2787–2795, 2013.
 Bouchard et al. (2015) Guillaume Bouchard, Sameer Singh, and Theo Trouillon. On approximate reasoning capabilities of low-rank vector spaces. AAAI Spring Symposium on Knowledge Representation and Reasoning (KRR): Integrating Symbolic and Neural Approaches, 2015.
 Cai & Wang (2017) Liwei Cai and William Yang Wang. Kbgan: Adversarial learning for knowledge graph embeddings. arXiv preprint arXiv:1711.04071, 2017.
 Das et al. (2016) Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. arXiv preprint arXiv:1607.01426, 2016.
 Dettmers et al. (2017) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. arXiv preprint arXiv:1707.01476, 2017.

 Ebisu & Ichise (2018) Takuma Ebisu and Ryutaro Ichise. Toruse: Knowledge graph embedding on a lie group. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pp. 1819–1826. AAAI Press, 2018.
 Guu et al. (2015) Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.
 Hao et al. (2017) Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. An endtoend model for question answering over knowledge base with crossattention combining global knowledge. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 221–231, 2017.
 Hayashi & Shimbo (2017) Katsuhiko Hayashi and Masashi Shimbo. On the equivalence of holographic and complex embeddings for link prediction. arXiv preprint arXiv:1702.05563, 2017.
 He et al. (2015) Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 623–632. ACM, 2015.
 Kadlec et al. (2017) Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. Knowledge base completion: Baselines strike back. arXiv preprint arXiv:1705.10744, 2017.
 Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Lao et al. (2011) Ni Lao, Tom Mitchell, and William W Cohen. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 529–539. Association for Computational Linguistics, 2011.
 Lin et al. (2015a) Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. Modeling relation paths for representation learning of knowledge bases. arXiv preprint arXiv:1506.00379, 2015a.
 Lin et al. (2015b) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In AAAI, volume 15, pp. 2181–2187, 2015b.
 Mahdisoltani et al. (2013) Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. YAGO3: A knowledge base from multilingual Wikipedias. In CIDR, 2013.
 Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013.
 Miller (1995) George A Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39–41, 1995.
 Neelakantan et al. (2015) Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. Compositional vector space models for knowledge base completion. arXiv preprint arXiv:1504.06662, 2015.
 Nguyen et al. (2017) Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. A novel embedding model for knowledge base completion based on convolutional neural network. arXiv preprint arXiv:1712.02121, 2017.
 Nguyen et al. (2016) Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. STransE: a novel embedding model of entities and relationships in knowledge bases. arXiv preprint arXiv:1606.08140, 2016.
 Nickel et al. (2016) Maximilian Nickel, Lorenzo Rosasco, Tomaso A Poggio, et al. Holographic embeddings of knowledge graphs. In AAAI, volume 2, pp. 3–2, 2016.
 Rocktäschel & Riedel (2017) Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, pp. 3788–3800, 2017.
 Suchanek et al. (2007) Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pp. 697–706. ACM, 2007.
 Toutanova & Chen (2015) Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57–66, 2015.

 Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pp. 2071–2080, 2016.
 Wang et al. (2014) Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In AAAI, volume 14, pp. 1112–1119, 2014.  Xiong et al. (2017) Chenyan Xiong, Russell Power, and Jamie Callan. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of the 26th International Conference on World Wide Web, pp. 1271–1279. International World Wide Web Conferences Steering Committee, 2017.
 Yang & Mitchell (2017) Bishan Yang and Tom Mitchell. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 1436–1446, 2017.
 Yang et al. (2014) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.
 Yang et al. (2017) Fan Yang, Zhilin Yang, and William W Cohen. Differentiable learning of logical rules for knowledge base completion. CoRR, abs/1702.08367, 2017.
 Zhang et al. (2016) Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 353–362. ACM, 2016.
Appendix
Appendix A Discussion on the Ability of Pattern Modeling and Inference
No existing models are capable of modeling all three relation patterns. For example, TransE cannot model the symmetry pattern, because symmetry would force $\mathbf{r} = \mathbf{0}$; TransX can infer and model the symmetry/antisymmetry pattern when $g_{r,1} = g_{r,2}$, e.g., in TransH (Wang et al., 2014), but cannot infer inversion and composition since $g_{r,1}$ and $g_{r,2}$ are invertible matrix multiplications; due to its symmetric nature, DistMult has difficulty modeling the antisymmetry and inversion patterns; ComplEx addresses this problem of DistMult and is able to infer both the symmetry and antisymmetry patterns with complex embeddings. Moreover, ComplEx can infer the inversion pattern, because $\mathrm{Re}(\langle \mathbf{r}, \mathbf{h}, \bar{\mathbf{t}} \rangle) = \mathrm{Re}(\langle \bar{\mathbf{r}}, \mathbf{t}, \bar{\mathbf{h}} \rangle)$, i.e., the complex conjugate $\bar{\mathbf{r}}$ represents exactly the inverse of relation $\mathbf{r}$. However, ComplEx cannot infer the composition pattern, since it does not model a bijective mapping from $\mathbf{h}$ to $\mathbf{t}$ via relation $\mathbf{r}$. These results are summarized in Table 2.
Appendix B Proof of Lemma 1
Proof.
If $r(x, y)$ and $r(y, x)$ hold, we have
$$\mathbf{y} = \mathbf{x} \circ \mathbf{r} \ \wedge\ \mathbf{x} = \mathbf{y} \circ \mathbf{r} \ \Rightarrow\ \mathbf{r} \circ \mathbf{r} = \mathbf{1}$$
Otherwise, if $r(x, y)$ and $\neg r(y, x)$ hold, we have
$$\mathbf{y} = \mathbf{x} \circ \mathbf{r} \ \wedge\ \mathbf{x} \neq \mathbf{y} \circ \mathbf{r} \ \Rightarrow\ \mathbf{r} \circ \mathbf{r} \neq \mathbf{1}$$
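A quick numerical sketch of this argument (illustrative code, not from the paper; the embedding values are made up): a relation whose phases are all $0$ or $\pi$ satisfies $\mathbf{r} \circ \mathbf{r} = \mathbf{1}$, so applying the rotation twice returns the original entity, which is exactly the symmetric behavior.

```python
import cmath

# RotatE composes entity and relation embeddings by element-wise
# (Hadamard) complex multiplication.
def rotate(x, r):
    return [xi * ri for xi, ri in zip(x, r)]

# Phases of 0 or pi give r_i in {+1, -1}, hence r o r = 1.
r = [cmath.exp(1j * cmath.pi), cmath.exp(0j), cmath.exp(1j * cmath.pi)]
x = [0.3 + 0.4j, -1.2 + 0.5j, 0.7 - 0.2j]

y = rotate(x, r)        # r(x, y) holds: y = x o r
x_back = rotate(y, r)   # applying r again: if r o r = 1, this recovers x

# the relation acts symmetrically: r(y, x) holds as well
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_back))
```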
Appendix C Proof of Lemma 2
Proof.
If $r_1(x, y)$ and $r_2(y, x)$ hold, we have
$$\mathbf{y} = \mathbf{x} \circ \mathbf{r}_1 \ \wedge\ \mathbf{x} = \mathbf{y} \circ \mathbf{r}_2 \ \Rightarrow\ \mathbf{r}_1 \circ \mathbf{r}_2 = \mathbf{1} \ \Rightarrow\ \mathbf{r}_2 = \bar{\mathbf{r}}_1$$
where the last step uses $|r_{1,i}| = 1$, so the element-wise inverse of $\mathbf{r}_1$ is its complex conjugate.
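The conclusion of Lemma 2, that the inverse relation is represented by the complex conjugate, can be checked numerically (a sketch with randomly generated unit-modulus embeddings, not paper code):

```python
import cmath
import random

random.seed(0)
k = 4
# unit-modulus relation embedding: r1_i = e^{i theta_i}
theta = [random.uniform(-cmath.pi, cmath.pi) for _ in range(k)]
r1 = [cmath.exp(1j * t) for t in theta]
r2 = [z.conjugate() for z in r1]  # candidate inverse: complex conjugate

x = [random.gauss(0, 1) + 1j * random.gauss(0, 1) for _ in range(k)]
y = [xi * ri for xi, ri in zip(x, r1)]        # r1(x, y): y = x o r1
x_back = [yi * ri for yi, ri in zip(y, r2)]   # r2(y, x): x = y o r2

# conjugating r1 exactly undoes the rotation
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_back))
```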
Appendix D Proof of Lemma 3
Proof.
If $r_1(x, z)$, $r_2(x, y)$ and $r_3(y, z)$ hold, we have
$$\mathbf{z} = \mathbf{x} \circ \mathbf{r}_1 \ \wedge\ \mathbf{y} = \mathbf{x} \circ \mathbf{r}_2 \ \wedge\ \mathbf{z} = \mathbf{y} \circ \mathbf{r}_3 \ \Rightarrow\ \mathbf{r}_1 = \mathbf{r}_2 \circ \mathbf{r}_3$$
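The composition conclusion of Lemma 3, that $\mathbf{r}_1 = \mathbf{r}_2 \circ \mathbf{r}_3$ (rotations compose by multiplying phases), can likewise be sketched numerically (illustrative values only):

```python
import cmath
import random

random.seed(1)
k = 4
phase = lambda: cmath.exp(1j * random.uniform(-cmath.pi, cmath.pi))
r2 = [phase() for _ in range(k)]
r3 = [phase() for _ in range(k)]
r1 = [a * b for a, b in zip(r2, r3)]  # composed relation: r1 = r2 o r3

x = [random.gauss(0, 1) + 1j * random.gauss(0, 1) for _ in range(k)]
y = [xi * r for xi, r in zip(x, r2)]         # r2(x, y)
z_via_y = [yi * r for yi, r in zip(y, r3)]   # then r3(y, z)
z_direct = [xi * r for xi, r in zip(x, r1)]  # r1(x, z) in one step

# rotating by r2 then r3 equals rotating by r1 = r2 o r3
assert all(abs(a - b) < 1e-12 for a, b in zip(z_via_y, z_direct))
```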
Appendix E Properties of RotatE
A useful property of RotatE is that the inverse of a relation can be easily acquired by taking the complex conjugate of its embedding. In this way, the RotatE model treats head and tail entities in a uniform way, which is potentially useful for efficient 1-N scoring (Dettmers et al., 2017):
$$\| \mathbf{h} \circ \mathbf{r} - \mathbf{t} \| = \| \mathbf{t} \circ \bar{\mathbf{r}} - \mathbf{h} \| \quad (7)$$
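The identity in Equation (7), that the distance is unchanged when the roles of head and tail are swapped and the relation is conjugated, can be verified numerically; since $|\bar{r}_i| = 1$, we have $|t_i \bar{r}_i - h_i| = |\bar{r}_i| \, |t_i - h_i r_i| = |h_i r_i - t_i|$. A sketch with made-up embeddings:

```python
import cmath
import math
import random

random.seed(2)
k = 6
r = [cmath.exp(1j * random.uniform(-math.pi, math.pi)) for _ in range(k)]
h = [random.gauss(0, 1) + 1j * random.gauss(0, 1) for _ in range(k)]
t = [random.gauss(0, 1) + 1j * random.gauss(0, 1) for _ in range(k)]

def l1(v):
    # L1 norm over element moduli, as in the RotatE distance
    return sum(abs(z) for z in v)

d_forward = l1([hi * ri - ti for hi, ri, ti in zip(h, r, t)])
d_backward = l1([ti * ri.conjugate() - hi for hi, ri, ti in zip(h, r, t)])

# scoring (h, r, t) forward equals scoring (t, conj(r), h) backward
assert abs(d_forward - d_backward) < 1e-12
```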
Moreover, considering the embeddings in the polar form, i.e., $h_i = m_{h,i} e^{i\theta_{h,i}}$, $r_i = e^{i\theta_{r,i}}$ and $t_i = m_{t,i} e^{i\theta_{t,i}}$, we can rewrite the RotatE distance function as:
$$d_r(\mathbf{h}, \mathbf{t}) = \Big\| \sqrt{\, \mathbf{m}_h^2 + \mathbf{m}_t^2 - 2\, \mathbf{m}_h \circ \mathbf{m}_t \circ \cos(\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t) \,} \Big\| \quad (8)$$
where the square, square root, and cosine are applied element-wise.
This equation provides two interesting views of the model:
(1) When we constrain the modulus $m_{h,i} = m_{t,i} = C$, the distance function reduces to $2C \, \big\| \sin \frac{\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t}{2} \big\|$. We can see that this is very similar to the distance function of TransE: $\| \mathbf{h} + \mathbf{r} - \mathbf{t} \|$. Based on this intuition, we can show that:
Theorem 4.
RotatE can degenerate into TransE. (See proof in Appendix F.)
which indicates that RotatE is able to simulate TransE.
(2) The modulus provides a lower bound of the distance function: $d_r(\mathbf{h}, \mathbf{t}) \geq \| \mathbf{m}_h - \mathbf{m}_t \|$, since $|h_i r_i - t_i| \geq \big| |h_i r_i| - |t_i| \big| = |m_{h,i} - m_{t,i}|$.
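Both views follow from the polar-form expansion of a single coordinate of $\mathbf{h} \circ \mathbf{r} - \mathbf{t}$. A numerical sketch (the moduli and phases below are arbitrary made-up values) checking the element-wise modulus formula, the equal-modulus reduction to $2C |\sin(\Delta/2)|$, and the modulus lower bound:

```python
import cmath
import math

mh, mt = 1.7, 0.9          # moduli m_{h,i}, m_{t,i}
th, tr, tt = 0.5, 1.1, -0.8  # phases theta_{h,i}, theta_{r,i}, theta_{t,i}

# one coordinate of h o r - t, with |r_i| = 1
elem = mh * cmath.exp(1j * (th + tr)) - mt * cmath.exp(1j * tt)

# polar-form modulus: sqrt(m_h^2 + m_t^2 - 2 m_h m_t cos(th + tr - tt))
polar = math.sqrt(mh**2 + mt**2 - 2 * mh * mt * math.cos(th + tr - tt))
assert abs(abs(elem) - polar) < 1e-12

# equal-modulus case reduces to 2C |sin(delta / 2)|
C = 1.3
eq = abs(C * cmath.exp(1j * (th + tr)) - C * cmath.exp(1j * tt))
assert abs(eq - 2 * C * abs(math.sin((th + tr - tt) / 2))) < 1e-12

# the moduli bound the coordinate from below: |h_i r_i - t_i| >= |m_h - m_t|
assert abs(elem) >= abs(mh - mt) - 1e-12
```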
Appendix F Proof of Theorem 4
Proof.
By further restricting the modulus $m_{h,i} = m_{t,i} = C$, we can rewrite $d_r(\mathbf{h}, \mathbf{t})$ by
$$d_r(\mathbf{h}, \mathbf{t}) = \big\| C e^{i(\boldsymbol{\theta}_h + \boldsymbol{\theta}_r)} - C e^{i \boldsymbol{\theta}_t} \big\| \quad (9)$$
$$= C \, \big\| e^{i(\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t)} - \mathbf{1} \big\| \quad (10)$$
$$= 2C \, \Big\| \sin \frac{\boldsymbol{\theta}_h + \boldsymbol{\theta}_r - \boldsymbol{\theta}_t}{2} \Big\| \quad (11)$$
Therefore, for any fixed $x$, we have
$$\lim_{C \to \infty} 2C \sin \frac{x}{2C} \quad (13)$$
$$= \lim_{C \to \infty} x \cdot \frac{\sin(x/2C)}{x/2C} \quad (14)$$
$$= x \cdot \lim_{u \to 0} \frac{\sin u}{u} \quad (15)$$
$$= x \cdot 1 \quad (16)$$
$$= x \quad (17)$$
If the embedding of $h$ in TransE is $\mathbf{h}$ (and similarly $\mathbf{r}$ and $\mathbf{t}$ for $r$ and $t$), let $\boldsymbol{\theta}_h = \mathbf{h}/C$, $\boldsymbol{\theta}_r = \mathbf{r}/C$ and $\boldsymbol{\theta}_t = \mathbf{t}/C$; we then have
$$\lim_{C \to \infty} d_r(\mathbf{h}, \mathbf{t}) = \lim_{C \to \infty} 2C \, \Big\| \sin \frac{\mathbf{h} + \mathbf{r} - \mathbf{t}}{2C} \Big\| = \| \mathbf{h} + \mathbf{r} - \mathbf{t} \|,$$
which is exactly the distance function of TransE. ∎
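The limit at the heart of this proof, $2C \sin(x/2C) \to x$ as $C \to \infty$, is easy to check numerically (a sketch; the residual value $x$ below is arbitrary):

```python
import math

x = 0.75  # a fixed TransE-style residual h_i + r_i - t_i
for C in (1.0, 10.0, 1000.0):
    # constrained-modulus RotatE distance for one coordinate
    approx = 2 * C * math.sin(x / (2 * C))
    print(C, approx)

# by C = 1000 the constrained distance is within 1e-6 of x
assert abs(2 * 1000.0 * math.sin(x / 2000.0) - x) < 1e-6
```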
Relation Category    1-to-1   1-to-N   N-to-1   N-to-N

#relation               326      308      388      323
#triplet (train)       6827    42509    70727   363079
#triplet (test)         832     5259     8637    44343
             MR     MRR    H@1    H@3    H@10
DistMult    5926    .34    .24    .38    .54
ComplEx     6351    .36    .26    .40    .55
ConvE       1671    .44    .35    .49    .62
RotatE      1767   .495   .402   .550   .670

Table 11: Link prediction results on YAGO3-10.
Appendix G Link Prediction on YAGO3-10
YAGO3-10 is a subset of YAGO3 (Mahdisoltani et al., 2013), which consists of entities that have a minimum of 10 relations each. It has 123,182 entities and 37 relations. Most of the triples deal with descriptive attributes of people, such as citizenship, gender, profession and marital status.
Table 11 shows that the RotatE model also outperforms state-of-the-art models on YAGO3-10.
Benchmark      embedding dimension   batch size   negative samples    α      γ

FB15k                1000               2048            128          1.0    24
WN18                  500                512           1024          0.5    12
FB15k-237            1000               1024            256          1.0     9
WN18RR                500                512           1024          0.5     6
Countries S1          500                512             64          1.0   0.1
Countries S2          500                512             64          1.0   0.1
Countries S3          500                512             64          1.0   0.1
YAGO3-10              500               1024            400          1.0    24

Table 12: The best hyperparameter settings of RotatE on the benchmarks, where α is the temperature of self-adversarial sampling and γ is the fixed margin.
Appendix H Hyperparameters
We list the best hyperparameter settings of RotatE w.r.t. the validation sets on several benchmarks in Table 12.
Appendix I Ablation Study
Table 13 shows our ablation study of self-adversarial sampling and the negative sampling loss on FB15k-237. We also re-implement a 1000-dimensional TransE and run the same ablation on it. From the table, we can see that self-adversarial sampling boosts the performance of both models, while the negative sampling loss is only effective on RotatE; in addition, our re-implementation of TransE also outperforms all the state-of-the-art models on FB15k-237.
                                  RotatE                          TransE
                       MR   MRR   H@1   H@3   H@10     MR   MRR   H@1   H@3   H@10
negative sampling loss
  w/ adv              177  .338  .241  .375  .533     170  .332  .233  .372  .531
  w/o adv             185  .297  .205  .328  .480     175  .297  .202  .331  .486
margin-based ranking criterion
  w/ adv              225  .322  .225  .358  .516     167  .333  .237  .370  .522
  w/o adv             199  .293  .202  .324  .476     164  .306  .212  .340  .493

Table 13: Ablation results on FB15k-237.
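The self-adversarial sampling compared in this ablation weights each negative sample by a softmax over the model's current scores, with temperature α, so that harder negatives contribute more to the loss. A minimal sketch of that weighting (the scores and α below are made-up values, not from the paper's runs):

```python
import math

def self_adversarial_weights(neg_scores, alpha):
    """Softmax over negative-sample scores with temperature alpha.
    Higher-scoring (harder) negatives receive larger weight in the loss."""
    exps = [math.exp(alpha * s) for s in neg_scores]
    z = sum(exps)
    return [e / z for e in exps]

# three negative triples with hypothetical scores f(h', r, t')
scores = [2.0, 0.5, -1.0]
w = self_adversarial_weights(scores, alpha=1.0)

assert abs(sum(w) - 1.0) < 1e-12
assert w[0] > w[1] > w[2]  # harder negatives are up-weighted

# alpha = 0 recovers uniform negative sampling
w0 = self_adversarial_weights(scores, alpha=0.0)
assert all(abs(wi - 1 / 3) < 1e-12 for wi in w0)
```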
Table 14: The average and variance of the MRR results of RotatE on FB15k, WN18, FB15k-237 and WN18RR.
Appendix J Variance of the Results
In Table 14, we provide the average and variance of the MRR results on FB15k, WN18, FB15k-237 and WN18RR. Both the average and the variance are computed over three runs of RotatE with different random seeds. We can see that the performance of RotatE is quite stable under different random initializations.