1 Introduction
Knowledge graphs such as WordNet [Miller1995] and Freebase [Bollacker et al.2008] play an important role in AI research and applications. Recent research, such as work on query expansion, prefers to involve knowledge graphs [Bao et al.2014], while industrial applications such as question-answering robots are also powered by knowledge graphs [Fader, Zettlemoyer, and Etzioni2014]. However, knowledge graphs are symbolic and logical, so numerical machine learning methods can hardly be applied to them. This disadvantage is one of the most important challenges for the usage of knowledge graphs. To provide a general paradigm that supports computing on knowledge graphs, various knowledge graph embedding methods have been proposed, such as TransE [Bordes et al.2013], TransH [Wang et al.2014] and TransR [Lin et al.2015].

Embedding is a novel approach to address the representation and reasoning problem for knowledge graphs. It transforms entities and relations into continuous vector spaces, in which knowledge graph completion and knowledge classification can be done. Most commonly, a knowledge graph is composed of triples (h, r, t), each presenting a head entity h, a relation r and a tail entity t. Among all the proposed embedding approaches, geometry-based methods are an important branch, yielding state-of-the-art predictive performance. More specifically, geometry-based embedding methods represent an entity or a relation as a k-dimensional vector, then define a score function f_r(h, t) to measure the plausibility of a triple (h, r, t). Such approaches almost all follow the same geometric principle h_r + r ≈ t_r and apply the same loss metric, but differ in the relation-specific space in which a head entity h connects to a tail entity t.
However, the loss metric in translation-based models is over-simplified. This flaw makes the current embedding methods incompetent to model the various and complex entities/relations in knowledge bases.
Firstly, due to the inflexibility of the loss metric, current translation-based methods apply spherical equipotential hypersurfaces with different plausibilities: the nearer to the centre, the more plausible the triple is. As illustrated in Fig.1, spherical equipotential hypersurfaces are applied in (a), so it is difficult to identify the matched tail entities from the unmatched ones. As is common knowledge about knowledge graphs, complex relations, such as one-to-many, many-to-one and many-to-many relations, always lead to complex embedding topologies. Though this complex embedding situation is an urgent challenge, spherical equipotential hypersurfaces are not flexible enough to characterise the topologies, making current translation-based methods incompetent for this task.
Secondly, because of the over-simplified loss metric, current translation-based methods treat each dimension identically. This observation leads to the flaw illustrated in Fig.2. As each dimension is treated identically in (a) (the dashed lines indicate the x-axis and y-axis components of the loss), the incorrect entities are matched, because they are closer than the correct ones when measured by the isotropic Euclidean distance. Therefore, we have good reason to conjecture that a relation could be affected by only several specific dimensions, while the other, unrelated dimensions would be noisy. Treating all the dimensions identically introduces much noise and degrades the performance.
Motivated by these two issues, in this paper we propose TransA, an embedding method that utilizes an adaptive and flexible metric. First, TransA applies elliptical surfaces instead of spherical surfaces. By this means, the complex embedding topologies induced by complex relations can be represented better. Then, as analysed in "Adaptive Metric Approach", TransA can be treated as weighting transformed feature dimensions. Thus, the noise from unrelated dimensions is suppressed. We demonstrate these ideas in Fig.1 (b) and Fig.2 (b).
To summarize, TransA takes the adaptive metric idea for better knowledge representation. Our method effectively models various and complex entities/relations in knowledge bases, and outperforms all the state-of-the-art baselines with significant improvements in experiments.
The rest of the paper is organized as follows: we survey the related research and then introduce our approach, along with a theoretical analysis. Next, the experiments are presented, and in the final part we summarize the paper.
2 Related Work
We classify prior studies into two lines: one is the translation-based embedding methods and the other includes many other embedding methods.
2.1 Translation-Based Embedding Methods
All the translation-based methods share the common principle h_r + r ≈ t_r, but differ in defining the relation-related space in which a head entity h_r connects to a tail entity t_r. This principle indicates that t_r should be the nearest neighbour of h_r + r. Hence, the translation-based methods all have the same form of score function, which applies the Euclidean distance to measure the loss:

f_r(h, t) = ||h_r + r - t_r||_2^2

where h_r, t_r are the entity embedding vectors projected into the relation-specific space. Note that this branch of methods maintains state-of-the-art performance.

TransE [Bordes et al.2013] lays the entities in the original space: h_r = h, t_r = t.

TransH [Wang et al.2014] projects the entities onto a relation-specific hyperplane to address the issue of complex relation embedding: h_r = h - w_r^T h w_r, t_r = t - w_r^T t w_r.
TransR [Lin et al.2015] transforms the entities by the same relation-specific matrix, also to address the issue of complex relation embedding: h_r = M_r h, t_r = M_r t.
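To make the distinction between the relation spaces concrete, the three projections and the shared translation score can be sketched as follows. This is our own NumPy illustration, not code from the cited papers; the function names are hypothetical.

```python
import numpy as np

def transe_space(h, t):
    # TransE: entities stay in the original space.
    return h, t

def transh_space(h, t, w_r):
    # TransH: project entities onto the relation hyperplane with normal w_r.
    w_r = w_r / np.linalg.norm(w_r)
    return h - np.dot(w_r, h) * w_r, t - np.dot(w_r, t) * w_r

def transr_space(h, t, M_r):
    # TransR: map entities into the relation space by the matrix M_r.
    return M_r @ h, M_r @ t

def translation_score(h_r, r, t_r):
    # Shared principle: h_r + r should land near t_r (squared L2 loss).
    return float(np.sum((h_r + r - t_r) ** 2))
```

Whatever the projection, the plausibility of a triple is always measured by the same Euclidean loss on the projected vectors.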
Projecting entities into different hyperplanes or transforming entities by different matrices allows entities to play different roles under different embedding situations. However, as the "Introduction" argues, these methods are incompetent to model complex knowledge graphs well and perform particularly unsatisfactorily in various and complex entities/relations situations, because of the over-simplified metric.
TransM [Fan et al.2014] pre-calculates a distinct weight for each training triple to perform better.
2.2 Other Embedding Methods
There are also many other models for knowledge graph embedding.
Unstructured Model (UM). The UM [Bordes et al.2012] is a simplified version of TransE that sets all the relation vectors to zero (r = 0). Obviously, the relation is not considered in this model.
Structured Embedding (SE). The SE model [Bordes et al.2011] applies two relation-related matrices, one for the head and the other for the tail. The score function is defined as f_r(h, t) = ||M_{r,1} h - M_{r,2} t||. According to [Socher et al.2013], this model cannot capture the relationship between entities and relations.
Single Layer Model (SLM). SLM applies a neural network to knowledge graph embedding. The score function is defined as

f_r(h, t) = u_r^T g(M_{r,1} h + M_{r,2} t)

where g is the tanh function. Note that SLM is a special case of NTN when the zero tensors are applied.
[Collobert and Weston2008] had proposed a similar method, but applied the approach to language modelling. Semantic Matching Energy (SME). The SME model [Bordes et al.2012] [Bordes et al.2014] attempts to capture the correlations between entities and relations by matrix products and Hadamard products. The score functions are defined as follows:

f_r(h, t) = (M_1 h + M_2 r + b_1)^T (M_3 t + M_4 r + b_2)
f_r(h, t) = (M_1 h ⊙ M_2 r + b_1)^T (M_3 t ⊙ M_4 r + b_2)

where M_1, M_2, M_3 and M_4 are weight matrices, ⊙ is the Hadamard product, and b_1, b_2 are bias vectors. In some recent work [Bordes et al.2014], the second form of the score function is redefined with 3-way tensors instead of matrices. Latent Factor Model (LFM). The LFM [Jenatton et al.2012] uses the second-order correlations between entities by a quadratic form, defined as f_r(h, t) = h^T W_r t.
Neural Tensor Network (NTN). The NTN model [Socher et al.2013] defines an expressive score function for graph embedding that joins the SLM and LFM:

f_r(h, t) = u_r^T g(h^T W_{··r} t + M_{r,1} h + M_{r,2} t + b_r)

where u_r is a relation-specific linear layer, g is the tanh function, and W_{··r} is a 3-way tensor. However, the high complexity of NTN may degrade its applicability to large-scale knowledge bases.
RESCAL. RESCAL is a collective matrix factorization model, a common embedding method [Nickel, Tresp, and Kriegel2011] [Nickel, Tresp, and Kriegel2012].
Semantically Smooth Embedding (SSE). SSE [Guo et al.2015] aims at leveraging the geometric structure of the embedding space to make entity representations semantically smooth.
[Wang et al.2014] jointly embeds knowledge and texts. [Wang, Wang, and Guo2015] incorporates rules into the embedding. [Lin, Liu, and Sun2015] takes the paths of the knowledge graph into account in the embedding.
3 Adaptive Metric Approach
In this section, we introduce the adaptive metric approach, TransA, and present a theoretical analysis from two perspectives.
3.1 Adaptive Metric Score Function
As mentioned in "Introduction", all the translation-based methods obey the same principle h_r + r ≈ t_r, but they differ in the relation-specific spaces into which the entities are projected. Thus, such methods share a similar score function:

(1)  f_r(h, t) = ||h_r + r - t_r||_2^2

This score function is actually a Euclidean metric. The disadvantages of this over-simplified metric have been discussed in "Introduction". As a consequence, the proposed TransA replaces the inflexible Euclidean distance with an adaptive Mahalanobis distance of the absolute loss, because the Mahalanobis distance is more flexible and more adaptive [Wang and Sun2014]. Thus, our score function is as follows:

(2)  f_r(h, t) = (|h + r - t|)^T W_r (|h + r - t|)
where |h + r - t| ≐ (|h_1 + r_1 - t_1|, |h_2 + r_2 - t_2|, ..., |h_n + r_n - t_n|)^T and W_r is a relation-specific symmetric non-negative weight matrix that corresponds to the adaptive metric. Unlike the traditional score functions, we take the absolute value, since we want to measure the absolute loss between h + r and t. Furthermore, we list two main reasons for applying the absolute operator.
On one hand, the absolute operator makes the score function a well-defined norm under the sole condition that all the entries of W_r are non-negative. A well-defined norm is necessary for most metric learning scenarios [Kulis2012], and the non-negativity condition can be achieved more easily than PSD, so it generalises the common metric learning algebraic form for better rendering the knowledge topologies. Expanding our score function gives the induced norm N_r(e) = sqrt(|e|^T W_r |e|), where e ≐ h + r - t. Obviously, N_r is non-negative, definite and absolutely homogeneous. Besides, with the easily verified inequality |e_1 + e_2|^T W_r |e_1 + e_2| ≤ (|e_1| + |e_2|)^T W_r (|e_1| + |e_2|), the triangle inequality holds. In total, the absolute operators make the metric a norm under an easy-to-achieve condition, helping to generalise the representation ability.
On the other hand, in geometry, negative and positive values indicate downward and upward directions, while our approach does not consider this factor. Consider the instance shown in Fig.2. For one of the entities, the x-axis component of its loss vector is negative; enlarging this component would thus make the overall loss smaller, while this case is supposed to make the overall loss larger. As a result, the absolute operator is critical to our approach. As a numerical example without the absolute operator: when the embedding dimension is two, the weight matrix is [0 1; 1 0] and the loss vector is (e_1, e_2)^T, the overall loss would be 2 e_1 e_2. If e_1 ≤ 0 and e_2 ≥ 0, a larger absolute value of e_1 would reduce the overall loss, which is not desired.
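The score function of Eq. (2) can be illustrated in a few lines. This is a toy NumPy sketch under our own naming, not the authors' released code:

```python
import numpy as np

def transa_score(h, r, t, W_r):
    """TransA score |h + r - t|^T W_r |h + r - t|, with an
    element-wise absolute value on the loss vector."""
    e = np.abs(h + r - t)  # absolute loss vector
    return float(e @ W_r @ e)
```

With the two-dimensional example from the text (weight matrix [0 1; 1 0]), the score becomes 2 |e_1| |e_2|, so no sign pattern of the loss vector can drive the overall loss negative.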
3.2 Perspective from Equipotential Surfaces
TransA shares almost the same geometric explanation with the other translation-based methods, but they differ in the loss metric. For the other translation-based methods, the equipotential hypersurfaces are spheres, as the Euclidean distance defines:

(3)  ||h_r + r - t_r||_2^2 = C

where C means the threshold or the equipotential value. However, for TransA, the equipotential hypersurfaces are elliptical surfaces, as the Mahalanobis distance of the absolute loss states [Kulis2012]:

(4)  (|h + r - t|)^T W_r (|h + r - t|) = C
Note that the elliptical hypersurfaces are distorted a bit by the absolute operator, but this makes no difference to the analysis of TransA's performance. As we know, different equipotential hypersurfaces correspond to different thresholds, and the thresholds decide whether the triples are correct or not. Due to the practical situation that our knowledge base is large-scale and very complex, the topologies of the embedding cannot be distributed as uniformly as spheres, as justified by Fig.1. Thus, replacing the spherical equipotential hypersurfaces with elliptical ones enhances the embedding.
As Fig.1 illustrates, TransA would perform better for one-to-many relations. The metric of TransA is symmetric, so it is reasonable that TransA would also perform better for many-to-one relations. Moreover, a many-to-many relation can be treated as the combination of a many-to-one and a one-to-many relation. In general, TransA would perform better for all the complex relations.
3.3 Perspective from Feature Weighting
TransA can be regarded as weighting transformed features. For the weight matrix W_r, which is symmetric, we obtain the equivalent unique form by the LDL decomposition [Golub and Van Loan2012] as follows:

(5)  W_r = L_r^T D_r L_r

(6)  f_r(h, t) = (L_r |h + r - t|)^T D_r (L_r |h + r - t|)

In the above equations, L_r can be viewed as a transformation matrix, which transforms the loss vector |h + r - t| to another space. Furthermore, D_r = diag(w_1, w_2, ..., w_n) is a diagonal matrix, and the different embedding dimensions are weighted by the w_i.
As analysed in "Introduction", a relation may be affected by only several specific dimensions, while the other dimensions are noisy. Treating different dimensions identically, as current translation-based methods do, can hardly suppress the noise, and consequently yields unsatisfactory performance. We believe that different dimensions play different roles, particularly when entities are distributed divergently. Unlike existing methods, TransA can automatically learn the weights from the data. This may explain why TransA outperforms TransR even though both TransA and TransR transform the entity space with matrices.
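The feature-weighting view can be checked numerically. The sketch below is ours; it uses an eigendecomposition as one concrete symmetric factorization in place of the LDL form, transforms the absolute loss vector, and weights each transformed dimension:

```python
import numpy as np

def weighted_transform_score(e_abs, W_r):
    """Score |h+r-t|^T W_r |h+r-t| computed as weighted transformed features.
    The symmetric W_r is factored as Q diag(d) Q^T, so L = Q^T transforms the
    absolute loss vector e_abs, and transformed dimension i is weighted by d_i."""
    d, Q = np.linalg.eigh(W_r)  # symmetric factorization of W_r
    z = Q.T @ e_abs             # transformed loss vector
    return float(np.sum(d * z ** 2))
```

By construction this agrees with the quadratic form e_abs^T W_r e_abs; the factorization merely makes the per-dimension weighting explicit.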
3.4 Connection to Previous Works
Regarding TransR, which rotates and scales the embedding spaces, TransA holds two advantages. Firstly, we weight the feature dimensions to suppress noise. Secondly, we loosen the PSD condition for a more flexible representation. Regarding TransM, which weights feature dimensions using pre-computed coefficients, TransA also holds two advantages. Firstly, we learn the weights from the data, which makes the score function more adaptive. Secondly, we apply a feature transformation that makes the embedding more effective.
3.5 Training Algorithm
To train the model, we use the margin-based ranking error. Taking the other constraints into account, the target function can be defined as follows:

(7)  min Σ_{(h,r,t)∈Δ} Σ_{(h',r',t')∈Δ'} [f_r(h, t) + γ - f_{r'}(h', t')]_+ + λ (Σ_{r∈R} ||W_r||_F^2) + C (Σ_{e∈E} ||e||_2^2 + Σ_{r∈R} ||r||_2^2)

where [·]_+ ≐ max(0, ·), Δ is the set of golden triples and Δ' is the set of incorrect ones, γ is the margin that separates the positive and negative triples, ||·||_F is the F-norm of a matrix, C controls the scaling degree, and λ controls the regularization of the adaptive weight matrices. E means the set of entities and R means the set of relations. At each round of the training process, W_r can be worked out directly by setting the derivative to zero. Then, in order to ensure the non-negativity of W_r, we set all the negative entries of W_r to zero:

(8)  W_r = - Σ_{(h,r,t)∈Δ} (|h + r - t| |h + r - t|^T) + Σ_{(h',r',t')∈Δ'} (|h' + r' - t'| |h' + r' - t'|^T)
As to the complexity of our model, the weight matrix is calculated entirely from the existing embedding vectors, which means TransA has almost the same number of free parameters as TransE. As to the efficiency of our model, the weight matrix has a closed-form solution, which speeds up the training process to a large extent.
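Assuming the closed form sums outer products of the absolute loss vectors (minus over golden triples, plus over corrupted ones) before clipping, the per-relation update might be sketched as follows; the helper name and the omitted regularization scale are our own simplifications:

```python
import numpy as np

def update_weight_matrix(pos_losses, neg_losses):
    """Closed-form update of W_r (up to the regularization scale): minus the
    sum of outer products of absolute loss vectors over golden triples, plus
    that over corrupted ones; negative entries are then set to zero to keep
    W_r non-negative."""
    n = pos_losses[0].shape[0]
    W = np.zeros((n, n))
    for e in pos_losses:       # e = |h + r - t| of a golden triple
        W -= np.outer(e, e)
    for e in neg_losses:       # absolute loss of a corrupted triple
        W += np.outer(e, e)
    return np.maximum(W, 0.0)  # enforce the non-negativity condition
```

Since no gradient steps on W_r are needed, the adaptive metric adds essentially no optimisation cost on top of the entity and relation updates.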
4 Experiments
We evaluate the proposed model on two benchmark tasks: link prediction and triples classification. Experiments are conducted on four public datasets that are subsets of WordNet and Freebase. The statistics of these datasets are listed in Tab.1.
ATPE is short for "Averaged Triple number Per Entity". This quantity measures the diversity and complexity of the datasets. Commonly, more triples per entity lead to more complex knowledge graph structures. To express these more complex structures, entities have to be distributed variously and complexly. Overall, embedding methods produce less satisfactory results on datasets with a higher ATPE, because a large ATPE indicates a various and complex entities/relations embedding situation.
Data  WN18  FB15K  WN11  FB13 
#Rel  18  1,345  11  13 
#Ent  40,943  14,951  38,696  75,043 
#Train  141,442  483,142  112,581  316,232 
#Valid  5,000  50,000  2,609  5,908 
#Test  5,000  59,071  10,544  23,733 
ATPE (triples are summed over #Train, #Valid and #Test)  3.70  39.61  3.25  4.61 
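The ATPE row can be reproduced from the other rows of Tab.1 as (#Train + #Valid + #Test) / #Ent:

```python
# Reproduce the ATPE row of Tab.1: (train + valid + test) / #entities.
stats = {
    "WN18":  (40943, 141442 + 5000 + 5000),
    "FB15K": (14951, 483142 + 50000 + 59071),
    "WN11":  (38696, 112581 + 2609 + 10544),
    "FB13":  (75043, 316232 + 5908 + 23733),
}
atpe = {name: round(triples / ents, 2) for name, (ents, triples) in stats.items()}
# atpe == {"WN18": 3.7, "FB15K": 39.61, "WN11": 3.25, "FB13": 4.61}
```

By this measure FB15K is roughly ten times denser than the other three datasets, which is why it serves as the stress test for complex entities/relations below.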
4.1 Link Prediction
Datasets  WN18  FB15K  
Metric  Mean Rank  HITS@10(%)  Mean Rank  HITS@10(%)  
Raw  Filter  Raw  Filter  Raw  Filter  Raw  Filter  
SE [Bordes et al.2011]  1,011  985  68.5  80.5  273  162  28.8  39.8 
SME [Bordes et al.2012]  545  533  65.1  74.1  274  154  30.7  40.8 
LFM [Jenatton et al.2012]  469  456  71.4  81.6  283  164  26.0  33.1 
TransE [Bordes et al.2013]  263  251  75.4  89.2  243  125  34.9  47.1 
TransH [Wang et al.2014]  401  388  73.0  82.3  212  87  45.7  64.4 
TransR [Lin et al.2015]  238  225  79.8  92.0  198  77  48.2  68.7 
Adaptive Metric (PSD)  289  278  77.6  89.6  172  88  52.4  74.2 
TransA  405  392  82.3  94.3  155  74  56.1  80.4 
Link prediction aims to predict a missing entity given the other entity and the relation. In this task, we predict t given (h, r), or predict h given (r, t). The WN18 and FB15K datasets are the benchmark datasets for this task.
Evaluation Protocol. We follow the same protocol as used in TransE [Bordes et al.2013], TransH [Wang et al.2014] and TransR [Lin et al.2015]. For each testing triple (h, r, t), we replace the tail t by every entity e in the knowledge graph and calculate a dissimilarity score with the score function for the corrupted triple (h, r, e). Ranking these scores in ascending order, we then get the rank of the original correct triple. There are two metrics for evaluation: the averaged rank (Mean Rank) and the proportion of testing triples whose ranks are not larger than 10 (HITS@10). This is called the "Raw" setting. When we filter out the corrupted triples that already exist in the training, validation or test datasets, this is the "Filter" setting. If a corrupted triple exists in the knowledge graph, ranking it before the original triple is acceptable; to eliminate this issue, the "Filter" setting is preferred. In both settings, a lower Mean Rank and a higher HITS@10 are better.
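The protocol can be sketched as follows. This is a deliberately simplified, unvectorised sketch with hypothetical helper names; real implementations batch-score all candidates at once:

```python
def rank_tail(score_fn, h, r, t, entities, known_triples=None):
    """Rank the correct tail t among all candidate tails, ascending by score.
    Passing known_triples switches from the "Raw" to the "Filter" setting:
    corrupted triples that are themselves known to be true are skipped."""
    ranked = []
    for e in entities:
        if known_triples and e != t and (h, r, e) in known_triples:
            continue  # "Filter" setting: drop other true triples
        ranked.append((score_fn(h, r, e), e))
    ranked.sort()
    return [e for _, e in ranked].index(t) + 1

def mean_rank_and_hits(ranks, k=10):
    """Mean Rank and HITS@k over a list of per-triple ranks."""
    return sum(ranks) / len(ranks), sum(r <= k for r in ranks) / len(ranks)
```

The same procedure with the roles of head and tail exchanged yields the head-prediction numbers.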
Implementation. As the datasets are the same, we directly copy the experimental results of several baselines from the literature, as in [Bordes et al.2013], [Wang et al.2014] and [Lin et al.2015]. We tried several settings on the validation dataset to obtain the best configurations for both Adaptive Metric (PSD) and TransA. Under the "bern." sampling strategy, the optimal configurations (learning rate, embedding dimension, margin and regularization weights) were selected separately for WN18 and FB15K.
Tasks  Predicting Head(HITS@10)  Predicting Tail(HITS@10)  
Relation Category  1-1  1-N  N-1  N-N  1-1  1-N  N-1  N-N 
SE [Bordes et al.2011]  35.6  62.6  17.2  37.5  34.9  14.6  68.3  41.3 
SME [Bordes et al.2012]  35.1  53.7  19.0  40.3  32.7  14.9  61.6  43.3 
TransE [Bordes et al.2013]  43.7  65.7  18.2  47.2  43.7  19.7  66.7  50.0 
TransH [Wang et al.2014]  66.8  87.6  28.7  64.5  65.5  39.8  83.3  67.2 
TransR [Lin et al.2015]  78.8  89.2  34.1  69.2  79.2  37.4  90.4  72.1 
TransA  86.8  95.4  42.7  77.8  86.7  54.3  94.4  80.6 
Results. Evaluation results on WN18 and FB15K are reported in Tab.2 and Tab.3, respectively. We can conclude that:

TransA outperforms all the baselines significantly and consistently. This result justifies the effectiveness of TransA.

FB15K presents a very various and complex entities/relations embedding situation, because its ATPE is by far the highest among all the datasets. Nevertheless, TransA performs better than the other baselines on this dataset, indicating that TransA handles various and complex entities/relations embedding situations better. WN18 may be less complex than FB15K because of its smaller ATPE. Compared to TransE, the relative improvement of TransA on WN18 is 5.7%, while that on FB15K is 95.2%. This comparison shows that TransA has more of an advantage in various and complex embedding environments.

TransA promotes the performance for 1-1 relations, which means TransA generally promotes the performance on simple relations. TransA also promotes the performance for 1-N, N-1 and N-N relations (mapping properties of relations follow the same rules as in [Bordes et al.2013]), which demonstrates that TransA works better for complex relation embedding.

Compared to TransR, the better performance of TransA means that the feature weighting and the generalised metric form produced by the absolute operators have significant benefits, as analysed.

Compared to Adaptive Metric (PSD), which applies the same form of score function but constrains the weight matrix to be PSD, TransA is more competent, because our score function with the non-negative matrix condition and the absolute operator produces a more flexible representation than the PSD matrix condition does, as analysed in "Adaptive Metric Approach".

TransA performs badly in Mean Rank on the WN18 dataset. Digging into the details, we discover that there are 27 testing triples (0.54% of the testing set) whose ranks are more than 30,000, and these few cases contribute about 162 to the Mean Rank. The tail or head entities of all these triples never co-occur with the corresponding relation in the training set. It is this insufficient training data that leads to over-distorted weight matrices, and the over-distorted weight matrices are responsible for the bad Mean Rank.
4.2 Triples Classification
Triples classification is a classical task in knowledge base embedding, which aims at predicting whether a given triple (h, r, t) is correct or not. Our evaluation protocol is the same as in prior studies. WN11 and FB13 are the benchmark datasets for this task. Evaluation of classification needs negative labels; the datasets have already been built with negative triples, where each correct triple is corrupted to get one negative triple.
Evaluation Protocol. The decision rule is as follows: for a triple (h, r, t), if f_r(h, t) is below a relation-specific threshold σ_r, the triple is predicted positive; otherwise negative. The thresholds are determined on the validation dataset. The final accuracy is based on how many triples are classified correctly.
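A sketch of the decision rule and the validation-set threshold search (our own minimal illustration; the threshold is searched over a candidate grid here, whereas implementations typically sweep the sorted validation scores):

```python
def classify(score, sigma):
    """Decision rule: a triple is predicted positive iff its score is below sigma."""
    return score < sigma

def best_threshold(valid_scores, valid_labels, candidates):
    """Pick the threshold that maximises accuracy on the validation set."""
    def accuracy(sigma):
        hits = sum(classify(s, sigma) == y
                   for s, y in zip(valid_scores, valid_labels))
        return hits / len(valid_labels)
    return max(candidates, key=accuracy)
```

One threshold is fitted per relation, so relations with very different score scales can still be classified accurately.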
Implementation. As all methods use the same datasets, we directly copy the results of different methods from the literature. We tried several settings on the validation dataset to obtain the best configurations for both Adaptive Metric (PSD) and TransA. The optimal configurations use the "bern." sampling strategy, with the learning rate, embedding dimension, margin and regularization weights tuned separately on WN11 and FB13.
Methods  WN11  FB13  Avg. 

LFM  73.8  84.3  79.0 
NTN  70.4  87.1  78.8 
TransE  75.9  81.5  78.7 
TransH  78.8  83.3  81.1 
TransR  85.9  82.5  84.2 
Adaptive Metric (PSD)  81.4  87.1  84.3 
TransA  83.2  87.3  85.3 
Results. Accuracies are reported in Tab.4 and Fig.3. According to the "Adaptive Metric Approach" section, we can work out the dimension weights for each relation. Because the minimal weight is too small to allow a significant analysis, we choose the median one to represent a relatively small weight. Thus, "Weight Difference" is calculated as (Maximal Weight − Median Weight)/Median Weight. The bigger the weight difference is, the more significant the effect the feature weighting makes. Notably, scaling by the median weight makes the weight differences comparable to each other. We observe that:

Overall, TransA yields the best average accuracy, illustrating the effectiveness of TransA.

Accuracies vary with the weight difference, meaning that the feature weighting benefits the accuracies. This supports the theoretical analysis and the effectiveness of TransA.

Compared to Adaptive Metric (PSD), TransA performs better, because our score function with the non-negative matrix condition and the absolute operator leads to a more flexible representation than the PSD matrix condition does.
5 Conclusion
In this paper, we propose TransA, a translation-based knowledge graph embedding method with an adaptive and flexible metric. TransA applies elliptical equipotential hypersurfaces to characterise the embedding topologies and weights several specific feature dimensions for each relation to suppress noise. Thus, our adaptive metric approach can effectively model various and complex entities/relations in knowledge bases. Experiments are conducted on two benchmark tasks, and the results show that TransA achieves consistent and significant improvements over the current state-of-the-art baselines. To make our results reproducible, our code and data will be published on GitHub.
References
 [Bao et al.2014] Bao, J.; Duan, N.; Zhou, M.; and Zhao, T. 2014. Knowledge-based question answering as machine translation. In Proceedings of ACL.
 [Bollacker et al.2008] Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250. ACM.

 [Bordes et al.2011] Bordes, A.; Weston, J.; Collobert, R.; Bengio, Y.; et al. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence.
 [Bordes et al.2012] Bordes, A.; Glorot, X.; Weston, J.; and Bengio, Y. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In International Conference on Artificial Intelligence and Statistics, 127–135.
 [Bordes et al.2013] Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, 2787–2795.
 [Bordes et al.2014] Bordes, A.; Glorot, X.; Weston, J.; and Bengio, Y. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2):233–259.

 [Collobert and Weston2008] Collobert, R., and Weston, J. 2008. A unified architecture for natural language processing: Deep neural networks with multi-task learning. In Proceedings of the 25th international conference on Machine learning, 160–167. ACM.
 [Fader, Zettlemoyer, and Etzioni2014] Fader, A.; Zettlemoyer, L.; and Etzioni, O. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1156–1165. ACM.
 [Fan et al.2014] Fan, M.; Zhou, Q.; Chang, E.; and Zheng, T. F. 2014. Transition-based knowledge graph embedding with relational mapping properties. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, 328–337.
 [Golub and Van Loan2012] Golub, G. H., and Van Loan, C. F. 2012. Matrix computations, volume 3. JHU Press.
 [Guo et al.2015] Guo, S.; Wang, Q.; Wang, B.; Wang, L.; and Guo, L. 2015. Semantically smooth knowledge graph embedding. In Proceedings of ACL.
 [Jenatton et al.2012] Jenatton, R.; Roux, N. L.; Bordes, A.; and Obozinski, G. R. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems, 3167–3175.
 [Kulis2012] Kulis, B. 2012. Metric learning: A survey. Foundations & Trends in Machine Learning 5(4):287–364.
 [Lin et al.2015] Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
 [Lin, Liu, and Sun2015] Lin, Y.; Liu, Z.; and Sun, M. 2015. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
 [Miller1995] Miller, G. A. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39–41.
 [Nickel, Tresp, and Kriegel2011] Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11), 809–816.
 [Nickel, Tresp, and Kriegel2012] Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2012. Factorizing YAGO: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, 271–280. ACM.
 [Socher et al.2013] Socher, R.; Chen, D.; Manning, C. D.; and Ng, A. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, 926–934.
 [Wang and Sun2014] Wang, F., and Sun, J. 2014. Survey on distance metric learning and dimensionality reduction in data mining. Data Mining and Knowledge Discovery 1–31.
 [Wang et al.2014] Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the TwentyEighth AAAI Conference on Artificial Intelligence, 1112–1119.
 [Wang, Wang, and Guo2015] Wang, Q.; Wang, B.; and Guo, L. 2015. Knowledge base completion using embeddings and rules. In Proceedings of the 24th International Joint Conference on Artificial Intelligence.