Knowledge graphs such as WordNet [Miller1995] and Freebase [Bollacker et al.2008] play an important role in AI research and applications. Recent research on tasks such as query expansion prefers to involve knowledge graphs [Bao et al.2014], while industrial applications such as question-answering robots are also powered by knowledge graphs [Fader, Zettlemoyer, and Etzioni2014]. However, knowledge graphs are symbolic and logical, so numerical machine learning methods can hardly be applied to them. This disadvantage is one of the most important challenges for the usage of knowledge graphs. To provide a general paradigm that supports computing on knowledge graphs, various knowledge graph embedding methods have been proposed, such as TransE [Bordes et al.2013], TransH [Wang et al.2014] and TransR [Lin et al.2015].
Embedding is a novel approach to address the representation and reasoning problems for knowledge graphs. It transforms entities and relations into continuous vector spaces, where knowledge graph completion and knowledge classification can be done. Most commonly, a knowledge graph is composed of triples $(h, r, t)$, in which a head entity $h$, a relation $r$ and a tail entity $t$ are presented. Among all the proposed embedding approaches, geometry-based methods are an important branch, yielding state-of-the-art predictive performance. More specifically, geometry-based embedding methods represent an entity or a relation as a $k$-dimensional vector, then define a score function $f_r(h, t)$ to measure the plausibility of a triple $(h, r, t)$. Such approaches almost all follow the same geometric principle $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$ and apply the same loss metric, but differ in the relation space where a head entity $\mathbf{h}$ connects to a tail entity $\mathbf{t}$.
However, the loss metric in translation-based models is oversimplified. This flaw makes the current embedding methods incompetent to model various and complex entities/relations in knowledge base.
Firstly, due to the inflexibility of the loss metric, current translation-based methods apply spherical equipotential hyper-surfaces with different plausibilities, where the nearer to the centre, the more plausible the triple is. As illustrated in Fig.1, spherical equipotential hyper-surfaces are applied in (a), so it is difficult to identify the matched tail entities from the unmatched ones. As is well known for knowledge graphs, complex relations, such as one-to-many, many-to-one and many-to-many relations, always lead to complex embedding topologies. Though this complex embedding situation is an urgent challenge, spherical equipotential hyper-surfaces are not flexible enough to characterise the topologies, making current translation-based methods incompetent for this task.
Secondly, because of the oversimplified loss metric, current translation-based methods treat each dimension identically. This observation leads to a flaw illustrated in Fig.2. As each dimension is treated identically in (a) (the dashed lines indicate the x-axis and y-axis components of the loss), the incorrect entities are matched, because they are closer than the correct ones when measured by the isotropic Euclidean distance. Therefore, we have good reason to conjecture that a relation could only be affected by several specific dimensions, while the other, unrelated dimensions would be noisy. Treating all the dimensions identically involves much noise and degrades the performance.
Motivated by these two issues, in this paper we propose TransA, an embedding method that utilizes an adaptive and flexible metric. First, TransA applies elliptical surfaces instead of spherical ones. By this means, the complex embedding topologies induced by complex relations can be represented better. Then, as analysed in “Adaptive Metric Approach”, TransA can be treated as weighting transformed feature dimensions, so the noise from unrelated dimensions is suppressed. We demonstrate our ideas in Fig.1 (b) and Fig.2 (b).
To summarize, TransA takes the adaptive metric ideas for better knowledge representation. Our method effectively models various and complex entities/relations in knowledge base, and outperforms all the state-of-the-art baselines with significant improvements in experiments.
The rest of the paper is organized as follows: we survey the related research, then introduce our approach along with its theoretical analysis; next, the experiments are presented, and in the final part we summarize our paper.
2 Related Work
We classify prior studies into two lines: one is the translation-based embedding methods and the other includes many other embedding methods.
2.1 Translation-Based Embedding Methods
All the translation-based methods share a common principle $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$, but differ in defining the relation-related space where a head entity $\mathbf{h}$ connects to a tail entity $\mathbf{t}$. This principle indicates that $\mathbf{t}$ should be the nearest neighbour of $(\mathbf{h} + \mathbf{r})$. Hence, the translation-based methods all have the same form of score function, which applies the Euclidean distance to measure the loss, as follows:

$$f_r(h, t) = \|\mathbf{h}_r + \mathbf{r} - \mathbf{t}_r\|_2^2$$

where $\mathbf{h}_r, \mathbf{t}_r$ are the entity embedding vectors projected into the relation-specific space. Note that this branch of methods keeps the state-of-the-art performance.
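As a concrete illustration, this shared score function can be sketched in a few lines of NumPy. This is a hedged sketch rather than any paper's reference implementation; the identity projection shown corresponds to TransE, while TransH and TransR would first project $\mathbf{h}$ and $\mathbf{t}$ into the relation-specific space.

```python
import numpy as np

def translation_score(h, r, t):
    """Shared translation-based score f_r(h,t) = ||h + r - t||_2^2.
    Lower score = more plausible triple (h, r, t)."""
    loss = h + r - t
    return float(np.dot(loss, loss))  # squared Euclidean distance

# Toy vectors (invented for illustration): here h + r == t exactly.
h = np.array([0.2, 0.5])
r = np.array([0.3, -0.1])
t = np.array([0.5, 0.4])
print(translation_score(h, r, t))  # 0.0: a perfectly plausible triple
```

A corrupted tail would move `t` away from `h + r` and increase the score, which is exactly what the margin-based training objectives of these methods exploit.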
Projecting entities onto different hyperplanes (TransH) or transforming entities by different matrices (TransR) allows entities to play different roles under different embedding situations. However, as the “Introduction” argues, these methods cannot model complex knowledge graphs well and particularly perform unsatisfactorily in situations with various and complex entities/relations, because of the oversimplified metric.
TransM [Fan et al.2014] pre-calculates the distinct weight for each training triple to perform better.
2.2 Other Embedding Methods
There are also many other models for knowledge graph embedding.
Unstructured Model (UM). The UM [Bordes et al.2012] is a simplified version of TransE obtained by setting all the relation vectors to zero ($\mathbf{r} = \mathbf{0}$). Obviously, the relation is not considered in this model.
Structured Embedding (SE). The SE model [Bordes et al.2011] applies two relation-related matrices, one for the head and the other for the tail. The score function is defined as $f_r(h, t) = \|\mathbf{W}_{r,1}\mathbf{h} - \mathbf{W}_{r,2}\mathbf{t}\|$. According to [Socher et al.2013], this model cannot capture the relationship between entities and relations.
Single Layer Model (SLM). The SLM applies a neural network to knowledge graph embedding. The score function is defined as

$$f_r(h, t) = \mathbf{u}_r^\top g(\mathbf{M}_{r,1}\mathbf{h} + \mathbf{M}_{r,2}\mathbf{t})$$

where $g$ is the $\tanh$ function. Note that SLM is a special case of NTN when zero tensors are applied. [Collobert and Weston2008] proposed a similar method, but applied the approach to the language model.
Semantic Matching Energy (SME). The SME model [Bordes et al.2012; Bordes et al.2014] attempts to capture the correlations between entities and relations by matrix products and Hadamard products. The score functions are defined as follows:

$$f_r(h, t) = (\mathbf{M}_1\mathbf{h} + \mathbf{M}_2\mathbf{r} + \mathbf{b}_1)^\top(\mathbf{M}_3\mathbf{t} + \mathbf{M}_4\mathbf{r} + \mathbf{b}_2)$$
$$f_r(h, t) = \big((\mathbf{M}_1\mathbf{h}) \otimes (\mathbf{M}_2\mathbf{r}) + \mathbf{b}_1\big)^\top\big((\mathbf{M}_3\mathbf{t}) \otimes (\mathbf{M}_4\mathbf{r}) + \mathbf{b}_2\big)$$

where $\mathbf{M}_1$, $\mathbf{M}_2$, $\mathbf{M}_3$ and $\mathbf{M}_4$ are weight matrices, $\otimes$ is the Hadamard product, and $\mathbf{b}_1$, $\mathbf{b}_2$ are bias vectors. In some recent work [Bordes et al.2014], the second form of the score function is re-defined with 3-way tensors instead of matrices.
Latent Factor Model (LFM). The LFM [Jenatton et al.2012] uses second-order correlations between entities by a quadratic form, defined as $f_r(h, t) = \mathbf{h}^\top \mathbf{W}_r \mathbf{t}$.
Neural Tensor Network (NTN). The NTN model [Socher et al.2013] defines an expressive score function for graph embedding that combines the SLM and LFM:

$$f_r(h, t) = \mathbf{u}_r^\top g(\mathbf{h}^\top \mathbf{W}_r \mathbf{t} + \mathbf{M}_{r,1}\mathbf{h} + \mathbf{M}_{r,2}\mathbf{t} + \mathbf{b}_r)$$

where $\mathbf{u}_r$ is a relation-specific linear layer, $g(\cdot)$ is the $\tanh$ function, and $\mathbf{W}_r$ is a 3-way tensor. However, the high complexity of NTN may degrade its applicability to large-scale knowledge bases.
Semantically Smooth Embedding (SSE). [Guo et al.2015] aims at leveraging the geometric structure of embedding space to make entity representations semantically smooth.
3 Adaptive Metric Approach
In this section, we introduce the adaptive metric approach, TransA, and present its theoretical analysis from two perspectives.
3.1 Adaptive Metric Score Function
As mentioned in the “Introduction”, all the translation-based methods obey the same principle $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$, but they differ in the relation-specific spaces into which entities are projected. Thus, such methods share a similar score function

$$f_r(h, t) = \|\mathbf{h} + \mathbf{r} - \mathbf{t}\|_2^2.$$

This score function is actually a Euclidean metric. The disadvantages of this oversimplified metric have been discussed in the “Introduction”. As a consequence, the proposed TransA replaces the inflexible Euclidean distance with an adaptive Mahalanobis distance of the absolute loss, because the Mahalanobis distance is more flexible and more adaptive [Wang and Sun2014]. Thus, our score function is as follows:

$$f_r(h, t) = (|\mathbf{h} + \mathbf{r} - \mathbf{t}|)^\top \mathbf{W}_r (|\mathbf{h} + \mathbf{r} - \mathbf{t}|)$$

where $|\mathbf{h} + \mathbf{r} - \mathbf{t}| \doteq (|h_1 + r_1 - t_1|, |h_2 + r_2 - t_2|, \dots, |h_n + r_n - t_n|)$, and $\mathbf{W}_r$ is a relation-specific symmetric non-negative weight matrix that corresponds to the adaptive metric. Differently from the traditional score functions, we take the absolute value, since we want to measure the absolute loss between $(\mathbf{h} + \mathbf{r})$ and $\mathbf{t}$. Furthermore, we list two main reasons for applying the absolute operator.
On one hand, the absolute operator makes the score function a well-defined norm under the condition that all the entries of $\mathbf{W}_r$ are non-negative. A well-defined norm is necessary for most metric learning scenes [Kulis2012], and the non-negative condition can be achieved more easily than PSD, so it generalises the common metric learning algebraic form for better rendering the knowledge topologies. Our score function can be expanded as an induced norm $N_r(\mathbf{e}) = \sqrt{f_r(h, t)}$, where $\mathbf{e} = \mathbf{h} + \mathbf{r} - \mathbf{t}$. Obviously, $N_r$ is non-negative, identical and absolutely homogeneous. Besides, with the easily verified inequality $|\mathbf{e}_1 + \mathbf{e}_2| \le |\mathbf{e}_1| + |\mathbf{e}_2|$, the triangle inequality holds. In total, the absolute operators make the metric a norm under an easy-to-achieve condition, helping to generalise the representation ability.
On the other hand, in geometry, negative or positive values indicate the downward or upward direction, while in our approach we do not consider this factor. Let's see an instance as shown in Fig.2. For the entity whose x-axis component of the loss vector is negative, enlarging this component would make the overall loss smaller, while this case is supposed to make the overall loss larger. As a result, the absolute operator is critical to our approach. For a numerical example without the absolute operator: when the embedding dimension is two, the weight matrix is $\mathbf{W}_r = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and the loss vector is $(\mathbf{h} + \mathbf{r} - \mathbf{t}) = (e_1, e_2)$, the overall loss would be $2e_1e_2$. If $e_1 \le 0$ and $e_2 \ge 0$, an $e_1$ with much larger absolute value would reduce the overall loss, and this is not desired.
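The numerical example above can be checked directly. The sketch below implements the TransA-style score with and without the absolute operator (both function names and all vectors are ours, invented for illustration) and reproduces the $2e_1e_2$ behaviour that motivates taking absolute values.

```python
import numpy as np

def transa_score(h, r, t, W):
    """TransA score: f_r(h,t) = |h + r - t|^T W |h + r - t|."""
    e = np.abs(h + r - t)
    return float(e @ W @ e)

def transa_score_no_abs(h, r, t, W):
    """Same quadratic form WITHOUT the absolute operator, for comparison."""
    e = h + r - t
    return float(e @ W @ e)

# The paper's two-dimensional counter-example: W = [[0, 1], [1, 0]],
# so without abs the score is 2 * e1 * e2.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
h, r = np.zeros(2), np.zeros(2)
t_small = -np.array([-0.1, 0.5])   # loss vector e = (-0.1, 0.5)
t_large = -np.array([-2.0, 0.5])   # loss vector e = (-2.0, 0.5)

print(transa_score_no_abs(h, r, t_small, W))  # -0.1
print(transa_score_no_abs(h, r, t_large, W))  # -2.0: larger |e1| *lowers* the loss
print(transa_score(h, r, t_large, W))         # 2.0: with abs, the loss grows as desired
```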
3.2 Perspective from Equipotential Surfaces
TransA shares almost the same geometric explanation with the other translation-based methods, but they differ in the loss metric. For the other translation-based methods, the equipotential hyper-surfaces are spheres, as the Euclidean distance defines:

$$\|\mathbf{h} + \mathbf{r} - \mathbf{t}\|_2^2 = C$$

where $C$ means the threshold or the equipotential value. However, for TransA, the equipotential hyper-surfaces are elliptical surfaces, as the Mahalanobis distance of the absolute loss states [Kulis2012]:

$$(|\mathbf{h} + \mathbf{r} - \mathbf{t}|)^\top \mathbf{W}_r (|\mathbf{h} + \mathbf{r} - \mathbf{t}|) = C$$
Note that the elliptical hyper-surfaces are distorted a bit as the absolute operator is applied, but this makes no difference for analysing the performance of TransA. As we know, different equipotential hyper-surfaces correspond to different thresholds, and different thresholds decide whether the triples are correct or not. Due to the practical situation that our knowledge bases are large-scale and very complex, the topologies of embedding cannot be distributed as uniformly as spheres, as justified by Fig.1. Thus, replacing the spherical equipotential hyper-surfaces with elliptical ones enhances the embedding.
As Fig.1 illustrates, TransA performs better for one-to-many relations. The metric of TransA is symmetric, so it is reasonable that TransA would also perform better for many-to-one relations. Moreover, a many-to-many relation can be treated as both a many-to-one and a one-to-many relation. Generally, TransA performs better for all the complex relations.
3.3 Perspective from Feature Weighting
TransA can be regarded as weighting transformed features. For the weight matrix $\mathbf{W}_r$, which is symmetric, we obtain the equivalent unique form by LDL decomposition [Golub and Van Loan2012] as follows:

$$f_r(h, t) = (\mathbf{L}_r |\mathbf{h} + \mathbf{r} - \mathbf{t}|)^\top \mathbf{D}_r (\mathbf{L}_r |\mathbf{h} + \mathbf{r} - \mathbf{t}|), \qquad \mathbf{W}_r = \mathbf{L}_r^\top \mathbf{D}_r \mathbf{L}_r$$

In the above equations, $\mathbf{L}_r$ can be viewed as a transformation matrix, which transforms the loss vector $|\mathbf{h} + \mathbf{r} - \mathbf{t}|$ into another space. Furthermore, $\mathbf{D}_r = \mathrm{diag}(w_1, w_2, \dots, w_n)$ is a diagonal matrix, and the different embedding dimensions are weighted by $w_i$.
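This feature-weighting view can be verified numerically. The sketch below factors a symmetric weight matrix with an eigendecomposition rather than a strict LDL decomposition (for a symmetric matrix either yields the same weighted-transformed-feature form); the matrix `W` and vector `e` are invented for illustration.

```python
import numpy as np

# A symmetric W factors as W = L^T D L (here L = Q^T from W = Q diag(d) Q^T),
# so the quadratic score is a weighted sum over transformed dimensions:
# e^T W e = sum_i d_i * (L e)_i^2.
W = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, Q = np.linalg.eigh(W)      # W = Q @ diag(eigvals) @ Q.T
L, D = Q.T, np.diag(eigvals)

e = np.array([0.3, 0.4])            # stands in for |h + r - t|
direct = float(e @ W @ e)
weighted = float((L @ e) @ D @ (L @ e))   # identical value, feature-weighting view
print(direct, weighted)
assert np.isclose(direct, weighted)
```

The diagonal entries of `D` play the role of the per-dimension weights $w_i$: dimensions with small weights contribute little to the score, which is how noisy dimensions get suppressed.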
As analysed in the “Introduction”, a relation could only be affected by several specific dimensions, while the other dimensions would be noisy. Treating different dimensions identically, as current translation-based methods do, can hardly suppress the noise, consequently yielding unsatisfactory performance. We believe that different dimensions play different roles, particularly when entities are distributed divergently. Unlike existing methods, TransA can automatically learn the weights from the data. This may explain why TransA outperforms TransR although both TransA and TransR transform the entity space with matrices.
3.4 Connection to Previous Works
Regarding TransR, which rotates and scales the embedding spaces, TransA holds two advantages over it. Firstly, we weight feature dimensions to avoid the noise. Secondly, we loosen the PSD condition for a flexible representation. Regarding TransM, which weights feature dimensions using pre-computed coefficients, TransA also holds two advantages. Firstly, we learn the weights from the data, which makes the score function more adaptive. Secondly, we apply the feature transformation that makes the embedding more effective.
3.5 Training Algorithm
To train the model, we use the margin-based ranking error. Taking the other constraints into account, the target function can be defined as follows:

$$\min \sum_{(h,r,t)\in\Delta}\ \sum_{(h',r',t')\in\Delta'} \big[f_r(h,t) + \gamma - f_{r'}(h',t')\big]_+ \; + \; \lambda\Big(\sum_{r\in R}\|\mathbf{W}_r\|_F^2\Big) \; + \; C\Big(\sum_{e\in E}\|\mathbf{e}\|_2^2 + \sum_{r\in R}\|\mathbf{r}\|_2^2\Big), \quad \text{s.t. } [\mathbf{W}_r]_{ij} \ge 0$$

where $[\cdot]_+ \doteq \max(0, \cdot)$, $\Delta$ is the set of golden triples and $\Delta'$ is the set of incorrect ones, and $\gamma$ is the margin that separates the positive and negative triples. $\|\cdot\|_F$ is the F-norm of a matrix, $C$ controls the scaling degree, and $\lambda$ controls the regularization of the adaptive weight matrix. $E$ means the set of entities and $R$ means the set of relations. At each round of the training process, $\mathbf{W}_r$ can be worked out directly by setting the derivative with respect to it to zero. Then, in order to ensure the non-negative condition of $\mathbf{W}_r$, we set all the negative entries of $\mathbf{W}_r$ to zero.
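As a toy illustration of the training step, the following hedged Python sketch computes the margin-based ranking error for a few score pairs and applies the negative-entry clipping described above; the closed-form update of the weight matrix and the regularization terms are omitted, and all numbers are made up.

```python
import numpy as np

def margin_ranking_loss(pos_scores, neg_scores, gamma=1.0):
    """Sum over (golden, corrupted) pairs of [f(pos) + gamma - f(neg)]_+.
    Regularizers on W_r and on the embeddings are omitted in this sketch."""
    return sum(max(0.0, p + gamma - n) for p, n in zip(pos_scores, neg_scores))

# A golden triple should score lower than its corrupted counterpart by
# at least the margin gamma; only violating pairs contribute loss.
loss = margin_ranking_loss([0.2, 0.9], [1.5, 1.0], gamma=1.0)
print(loss)  # max(0, -0.3) + max(0, 0.9) = 0.9

# After the closed-form update of a weight matrix, enforce the
# non-negative condition by zeroing out its negative entries.
W_r = np.array([[0.8, -0.2],
                [-0.2, 1.1]])
W_r = np.maximum(W_r, 0.0)
print(W_r)
```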
As to the complexity of our model, the weight matrix is completely calculated from the existing embedding vectors, which means TransA has almost the same number of free parameters as TransE. As to the efficiency of our model, the weight matrix has a closed-form solution, which speeds up the training process to a large extent.
4 Experiments
We evaluate the proposed model on two benchmark tasks: link prediction and triples classification. Experiments are conducted on four public datasets that are subsets of Wordnet and Freebase. The statistics of these datasets are listed in Tab.1.
ATPE is short for “Averaged Triple number Per Entity”. This quantity measures the diversity and complexity of a dataset. Commonly, more triples lead to more complex structures in a knowledge graph. To express these more complex structures, entities are distributed variously and complexly. Overall, embedding methods produce less satisfactory results on datasets with a higher ATPE, because a large ATPE means a various and complex entities/relations embedding situation.
| Dataset | WN18 | FB15K | WN11 | FB13 |
| ATPE (Averaged Triple number Per Entity; triples are summed over #Train, #Valid and #Test) | 3.70 | 39.61 | 3.25 | 4.61 |
4.1 Link Prediction
| Method | WN18 Mean Rank (Raw/Filter) | WN18 HITS@10% (Raw/Filter) | FB15K Mean Rank (Raw/Filter) | FB15K HITS@10% (Raw/Filter) |
| SE [Bordes et al.2011] | 1,011 / 985 | 68.5 / 80.5 | 273 / 162 | 28.8 / 39.8 |
| SME [Bordes et al.2012] | 545 / 533 | 65.1 / 74.1 | 274 / 154 | 30.7 / 40.8 |
| LFM [Jenatton et al.2012] | 469 / 456 | 71.4 / 81.6 | 283 / 164 | 26.0 / 33.1 |
| TransE [Bordes et al.2013] | 263 / 251 | 75.4 / 89.2 | 243 / 125 | 34.9 / 47.1 |
| TransH [Wang et al.2014] | 401 / 388 | 73.0 / 82.3 | 212 / 87 | 45.7 / 64.4 |
| TransR [Lin et al.2015] | 238 / 225 | 79.8 / 92.0 | 198 / 77 | 48.2 / 68.7 |
| Adaptive Metric (PSD) | 289 / 278 | 77.6 / 89.6 | 172 / 88 | 52.4 / 74.2 |
Link prediction aims to predict a missing entity given the other entity and the relation. In this task, we predict $t$ given $(h, r)$, or predict $h$ given $(r, t)$. The WN18 and FB15K datasets are the benchmark datasets for this task.
Evaluation Protocol. We follow the same protocol as used in TransE [Bordes et al.2013], TransH [Wang et al.2014] and TransR [Lin et al.2015]. For each testing triple $(h, r, t)$, we replace the tail $t$ with every entity $e$ in the knowledge graph and calculate a dissimilarity score with the score function for each corrupted triple $(h, r, e)$. Ranking these scores in ascending order, we then get the rank of the original correct triple. There are two metrics for evaluation: the averaged rank (Mean Rank) and the proportion of testing triples whose ranks are not larger than 10 (HITS@10). This is called the “Raw” setting. When we filter out the corrupted triples that already exist in the training, validation or test datasets, this is the “Filter” setting. If a corrupted triple exists in the knowledge graph, ranking it before the original triple is acceptable; to eliminate this issue, the “Filter” setting is preferred. In both settings, a lower Mean Rank or a higher HITS@10 is better.
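The “Raw” ranking procedure can be sketched as follows. This is a hedged toy sketch with a three-entity embedding and a TransE-style L1 dissimilarity (any of the score functions discussed could be plugged in); the entity matrix `E` and relation vector `r` are invented for illustration.

```python
import numpy as np

def transe_score(h, r, t):
    """L1 dissimilarity ||h + r - t||_1; lower = more plausible."""
    return float(np.sum(np.abs(h + r - t)))

def raw_tail_rank(E, h_idx, r_vec, t_idx):
    """'Raw' rank of the correct tail: score every candidate entity,
    sort ascending, and return the 1-based position of the true tail."""
    scores = np.array([transe_score(E[h_idx], r_vec, E[i]) for i in range(len(E))])
    order = np.argsort(scores, kind="stable")
    return int(np.where(order == t_idx)[0][0]) + 1

# Tiny toy embedding: entity 2 is exactly entity 0 translated by r.
E = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [0.5, 0.25]])
r = np.array([0.5, 0.25])
print(raw_tail_rank(E, 0, r, 2))  # 1: the true tail is ranked first
```

Mean Rank averages this value over all test triples, and HITS@10 is the fraction of triples with rank at most 10; the “Filter” variant would additionally skip candidates that form other true triples.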
Implementation. As the datasets are the same, we directly copy the experimental results of several baselines from the literature, as in [Bordes et al.2013], [Wang et al.2014] and [Lin et al.2015]. We tried several settings on the validation dataset to get the best configuration for both Adaptive Metric (PSD) and TransA. The optimal configurations (learning rate, embedding dimension, margin and regularization, under the “bern.” sampling strategy) were selected separately for WN18 and FB15K.
| Method | Predicting Head HITS@10 (1-1 / 1-N / N-1 / N-N) | Predicting Tail HITS@10 (1-1 / 1-N / N-1 / N-N) |
| SE [Bordes et al.2011] | 35.6 / 62.6 / 17.2 / 37.5 | 34.9 / 14.6 / 68.3 / 41.3 |
| SME [Bordes et al.2012] | 35.1 / 53.7 / 19.0 / 40.3 | 32.7 / 14.9 / 61.6 / 43.3 |
| TransE [Bordes et al.2013] | 43.7 / 65.7 / 18.2 / 47.2 | 43.7 / 19.7 / 66.7 / 50.0 |
| TransH [Wang et al.2014] | 66.8 / 87.6 / 28.7 / 64.5 | 65.5 / 39.8 / 83.3 / 67.2 |
| TransR [Lin et al.2015] | 78.8 / 89.2 / 34.1 / 69.2 | 79.2 / 37.4 / 90.4 / 72.1 |
TransA outperforms all the baselines significantly and consistently. This result justifies the effectiveness of TransA.
FB15K presents a very various and complex entities/relations embedding situation, because its ATPE is by far the highest among all the datasets. Nevertheless, TransA performs better than the other baselines on this dataset, indicating that TransA performs better in various and complex entities/relations embedding situations. WN18 may be less complex than FB15K because of its smaller ATPE. Compared to TransE, the relative improvement of TransA on WN18 is 5.7%, while that on FB15K is 95.2%. This comparison shows that TransA has more advantages in various and complex embedding environments.
TransA promotes the performance for 1-1 relations, which means TransA generally promotes the performance on simple relations. TransA also promotes the performance for 1-N, N-1 and N-N relations (mapping properties of relations follow the same rules as in [Bordes et al.2013]), which demonstrates that TransA works better for complex relation embedding.
Compared to TransR, the better performance of TransA means that the feature weighting and the generalised metric form led by the absolute operators have significant benefits, as analysed.
Compared to Adaptive Metric (PSD), which applies the score function $f_r(h,t) = (\mathbf{h} + \mathbf{r} - \mathbf{t})^\top \mathbf{W}_r (\mathbf{h} + \mathbf{r} - \mathbf{t})$ and constrains $\mathbf{W}_r$ to be PSD, TransA is more competent, because our score function with the non-negative matrix condition and the absolute operator produces a more flexible representation than that with the PSD matrix condition does, as analysed in “Adaptive Metric Approach”.
TransA performs badly in Mean Rank on the WN18 dataset. Digging into the details, we discover that there are 27 testing triples (0.54% of the testing set) whose ranks are more than 30,000, and these few cases cause about 162 points of Mean Rank loss. The tail or head entity of each of these triples never co-occurs with the corresponding relation in the training set. It is the insufficient training data that leads to an over-distorted weight matrix, and the over-distorted weight matrix is responsible for the bad Mean Rank.
4.2 Triples Classification
Triples classification is a classical task in knowledge base embedding, which aims at predicting whether a given triple $(h, r, t)$ is correct or not. Our evaluation protocol is the same as in prior studies. WN11 and FB13 are the benchmark datasets for this task. Evaluation of classification needs negative labels, and these datasets have already been built with negative triples, where each correct triple is corrupted to get one negative triple.
Evaluation Protocol. The decision rule is as follows: for a triple $(h, r, t)$, if $f_r(h, t)$ is below a relation-specific threshold $\sigma_r$, the triple is predicted positive; otherwise negative. The thresholds are determined on the validation dataset. The final accuracy is based on how many triples are classified correctly.
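The decision rule amounts to a per-relation threshold comparison, which the following sketch illustrates; the scores, relation names and thresholds are all made up for the example.

```python
def classify(scores, thresholds, relations):
    """A triple (h, r, t) is predicted positive iff f_r(h, t) < sigma_r,
    where sigma_r is a per-relation threshold tuned on validation data."""
    return [s < thresholds[rel] for s, rel in zip(scores, relations)]

# Hypothetical dissimilarity scores and per-relation thresholds.
scores = [0.4, 2.1, 0.9]
thresholds = {"born_in": 1.0, "capital_of": 0.5}
relations = ["born_in", "born_in", "capital_of"]
print(classify(scores, thresholds, relations))  # [True, False, False]
```

In practice each $\sigma_r$ is chosen to maximise classification accuracy on the validation split before being applied to the test triples.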
Implementation. As all the methods use the same datasets, we directly copy the results of the different methods from the literature. We tried several settings on the validation dataset to get the best configuration for both Adaptive Metric (PSD) and TransA. The optimal configurations (with “bern.” sampling) were selected separately for WN11 and FB13.
| Method | WN11 | FB13 | Avg. |
| Adaptive Metric (PSD) | 81.4 | 87.1 | 84.3 |
Results. Accuracies are reported in Tab.4 and Fig.3. According to the “Adaptive Metric Approach” section, we can work out the weights for each relation from the diagonal of $\mathbf{D}_r$ in the decomposition of $\mathbf{W}_r$. Because the minimal weight is too small to support a significant analysis, we choose the median one to represent a relatively small weight. Thus, the “Weight Difference” is calculated as (Maximal Weight − Median Weight) / Median Weight. The bigger the weight difference is, the more significant an effect the feature weighting makes. Notably, scaling by the median weight makes the weight differences comparable to each other. We observe that:
Overall, TransA yields the best average accuracy, illustrating the effectiveness of TransA.
Accuracies vary with the weight difference, meaning that the feature weighting benefits the accuracies. This verifies the theoretical analysis and the effectiveness of TransA.
Compared to Adaptive Metric (PSD), TransA performs better, because our score function with the non-negative matrix condition and the absolute operator leads to a more flexible representation than that with the PSD matrix condition does.
5 Conclusion
In this paper, we propose TransA, a translation-based knowledge graph embedding method with an adaptive and flexible metric. TransA applies elliptical equipotential hyper-surfaces to characterise the embedding topologies and weights several specific feature dimensions for each relation to avoid noise. Thus, our adaptive metric approach can effectively model various and complex entities/relations in a knowledge base. Experiments are conducted on two benchmark tasks, and the results show that TransA achieves consistent and significant improvements over the current state-of-the-art baselines. To reproduce our results, our code and data will be published on GitHub.
- [Bao et al.2014] Bao, J.; Duan, N.; Zhou, M.; and Zhao, T. 2014. Knowledge-based question answering as machine translation. In Proceedings of ACL.
- [Bollacker et al.2008] Bollacker, K.; Evans, C.; Paritosh, P.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, 1247–1250. ACM.
- [Bordes et al.2011] Bordes, A.; Weston, J.; Collobert, R.; and Bengio, Y. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence.
- [Bordes et al.2012] Bordes, A.; Glorot, X.; Weston, J.; and Bengio, Y. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In International Conference on Artificial Intelligence and Statistics, 127–135.
- [Bordes et al.2013] Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, 2787–2795.
- [Bordes et al.2014] Bordes, A.; Glorot, X.; Weston, J.; and Bengio, Y. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2):233–259.
- [Collobert and Weston2008] Collobert, R., and Weston, J. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, 160–167. ACM.
- [Fader, Zettlemoyer, and Etzioni2014] Fader, A.; Zettlemoyer, L.; and Etzioni, O. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1156–1165. ACM.
- [Fan et al.2014] Fan, M.; Zhou, Q.; Chang, E.; and Zheng, T. F. 2014. Transition-based knowledge graph embedding with relational mapping properties. In Proceedings of the 28th Pacific Asia Conference on Language, Information, and Computation, 328–337.
- [Golub and Van Loan2012] Golub, G. H., and Van Loan, C. F. 2012. Matrix computations, volume 3. JHU Press.
- [Guo et al.2015] Guo, S.; Wang, Q.; Wang, B.; Wang, L.; and Guo, L. 2015. Semantically smooth knowledge graph embedding. In Proceedings of ACL.
- [Jenatton et al.2012] Jenatton, R.; Roux, N. L.; Bordes, A.; and Obozinski, G. R. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems, 3167–3175.
- [Kulis2012] Kulis, B. 2012. Metric learning: A survey. Foundations & Trends in Machine Learning 5(4):287–364.
- [Lin et al.2015] Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; and Zhu, X. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence.
- [Lin, Liu, and Sun2015] Lin, Y.; Liu, Z.; and Sun, M. 2015. Modeling relation paths for representation learning of knowledge bases. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
- [Miller1995] Miller, G. A. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39–41.
- [Nickel, Tresp, and Kriegel2011] Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11), 809–816.
- [Nickel, Tresp, and Kriegel2012] Nickel, M.; Tresp, V.; and Kriegel, H.-P. 2012. Factorizing yago: scalable machine learning for linked data. In Proceedings of the 21st international conference on World Wide Web, 271–280. ACM.
- [Socher et al.2013] Socher, R.; Chen, D.; Manning, C. D.; and Ng, A. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, 926–934.
- [Wang and Sun2014] Wang, F., and Sun, J. 2014. Survey on distance metric learning and dimensionality reduction in data mining. Data Mining and Knowledge Discovery 1–31.
- [Wang et al.2014] Wang, Z.; Zhang, J.; Feng, J.; and Chen, Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 1112–1119.
- [Wang, Wang, and Guo2015] Wang, Q.; Wang, B.; and Guo, L. 2015. Knowledge base completion using embeddings and rules. In Proceedings of the 24th International Joint Conference on Artificial Intelligence.