Learning representations for knowledge bases (KBs) such as Freebase (Bollacker et al., 2008) has become a core problem in machine learning, with a large variety of applications, from question answering (Yao and Van Durme, 2014) to image classification (Deng et al., 2014). Many approaches have been proposed to learn these representations, or embeddings, from either single-relational (Hoff et al., 2002; Perozzi et al., 2014) or multi-relational data (Nickel et al., 2011; Bordes et al., 2013).
Instead, we frame this problem as a multiclass, multilabel classification problem and model only the co-occurrences of entities and relations with a linear classifier based on a Bag-of-Words (BoW) representation and standard cost functions. In practice, this approach works surprisingly well on a variety of standard datasets, obtaining performance competitive with state-of-the-art approaches while using a standard text classification library (i.e., fastText) and running in a few minutes (Joulin et al., 2017).
We focus our study on two standard approaches to learning representations for KBs: knowledge base completion and question answering. For KB completion, our conclusions extend those of Kadlec et al. (2017): simple models like TransE (Bordes et al., 2013) work as well as, if not better than, more sophisticated ones, if tuned properly. Kadlec et al. (2017) focus on a bilinear model designed for KB completion, DistMult (Yang et al., 2014), that still takes a few hours to train on a high-end GPU. We show that similar performance can be achieved with a linear classifier and a training time reduced to a few minutes. For question answering, we consider datasets where we have guarantees that the question-answer pairs are covered by the graph in one hop, allowing us to indirectly learn graph embeddings (Bordes et al., 2015; Miller et al., 2016). Following Bordes et al. (2014a), we predict the relation between the entities appearing in the question-answer pairs to learn embeddings of the graph edges. The embeddings of the entities, or nodes, are indirectly learned by embedding the questions. In this setting, we achieve competitive performance as long as we have access to a clean KB related to the question answering task.
2.1 fastText model
Linear models (Joachims, 1998) are powerful and efficient baselines for text classification. In particular, the fastText model proposed by Joulin et al. (2017) achieves state-of-the-art performance on many datasets by combining several standard tricks, such as low-rank constraints (Schutze, 1992) and n-gram features (Wang and Manning, 2012). The same approach can be applied to any problem where the input is a set of discrete tokens. For example, a KB is composed of entities (or nodes) and relations (or edges), each of which can be represented by a unique discrete token.
The model is composed of a matrix $A$, which is used as a look-up table over the discrete tokens, and a matrix $B$ for the classifier. The representations of the discrete tokens are averaged into a BoW representation, which is in turn fed to the linear classifier. Using a function $f$ to compute the probability distribution over the classes, and $N$ input sets of discrete tokens (e.g., sentences), leads to minimizing:
$$-\frac{1}{N} \sum_{n=1}^{N} y_n \log\big(f(B A x_n)\big),$$
where $x_n$ is the normalized BoW of the $n$-th input set and $y_n$ its label. While BoW models are memory inefficient, their memory footprint can be significantly reduced (Joulin et al., 2016a). The model is trained asynchronously on multiple CPUs with SGD and a linearly decaying learning rate.
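The model above can be sketched in a few lines of NumPy. Everything here (vocabulary size, dimensions, toy data, learning rate) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

# Minimal sketch of the fastText-style model: a look-up matrix A over
# discrete tokens and a linear classifier B, trained with SGD on the
# softmax loss. Sizes and data are illustrative.
rng = np.random.default_rng(0)
vocab_size, dim, n_classes = 6, 4, 3
A = rng.normal(scale=0.1, size=(vocab_size, dim))   # token look-up table
B = rng.normal(scale=0.1, size=(dim, n_classes))    # classifier matrix

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(token_ids):
    # Normalized BoW: average the embeddings of the input tokens, classify.
    return softmax(A[list(token_ids)].mean(axis=0) @ B)

def sgd_step(token_ids, label, lr=0.5):
    ids = list(token_ids)
    h = A[ids].mean(axis=0)
    g = predict(ids)                      # softmax probabilities
    g[label] -= 1.0                       # gradient w.r.t. the logits
    grad_h = B @ g                        # gradient w.r.t. the BoW vector
    B[:] -= lr * np.outer(h, g)           # update the classifier matrix
    A[ids] -= lr * grad_h / len(ids)      # update the token look-up table

# Toy data: tokens {0,1} -> class 0, {2,3} -> class 1, {4,5} -> class 2.
data = [([0, 1], 0), ([2, 3], 1), ([4, 5], 2)]
for _ in range(200):
    for x, y in data:
        sgd_step(x, y)
```

After a few hundred steps the linear classifier separates the three toy classes, which is all the model needs when the input tokens are discriminative.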
| Model | WN18 raw | WN18 filtered | FB15k raw | FB15k filtered |
|---|---|---|---|---|
| TransE (Bordes et al., 2013) | 75.4 | 89.2 | 34.9 | 47.1 |
| Rescal (Nickel et al., 2012) | - | 92.8 | - | 58.7 |
| Fast-TransR (Lin et al., 2015) | 81.0 | 94.6 | 48.8 | 69.8 |
| HolE (Nickel et al., 2016) | - | 94.9 | - | 73.9 |
| TransE++ (Nickel et al., 2016) | - | 94.3 | - | 74.9 |
| Fast-TransD (Lin et al., 2015) | 78.5 | 91.9 | 49.9 | 75.2 |
| ReverseModel (Dettmers et al., 2017) | - | 96.9 | - | 78.6 |
| HolE+Neg-LL (Trouillon and Nickel, 2017) | - | 94.7 | - | 82.5 |
| Complex (Trouillon et al., 2017) | - | 94.7 | - | 84.0 |
| R-GCN (Schlichtkrull et al., 2017) | - | 96.4 | - | 84.2 |
| ConvE (Dettmers et al., 2017) | - | 95.5 | - | 87.3 |
| DistMult (Kadlec et al., 2017) | - | 94.6 | - | 89.3 |
| Ensemble DistMult (Kadlec et al., 2017) | - | 95.0 | - | 90.4 |
| IRN (Shen et al., 2016) | - | 95.3 | - | 92.7 |
| fastText - train | 80.6 | 94.9 | 52.3 | 86.5 |
| fastText - train+valid | 83.2 | 97.6 | 53.4 | 89.9 |
2.2 Loss functions
We consider two loss functions in our experiments: the softmax function and a one-versus-all loss function with negative sampling.
Given $K$ classes and a score $s_k$ for each class $k$, the softmax function is defined as $f(s_k) = \exp(s_k) / \sum_{i=1}^{K} \exp(s_i)$. This function requires the score of every class, leading to a complexity of $O(Kd)$, where $d$ is the size of the embeddings. It is often used to compute the probability distribution over a finite set of discrete classes.
Computing the softmax function over a large number of classes is computationally prohibitive. We replace it by an independent binary classifier per class, i.e., a set of one-versus-all losses. During training, for each positive example, we randomly draw $k$ negative classes and update the corresponding classifiers. The number $k$ is significantly smaller than $K$, reducing the complexity from $O(Kd)$ to $O(kd)$. This loss has been used for word embeddings (Mikolov et al., 2013; Bojanowski et al., 2017) as well as for object classification (Joulin et al., 2016b).
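The sampled one-versus-all update can be sketched as follows; the sizes, learning rate and number of negatives are illustrative assumptions, and only $k+1$ of the $K$ binary classifiers are touched per example:

```python
import numpy as np

# Sketch of the one-versus-all loss with negative sampling: for each
# positive example we update k sampled binary classifiers instead of all
# K, reducing the per-example cost from O(Kd) to O(kd).
rng = np.random.default_rng(1)
K, d, k = 1000, 16, 5                    # classes, embedding size, negatives

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_step(W, h, positive, lr=0.1):
    # Binary logistic update for the positive class and k random negatives.
    negatives = rng.choice(K, size=k, replace=False)
    negatives = negatives[negatives != positive]
    for cls, target in [(positive, 1.0)] + [(c, 0.0) for c in negatives]:
        p = sigmoid(W[cls] @ h)
        W[cls] -= lr * (p - target) * h  # gradient of the binary log-loss

W = rng.normal(scale=0.01, size=(K, d))  # one binary classifier per class
h = rng.normal(size=d)                   # a fixed BoW input representation
for _ in range(100):
    neg_sampling_step(W, h, positive=42)
```

Repeated updates drive the score of the positive class up while only ever touching a small random subset of the negative classifiers.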
2.3 Knowledge base completion
A knowledge base is represented as a set of subject-relation-object triplets $(s, r, o)$. Typically, the object entity $o$ is predicted according to the subject $s$ and the relation $r$. With the notations of the fastText model described in Sec. 2.1, each entity $s$ is associated with a vector $x_s$ and each relation $r$ with a vector $x_r$ of the same dimension $d$. The target entity $o$ is also represented by a $d$-dimensional vector $y_o$. The scoring function for a triplet is simply the dot product between the BoW representation of the input pair and the target:
$$S(s, r, o) = \frac{1}{2}\,(x_s + x_r)^\top y_o.$$
This scoring function does not define a relational model; it only captures co-occurrences between entities and relations. Additionally, it makes no assumption about the direction of the relation, i.e., the same relation embedding would be used to predict both ends of a triplet. To circumvent this problem, we encode the direction in the relation embedding by associating each relation with two embeddings, one to predict the subject and one to predict the object. While our approach shares many similarities with TransE (Bordes et al., 2013), it differs in several aspects: they use a ranking loss, their scoring function is an $\ell_2$ distance, and they have a single embedding per entity. Similarly, if the goal is to predict the relation between a pair of entities, our scoring function is:
$$S(s, o, r) = \frac{1}{2}\,(x_s + x_o)^\top y_r.$$
As for entity prediction, we circumvent the symmetry between subject and object by associating each entity with two embeddings, one used when the entity is the subject of a triplet and one when it is the object.
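The direction-aware scoring functions above can be sketched as follows; the array names, sizes and random initialization are illustrative, not the paper's code:

```python
import numpy as np

# Sketch of the KB-completion scoring functions with direction-aware
# embeddings: each relation gets one embedding per prediction direction
# plus one as a target, and each entity one embedding as input and one
# as target. All sizes are illustrative.
rng = np.random.default_rng(2)
n_entities, n_relations, d = 50, 10, 8
ent_in = rng.normal(size=(n_entities, d))    # entity inside the input BoW
ent_out = rng.normal(size=(n_entities, d))   # entity as the predicted target
rel_obj = rng.normal(size=(n_relations, d))  # relation when predicting the object
rel_subj = rng.normal(size=(n_relations, d)) # relation when predicting the subject
rel_out = rng.normal(size=(n_relations, d))  # relation as the predicted target

def score_object(s, r, o):
    # Dot product between the BoW of (subject, relation) and the target object.
    return 0.5 * (ent_in[s] + rel_obj[r]) @ ent_out[o]

def score_subject(o, r, s):
    # Same relation, but with its subject-direction embedding.
    return 0.5 * (ent_in[o] + rel_subj[r]) @ ent_out[s]

def score_relation(s, o, r):
    # BoW of (subject, object) against the target relation embedding.
    return 0.5 * (ent_in[s] + ent_out[o]) @ rel_out[r]
```

Because each direction uses distinct embeddings, swapping the two ends of a triplet generally changes the score, which is exactly the asymmetry the text argues for.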
2.4 Question answering
Question answering problems can be used to learn graph embeddings if framed as edge prediction problems between the entities appearing in the question-answer pairs (Bordes et al., 2014a). The question is represented as a bag of words, and the potential relations are the labels. An entity is indirectly represented by the associated words in the question.
String matching for entity linking.
The questions and answers are matched to entities in the KB with a string matching algorithm (Bordes et al., 2014a), using a look-up table between entities and their string representations. Every question-answer pair in the training set is thus matched to a set of potential pairs of entities. Several entities are often matched to a question, and we use an ad-hoc heuristic to sort them, i.e., the inverse of their frequency in the training set, with the size of their associated strings breaking ties (as an approximation of the frequency).
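This linking heuristic can be sketched as follows; the alias table, entity identifiers and frequency counts are invented for illustration, and the tie-breaking direction is an assumption:

```python
from collections import Counter

# Sketch of string-matching entity linking: candidates matched by alias
# look-up are sorted by the inverse of their training-set frequency,
# breaking ties with the alias length. Data below is invented.
alias_to_entities = {
    "paris": ["m.paris_france", "m.paris_texas", "m.paris_hilton"],
    "france": ["m.france"],
}
train_freq = Counter({"m.paris_france": 50, "m.paris_texas": 2,
                      "m.paris_hilton": 9, "m.france": 40})

def link(question_tokens):
    # Collect every entity whose alias appears in the question, then sort
    # so frequent entities come first; longer aliases win ties (assumed).
    candidates = []
    for alias, entities in alias_to_entities.items():
        if alias in question_tokens:
            for e in entities:
                candidates.append((1.0 / (train_freq[e] + 1), -len(alias), e))
    return [e for _, _, e in sorted(candidates)]
```

On a question mentioning "paris", the frequent `m.paris_france` is ranked ahead of the rarer homonyms.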
Relation prediction for question answering.
Once a question-answer pair is associated with a set of pairs of entities, candidate relations are extracted. Following Bordes et al. (2014a), we consider the relations as labels and use fastText to predict them. At test time, the answer to a question is inferred by taking the most likely relation and verifying whether any of the entities matched to the question forms a valid pair in the KB. If not, we move to the next most likely relation and reiterate the process.
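The inference procedure amounts to a short fallback loop over relations ranked by the classifier. A sketch with a toy KB and hypothetical relation names:

```python
# Sketch of test-time inference: rank relations by classifier probability
# and return the first (entity, relation) pair that forms a valid triple
# in the KB. The tiny KB and probability tables are illustrative.
kb = {("m.paris_france", "located_in"): "m.france",
      ("m.france", "capital"): "m.paris_france"}

def answer(matched_entities, relation_probs):
    # relation_probs: {relation: probability}, as output by the classifier.
    for rel, _ in sorted(relation_probs.items(), key=lambda kv: -kv[1]):
        for ent in matched_entities:
            if (ent, rel) in kb:
                return kb[(ent, rel)]
    return None  # no matched entity forms a valid pair for any relation
```

Note how the second example below falls through the top-ranked relation (which forms no valid pair) to the next one, mirroring the reiteration step in the text.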
3.1 Knowledge base completion.
We use several standard benchmarks for KB completion:
The WN18 dataset is a subset of WordNet, containing 40,943 entities, 18 relation types, and 151,442 triples. WordNet is a KB built by grouping synonym words and provides lexical relationships between them.
The FB15k dataset is a subset of Freebase, containing 14,951 entities, 1345 relation types, and 592,213 triples. Freebase is a large KB containing general facts about the world.
The FB15k-237 dataset is a subset of FB15k with no reversible relations (Toutanova et al., 2015). It contains 237 relations and 14,541 entities, for a total of 298,970 triples.
The SVO dataset is a set of subject-verb-object triplets extracted from Wikipedia articles (Jenatton et al., 2012).
For WN18, FB15k and FB15k-237, the goal is to predict one end of a triple given the other end and the relation, e.g., the subject given the object and the relation. We report Hit@10, also known as Recall@10, in both the raw and filtered settings. Raw refers to the standard recall measure, while filtered means that every candidate triple already present in the KB is first removed, including those in the test set. The filtered measure allows a direct comparison of the target entity with genuinely negative candidates. On SVO, the goal is to predict the relation given a pair of entities. The measure is Hit@5%, i.e., Hit@227 over the relation types.
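The difference between the raw and filtered settings can be captured in one small evaluation function; the entity names and scores below are illustrative:

```python
# Sketch of raw vs. filtered Hit@k: in the filtered setting, candidate
# entities that already form a true triple with the query (in train,
# valid or test) are removed before checking the target's rank.
def hit_at_k(scores, target, known_true, k=10, filtered=True):
    # scores: {entity: score}; known_true: entities forming existing triples.
    ranked = sorted(scores, key=lambda e: -scores[e])
    if filtered:
        ranked = [e for e in ranked if e == target or e not in known_true]
    return target in ranked[:k]

scores = {f"e{i}": -float(i) for i in range(20)}   # e0 scored highest
known_true = {f"e{i}" for i in range(11)}          # triples already in the KB
```

In this toy setup the target `e12` is ranked 13th in the raw setting but jumps into the top 10 once the 11 known-true entities above it are filtered out.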
For WN18, FB15k and FB15k-237, we use a negative sampling approximation of the softmax and select the hyper-parameters based on the filtered Hit@10 on the validation set. On WN18 and FB15k, the grid spans the embedding size, the number of epochs and the number of negative examples; since FB15k-237 is much smaller, we limit the number of epochs. The initial learning rate is fixed. On each dataset, the best embedding size, number of epochs and number of negative samples are selected on the validation set. For SVO, the number of relations to predict is quite small, so we use a full softmax and select the embedding size and the number of epochs based on Hit@5%, again with a fixed initial learning rate. For all these experiments, we report the performance of the model trained on the training set alone as well as on the concatenation of the training and validation sets, run with the same hyper-parameters.
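The selection procedure is a standard grid search over the validation metric. Since the actual grids are not reproduced here, the values below are placeholders:

```python
from itertools import product

# Hedged sketch of hyper-parameter selection: exhaustively try every
# configuration in the grid and keep the one with the best validation
# filtered Hit@10. Grid values are placeholders, not the paper's.
grid = {"dim": [10, 50], "epochs": [5, 10], "negatives": [5, 50]}

def select(train_and_eval):
    # train_and_eval(config) -> validation filtered Hit@10 (higher is better).
    best_cfg, best_score = None, -1.0
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Because the models train in minutes, an exhaustive sweep like this stays cheap even on CPUs.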
We compare our approach to several standard models on WN18 and FB15k in Table 1, reporting numbers from the original papers. Some of them do not use a fine grid of hyper-parameters, which partially explains the gap in performance; we separate these models from more recent ones for a fairer comparison. Despite its simplicity, our approach is competitive with dedicated pipelines on both the raw and filtered measures. This extends the findings of Trouillon and Nickel (2017), i.e., that the choice of loss function can have a significant impact on overall performance. Table 3 extends this observation to a harder dataset, FB15k-237, where our BoW model compares favorably with existing KB completion models.
We also report a comparison on the relation prediction dataset SVO in Table 3. Our approach is competitive with approaches using bigram and higher-order information, like TATEC (Garcia-Duran et al., 2015). Note that TATEC can, in theory, be used for both relation and entity prediction, while our model only predicts relations.
3.2 Question answering.
We consider two standard datasets with a significant number of question-answer pairs.
SimpleQuestions consists of 108,442 question-answer pairs generated from Freebase. It comes with a subset of Freebase containing 2M triplets.
WikiMovies consists of more than 100,000 questions about movies, also generated from Freebase. It comes with a subset of the KB associated with the question-answer pairs. This dataset also provides settings where different preprocessed versions of Wikipedia are used instead of the KB; these settings are beyond the scope of this paper.
For both SimpleQuestions and WikiMovies, the number of relations is relatively small, so we use a full softmax. For SimpleQuestions, the grid of hyper-parameters spans the dimension of the embeddings and the number of epochs; we use bigrams and a fixed initial learning rate. For WikiMovies, we fixed the embedding size, since the number of relations is small, and selected the number of epochs on the validation set, with a fixed initial learning rate.
| Model | Accuracy |
|---|---|
| Random guess (Bordes et al., 2015) | 4.9 |
| CFO (Dai et al., 2016) | 62.6 |
| MemNN (Bordes et al., 2015) | 62.7 |
| AMPCNN (Yin et al., 2016) | 68.3 |
| CharQA (Golub and He, 2016) | 70.9 |
| CFO + AP (Dai et al., 2016) | 75.7 |
| AMPCNN + AP (Yin et al., 2016) | 76.4 |
| fastText - train | 72.7 |
| fastText - train+valid | 73.0 |
Figure 5 compares this approach with the state of the art. We learn a relation classifier with fastText in 42 seconds. Using a larger KB, i.e., FB5M, does not degrade the performance, despite the presence of many more irrelevant entities. Our approach compares favorably with other question answering systems. This suggests that the learned embeddings capture some important information about the KB. Note, however, that the performance is very sensitive to the quality of the entity linker and to the ad-hoc sorting of extracted subjects. Typically, going from a random order to the one used in this paper gives a significant boost, depending on the hyper-parameters.
Table 6 compares our model with several state-of-the-art pipelines. When the clean KB is accessible, our method works very well; fastText runs in 1 second for relation prediction. Note that this dataset was primarily designed for the case where only text is available. That setting goes beyond the scope of our method, while a more general approach like KV-MemNN still works reasonably well there (Miller et al., 2016).
In this paper, we show that linear models learn good embeddings from a KB by recasting graph-related problems as supervised classification ones. The limitations of such an approach are that it requires a clean KB and a task that uses direct information about local connectivity in the graph. Moreover, the observation that our non-relational approach provides state-of-the-art performance on KB completion benchmarks also raises important questions regarding the evaluation of link-prediction models and the design of benchmarks for this task.
We thank Timothée Lacroix, Nicolas Usunier, Antoine Bordes and the rest of FAIR for their precious help and comments. We would also like to thank Adam Fisch and Alex Miller for their help regarding WikiMovies.
- Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5:135–146.
- Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, pages 1247–1250.
- Bordes et al. (2014a) Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676 .
- Bordes et al. (2014b) Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014b. A semantic matching energy function for learning with multi-relational data. Machine Learning 94(2):233–259.
- Bordes et al. (2015) Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 .
- Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems. pages 2787–2795.
- Dai et al. (2016) Zihang Dai, Lei Li, and Wei Xu. 2016. Cfo: Conditional focused neural question answering with large-scale knowledge bases. arXiv preprint arXiv:1606.01994 .
- Deng et al. (2014) Jia Deng, Nan Ding, Yangqing Jia, Andrea Frome, Kevin Murphy, Samy Bengio, Yuan Li, Hartmut Neven, and Hartwig Adam. 2014. Large-scale object classification using label relation graphs. In European Conference on Computer Vision. Springer, Cham, pages 48–64.
- Dettmers et al. (2017) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2017. Convolutional 2d knowledge graph embeddings. arXiv preprint arXiv:1707.01476 .
- Garcia-Duran et al. (2015) Alberto Garcia-Duran, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. 2015. Combining two and three-way embeddings models for link prediction in knowledge bases. arXiv preprint arXiv:1506.00999 .
- Golub and He (2016) David Golub and Xiaodong He. 2016. Character-level question answering with attention. arXiv preprint arXiv:1604.00727 .
- Hoff et al. (2002) Peter D Hoff, Adrian E Raftery, and Mark S Handcock. 2002. Latent space approaches to social network analysis. Journal of the american Statistical association 97(460):1090–1098.
- Jenatton et al. (2012) Rodolphe Jenatton, Nicolas L Roux, Antoine Bordes, and Guillaume R Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems. pages 3167–3175.
- Joachims (1998) Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. Springer.
- Joulin et al. (2016a) Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016a. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651 .
- Joulin et al. (2017) Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Association for Computational Linguistics, pages 427–431.
- Joulin et al. (2016b) Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. 2016b. Learning visual features from large weakly supervised data. In European Conference on Computer Vision. Springer International Publishing, pages 67–84.
- Kadlec et al. (2017) Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. arXiv preprint arXiv:1705.10744 .
- Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In AAAI. pages 2181–2187.
- Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .
- Miller et al. (2016) Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126 .
- Nickel et al. (2016) Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Thirtieth AAAI Conference on Artificial Intelligence.
- Nickel et al. (2011) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learning (ICML-11). pages 809–816.
- Nickel et al. (2012) Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2012. Factorizing yago: Scalable machine learning for linked data. In Proceedings of the 21st International Conference on World Wide Web. ACM, pages 271–280.
- Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 701–710.
- Schlichtkrull et al. (2017) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103 .
- Schutze (1992) Hinrich Schutze. 1992. Dimensions of meaning. In Supercomputing.
- Shen et al. (2016) Yelong Shen, Po-Sen Huang, Ming-Wei Chang, and Jianfeng Gao. 2016. Implicit reasonet: Modeling large-scale structured relationships with shared memory. arXiv preprint arXiv:1611.04642 .
- Toutanova et al. (2015) Kristina Toutanova, Danqi Chen, and Patrick Pantel. 2015. Representing text for joint embedding of text and knowledge bases. In EMNLP.
- Trouillon et al. (2017) Théo Trouillon, Christopher R Dance, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. arXiv preprint arXiv:1702.06879 .
- Trouillon and Nickel (2017) Théo Trouillon and Maximilian Nickel. 2017. Complex and holographic embeddings of knowledge graphs: a comparison. arXiv preprint arXiv:1707.01475 .
- Wang and Manning (2012) Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL.
- Yang et al. (2014) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 .
- Yao and Van Durme (2014) Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In ACL (1). pages 956–966.
- Yin et al. (2016) Wenpeng Yin, Mo Yu, Bing Xiang, Bowen Zhou, and Hinrich Schütze. 2016. Simple question answering by attentive convolutional neural network. arXiv preprint arXiv:1606.03391 .