In recent years, Knowledge Graph Embedding (KGE) methods have been applied to benchmark datasets including Wikidata (Vrandečić and Krötzsch (2014)), Freebase (Bollacker et al. (2008)), DBpedia (Auer et al. (2007)), and YAGO (Suchanek et al. (2017)). Applications of KGE methods include fact prediction, question answering, and recommender systems.
KGE is an active area of research and many authors have provided reference software implementations. However, most of these are standalone reference implementations, and therefore it is difficult and time-consuming to: (i) find the source code; (ii) adapt the source code to new datasets; (iii) correctly parameterize the models; and (iv) compare against other methods. Recently, this problem has been partially addressed by libraries such as OpenKE (Han et al. (2018)) and AmpliGraph (Costabello et al. (2019)), which provide a framework common to several KGE methods. However, these frameworks take different perspectives, make specific assumptions, and thus the resulting implementations diverge substantially from the original architectures. Furthermore, these libraries often force the user to use preset hyperparameters, or make implicit use of golden hyperparameters, and thus make it tedious and time-consuming to adapt the models to new datasets.
This paper presents pykg2vec, a single Python library with 16 state-of-the-art KGE methods. The goals of pykg2vec are to be practical and educational. The practical value is achieved through: (a) proper use of GPUs and CPUs; (b) a set of tools to automate the discovery of golden hyperparameters; and (c) a set of visualization tools for the training and results of the embeddings. The educational value is achieved through: (d) a modular and flexible software architecture and KGE pipeline; and (e) access to a large number of state-of-the-art KGE models.
2 Knowledge Graph Embedding Methods
A knowledge graph contains a set of entities and relations between entities. The set of facts in the knowledge graph is represented in the form of triples (h, r, t), where h and t are referred to as the head (or subject) and the tail (or object) entities, and r is referred to as the relationship (or predicate).
The problem of KGE is to find a function that learns low-dimensional vector embeddings of the entities and relations in the triples such that the structural information of the knowledge graph is preserved. To accomplish this, the general principle is to enforce the learned embeddings of entities and relationships to be compatible with the information in the triples. The representation choices include deterministic points (Bordes et al. (2013)), multivariate Gaussian distributions (He et al. (2015)), and complex numbers (Trouillon et al. (2016)). Under the Open World Assumption (OWA), a set of unseen negative triples is sampled from the positive triples by corrupting either the head or the tail entity. Then, a scoring function f_r(h, t) is defined to reward the positive triples and penalize the negative triples. Finally, an optimization algorithm is used to minimize or maximize the scoring function.
KGE methods are often evaluated in terms of their capability of predicting the missing entity in a corrupted triple (?, r, t) or (h, r, ?), or of predicting whether an unseen fact is true or not. The evaluation metrics include the rank of the correct answer in the predicted list (mean rank), and the ratio of answers ranked in the top k of the list (hit-k ratio).
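Both metrics are computed from the rank of the true entity in each test triple's sorted candidate list. A minimal sketch:

```python
def mean_rank(ranks):
    """Mean position of the correct entity in the model's sorted candidate
    list, averaged over all test triples (lower is better)."""
    return sum(ranks) / len(ranks)

def hit_at_k(ranks, k):
    """Fraction of test triples whose correct entity appears in the top k
    of the candidate list (higher is better)."""
    return sum(1 for rank in ranks if rank <= k) / len(ranks)

ranks = [1, 3, 10, 2, 150]          # rank of the true entity per test triple
assert mean_rank(ranks) == 33.2
assert hit_at_k(ranks, 10) == 0.8   # 4 of the 5 true entities rank in the top 10
```

Note how a single badly ranked triple dominates the mean rank, which is why hit-k ratios are usually reported alongside it.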
3 Software Architecture
The pykg2vec library is built using Python and TensorFlow. TensorFlow allows the computations to be assigned to both GPUs and CPUs. In addition to the main model training process, pykg2vec uses multi-processing for generating mini-batches and performing evaluation, which reduces the total execution time. The various components of the library (see Figure 1) are as follows:
KG Controller: handles all the low-level parsing tasks such as finding the total unique set of entities and relations; creating ordinal encoding maps; generating training, testing and validation triples; and caching the dataset data on disk to optimize tasks that involve repetitive model testing.
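The ordinal encoding step can be sketched as follows (a simplified illustration, not the library's actual code; `build_encoding_maps` is a hypothetical helper):

```python
def build_encoding_maps(triples):
    """Assign each unique entity and relation an integer id, as a KG
    controller might do before handing triples to the embedding models.
    `triples` is an iterable of (head, relation, tail) strings."""
    entities, relations = {}, {}
    encoded = []
    for h, r, t in triples:
        for e in (h, t):
            entities.setdefault(e, len(entities))   # first-seen ordinal id
        relations.setdefault(r, len(relations))
        encoded.append((entities[h], relations[r], entities[t]))
    return entities, relations, encoded

triples = [("Paris", "capital_of", "France"),
           ("Berlin", "capital_of", "Germany")]
entities, relations, encoded = build_encoding_maps(triples)
assert encoded == [(0, 0, 1), (2, 0, 3)]
```

The resulting integer triples are what the training, testing, and validation splits actually contain, and caching them on disk avoids re-parsing the raw dataset on every run.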
Batch Generator: consists of multiple concurrent processes that manipulate and create mini-batches of data. These mini-batches are pushed to a queue to be processed by the models implemented in TensorFlow. The batch generator runs independently so that there is a low latency for feeding the data to the training module running on the GPU.
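The core of such a generator can be sketched in a few lines (a single-process simplification under stated assumptions; pykg2vec runs this work in multiple concurrent processes that feed a queue consumed by the TensorFlow training loop):

```python
import random

def batches(triples, entity_ids, batch_size, seed=0):
    """Yield (positive, negative) mini-batch pairs. Each negative triple is
    produced by corrupting the head or the tail of a positive triple with a
    random entity, as done when sampling negatives under the OWA."""
    rng = random.Random(seed)
    for i in range(0, len(triples), batch_size):
        pos = triples[i:i + batch_size]
        neg = []
        for h, r, t in pos:
            if rng.random() < 0.5:
                neg.append((rng.choice(entity_ids), r, t))  # corrupt head
            else:
                neg.append((h, r, rng.choice(entity_ids)))  # corrupt tail
        yield pos, neg

triples = [(0, 0, 1), (2, 0, 3), (1, 1, 2)]
out = list(batches(triples, entity_ids=[0, 1, 2, 3], batch_size=2))
assert len(out) == 2                             # 3 triples -> 2 batches
assert all(len(p) == len(n) for p, n in out)     # one negative per positive
```

Decoupling this generator from the training loop is what keeps the GPU fed: batch construction is CPU-bound and can proceed while the previous batch is being trained on.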
Core Models: consists of 16 KGE algorithms implemented as Python modules in TensorFlow. Each module consists of a modular description of the inputs, outputs, loss function, and embedding operations. Each model is provided with configuration files that define its hyperparameters.
Configuration: provides the necessary configuration to parse the datasets and also consists of the baseline hyperparameters for the KGE algorithms as presented in the original research papers.
Trainer and Evaluator: the Trainer module is responsible for taking an instance of the KGE model, the respective hyperparameter configuration, and input from the batch generator to train the algorithms. The Evaluator module performs link prediction and provides the respective accuracy in terms of mean ranks and filtered mean ranks.
Visualization: plots training loss and common metrics used in KGE tasks. To facilitate model analysis, it also visualizes the latent representations of entities and relations on the 2D plane using t-SNE based dimensionality reduction.
Bayesian Optimizer: pykg2vec uses a Bayesian hyperparameter optimizer to find a golden hyperparameter set. This approach is more efficient than brute-force searches such as grid search.
4 Usage Examples
Pykg2vec provides users with two usage examples, available in the pykg2vec/example folder: a training script and a hyperparameter-tuning script. Training is performed with the training script; to apply the best settings described in the original papers, the tuning script can be invoked. Some of the results plotted after training and tuning are shown in Figure 2.
Figure 2: (a) Mean ranks of the algorithms; (b) hit ratios of the algorithms; (c) entity and relation embedding plot; (d) loss value plot.
5 Conclusion

Pykg2vec is a Python library with extensive documentation that includes implementations of a variety of state-of-the-art Knowledge Graph Embedding methods and modular building blocks of the embedding pipeline. This library aims to help researchers and developers quickly test algorithms against their custom knowledge bases, or utilize the modular blocks to adapt the library to their custom algorithms.
- Auer et al. (2007) Sören Auer, Christian Bizer, Georgi Kobilarov, et al. Dbpedia: A nucleus for a web of open data. In The semantic web. Springer, 2007.
- Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. Proc. of SIGMOD’08, pages 1247–1250, 2008. ISSN 07308078. doi: 10.1145/1376616.1376746. URL http://doi.acm.org/10.1145/1376616.1376746.
- Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787–2795, 2013.
- Costabello et al. (2019) Luca Costabello, Sumit Pai, Chan Le Van, Rory McGrath, and Nicholas McCarthy. AmpliGraph: a Library for Representation Learning on Knowledge Graphs, March 2019. URL https://doi.org/10.5281/zenodo.2595043.
- Vrandečić and Krötzsch (2014) Denny Vrandečić and Markus Krötzsch. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78–85, 2014.
- Han et al. (2018) Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. OpenKE: An open toolkit for knowledge embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 139–144, 2018.
- He et al. (2015) Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to Represent Knowledge Graphs with Gaussian Embedding. pages 623–632, 2015. doi: 10.1145/2806416.2806502.
- Suchanek et al. (2017) Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. YAGO: A core of semantic knowledge unifying WordNet and Wikipedia. 2017.
- Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex Embeddings for Simple Link Prediction. 48, 2016. URL http://arxiv.org/abs/1606.06357.