Over the last decade there has been a very popular trend of merging neural and symbolic representations of knowledge for large, general-purpose knowledge graphs such as FreeBase [1] and WordNet [2]. The utilized methods can be roughly divided into two groups: i) multi-relational knowledge graph embeddings [3, 4] and ii) graph embeddings [5, 6]. The former aim at learning representations of both entities and relations, while the latter focus on untyped graphs, where each relation's type can be dropped without introducing ambiguities. Both approaches aim at solving the problem of link prediction, i.e., modeling the probability of an instance of a relation based on d-dimensional vector representations of the entities and binary operations defined on them. Thus, in the case of multi-relational knowledge graphs we seek to embed both entities and relations into a d-dimensional vector space, and we model the probability of a triple (a labeled arc of the graph) via the Euclidean dot product of the embeddings. In the case of unlabeled graphs we drop the labels of the arcs (or edges, in case the relations can be treated as symmetric); we therefore do not embed the relations, and we model a single arc (or edge) directly from the embeddings of its endpoints. The Euclidean dot product is only one of many ways to model the probability of a link (with a label, in the multi-relational case) between two entities; in fact, the underlying geometry need not be Euclidean. For a more in-depth survey of link prediction methodologies, see [4]. In the context of Semantic Web technologies and the Resource Description Framework (RDF) and Web Ontology Language (OWL) technology stack, specialized knowledge graph embedding methodologies have also recently been proposed [7, 8].
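As a minimal sketch of this modeling choice (the embedding values below are made up for illustration), the probability of a link between two entities can be computed as the logistic sigmoid of the Euclidean dot product of their embeddings:

```python
import numpy as np

# Hypothetical 3-dimensional embeddings for two entities (illustrative values).
e_subject = np.array([0.2, -0.1, 0.7])
e_object = np.array([0.5, 0.3, 0.4])

def link_probability(u, v):
    """Model the link probability as the logistic sigmoid of the dot product."""
    return 1.0 / (1.0 + np.exp(-np.dot(u, v)))

p = link_probability(e_subject, e_object)
print(p)  # ~0.587: sigmoid of the dot product 0.35
```

Any other similarity function over the two vectors could be substituted here; the dot product is simply the choice used throughout this work.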
In the bioinformatics domain, Alshahrani et al. [9] recently proposed a novel methodology for representing nodes and relations from structured biological knowledge that operates directly on Linked Data resources, leverages ontologies, and yields neuro-symbolic representations amenable to downstream use in machine learning algorithms. The authors base their methodology on the DeepWalk algorithm [5], which performs random walks on unlabeled and undirected graphs (i.e., with symmetric relations) and embeds entities through an approach inspired by the popular Word2Vec algorithm [10]. This methodology is further tuned for multi-relational data by explicitly encoding sequences of intermingled entities and relations. Such complex intermingled sequences alleviate the innate undirected nature of the random walks, at the expense of an increased number of parameters to train. Unfortunately, training such models is computationally expensive (hours on a modern Intel Core i7 desktop machine) and requires relatively large embedding dimensions. This manuscript builds upon this seminal work and proposes a more economical, fast and scalable way of learning neuro-symbolic representations. The neural embeddings obtained with our approach outperform published state-of-the-art results, under specific assumptions on the structure of the original knowledge graph and with a smart encoding of links based on the embeddings of the entities. Among other things, the contributions of this work are based on the following hypotheses:
Using the concatenation of the neural embeddings naturally encodes the directionality of the asymmetric biological relations, and fully exploits the non-linear patterns that can be uncovered by the neural network classifiers.
2 Materials and methods
2.1 Dataset and evaluation methodology for link prediction
In this work we consider the curated biological knowledge graph presented in [9]. This knowledge graph is based on three ontologies: the Gene Ontology [12], the Human Phenotype Ontology [13] and the Disease Ontology [14]. It also incorporates knowledge from several biological databases, including human protein-protein interactions, human chemical-protein interactions, drug side effects and drug indication pairs. We refer the reader to [9] for a detailed description of the provenance of the data and of the data processing pipelines employed to obtain the final graph. For the purposes of this work, we summarize the number of biological relation instances present in this knowledge graph in Table 1.
Table 1. Number of instances per biological relation (relation | number of instances).
Our goal is to train fast neural embeddings of the nodes of this knowledge graph, such that we can use these embeddings to perform link prediction. That is, we estimate the probability that an edge with a given label exists between two nodes, given their vector representations. As in [9], we build separate binary prediction models for each relation in the knowledge graph. Note that in this work we only focus on the link prediction problem where the embeddings are trained on the knowledge graph from which 20% of the edges for a given relation have been removed (this corresponds to the first link prediction problem reported in [9]). We then use these embeddings to train classifiers (logistic regression and a multi-layer perceptron (MLP)) on 80% of the true positive edges (i.e., relation instances) and on the same number of generated negative edges. These classifiers are then tested on the remaining 20% of positive edges and on generated negative edges (which have not been used in the embedding generation). For a fair comparison with the state-of-the-art results, we use the same methodology for negative sample generation, and we use 5-fold cross-validation for the training of embeddings and subsequent link prediction classifiers, precisely the same way as in [9]. In none of our experiments do we use any deductive inference, and we compare our results with the results obtained without inference in [9].
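The negative sample generation used in this protocol can be sketched as follows; the entity count and edge set below are synthetic placeholders, not the actual dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities = 1000
# Hypothetical positive edges for one relation.
positives = {(int(rng.integers(n_entities)), int(rng.integers(n_entities)))
             for _ in range(500)}

def sample_negatives(positives, n_entities, n, rng):
    """Draw random (source, target) pairs that are not asserted in the graph,
    matching the number of positive edges as in the evaluation protocol."""
    negatives = set()
    while len(negatives) < n:
        pair = (int(rng.integers(n_entities)), int(rng.integers(n_entities)))
        if pair not in positives:
            negatives.add(pair)
    return negatives

negatives = sample_negatives(positives, n_entities, len(positives), rng)
assert len(negatives) == len(positives)
assert not (negatives & positives)
```

The positives and negatives are then split 80/20 before training each binary classifier, and the whole procedure is repeated over 5 folds.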
2.2 Assumptions on the structure of the Knowledge Graph
Our methodology exploits the fact that the full biomedical knowledge graph we are using only contains relations that can be inferred from the types of the entities that are subject and object of the relation. This means that arc labels can be safely dropped without loss of semantics and without introducing ambiguous duplicated pairs of nodes. Therefore, we can flatten our graph without the risk of having more than one relation connecting the same source and target nodes, i.e., we can simply consider our knowledge graph as a set of pairs of nodes. As opposed to the DeepWalk approach employed by [9], our methodology does not rely on random walks on knowledge graphs; instead of producing sequences of labeled entities (nodes and arc labels mixed together), we directly consider pairs of connected nodes. Furthermore, we simplify the structure of the knowledge graph by removing anonymous instances that were introduced by the creators of the knowledge graph to assert relation instances in the ABox, i.e., we directly connect OWL classes to de-clutter the graph used to train embeddings. In the original knowledge graph, Alshahrani et al. [9] commit to strict OWL semantics when modeling biological relations by asserting anonymous instances; for example, a relation instance of has-function (domain: Gene/Protein, range: Function) would be encoded as in Listing 1, where we present a specific instance of a relation asserting that the TRIM28 gene has the function of negative regulation of transcription by RNA polymerase II.
We simplify the knowledge graph by removing all anonymous instances such as <http://aber-owl.net/go/instance_106358> and connecting entities directly through object relations, i.e., we rewrite all triples of the form presented above (Listing 1) into a form that only contains object property assertions, as demonstrated below (Listing 2).
We admit such a relaxation of the OWL semantics commitment of the knowledge graph because we do not leverage any OWL reasoning for our tasks. This relaxation does not change the number of biological relation instances present in the knowledge graph (Table 1).
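The rewrite from the Listing 1 style to the Listing 2 style amounts to a simple graph transformation. The sketch below uses hypothetical compact identifiers in place of the full URIs, and represents triples as plain tuples rather than an RDF store:

```python
# Triples as (subject, predicate, object); identifiers are hypothetical
# stand-ins for the actual vocabulary of the knowledge graph.
triples = {
    ("TRIM28", "has-function", "_:inst1"),   # Listing 1 style: relation asserted ...
    ("_:inst1", "rdf:type", "GO:0000122"),   # ... via an anonymous typed instance
}

def flatten(triples, relation="has-function"):
    """Rewrite relation assertions that pass through anonymous instances into
    direct entity-to-class links (Listing 2 style)."""
    types = {s: o for s, p, o in triples if p == "rdf:type"}
    out = set()
    for s, p, o in triples:
        if p == relation and o in types:
            out.add((s, p, types[o]))        # connect the entity directly to the class
        elif p == "rdf:type" and s.startswith("_:"):
            continue                         # drop the anonymous instance typing
        else:
            out.add((s, p, o))
    return out

print(flatten(triples))  # {('TRIM28', 'has-function', 'GO:0000122')}
```

Applying this rewrite once per relation leaves the number of relation instances unchanged while halving the number of arcs each instance contributes to the graph.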
2.3 Training fast log-linear embeddings with StarSpace
As opposed to the approach taken by Alshahrani et al. [9], we employ a neural embedding method which requires fewer parameters and is much faster to train. Specifically, we exploit the fact that the biological relations have well-defined, non-overlapping domains and ranges, and therefore the whole knowledge graph can be treated as an untyped directed graph in which there is no ambiguity in the semantics of any relation. To this end, we employ the neural embedding model from the StarSpace toolkit [11], which aims at learning entities, each of which is described by a set of discrete features (a bag-of-features) coming from a fixed-length dictionary. The model is trained by assigning a d-dimensional vector to each of the discrete features in the set that we want to embed directly. Ultimately, the look-up matrix (the matrix of embeddings, i.e., latent vectors) is learned by minimizing the following loss function:

$$\sum_{(a,b)\in E^{+},\; b^{-}\in E^{-}} L^{batch}\big(sim(a,b),\, sim(a,b_{1}^{-}),\ldots, sim(a,b_{k}^{-})\big)$$

In this loss function, we need to indicate the generator of positive entry pairs $(a, b) \in E^{+}$ – in our setting these are entities connected via a relation – and the generator of negative entities $b^{-} \in E^{-}$, similar to the k-negative sampling strategy proposed by Mikolov et al. [10]. In our setting, the negative pairs are the so-called negative examples, i.e., pairs of entities that do not appear in the knowledge graph. The similarity function $sim(\cdot,\cdot)$ is task-dependent and should operate on the d-dimensional vector representations of the entities; in our case we use the standard Euclidean dot product. Please note that the aforementioned embedding scheme differs from a multi-relational knowledge graph embedding task: the main difference is that we do not require embeddings for the relations.
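StarSpace supports several ranking losses; assuming a margin (hinge) ranking loss and the dot-product similarity used here, a single stochastic-gradient step over the look-up matrix could be sketched as follows (all sizes, rates and margins below are illustrative, not the toolkit's defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_entities, k = 8, 50, 5                       # embedding dim, entities, negatives
E = rng.normal(scale=0.1, size=(n_entities, d))   # look-up matrix of embeddings

def sgd_step(src, dst, lr=0.1, margin=1.0):
    """One SGD step on a hinge (margin ranking) loss with k sampled negatives,
    using the Euclidean dot product as the similarity function."""
    for neg in rng.integers(n_entities, size=k):
        if neg in (src, dst):
            continue
        # hinge loss: max(0, margin - sim(src, dst) + sim(src, neg))
        if margin - E[src] @ E[dst] + E[src] @ E[neg] > 0:
            E[dst] += lr * E[src]                 # pull the positive target closer
            E[neg] -= lr * E[src]                 # push the sampled negative away
            E[src] += lr * (E[dst] - E[neg])      # move the source accordingly

before = E[0] @ E[1]
for _ in range(50):
    sgd_step(0, 1)                                # repeatedly train one positive pair
assert E[0] @ E[1] > before                       # their similarity has increased
```

In the actual experiments the full positive-pair generator iterates over all arcs of the flattened graph, and StarSpace handles the sampling and optimization internally.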
Based on the embeddings of the nodes of the graph, we can devise different ways of representing a link between a pair of nodes as a binary operation defined on their embeddings (see [6] for more detail). In particular, we employ the concatenation of the embeddings to represent each relation instance as a single concatenated vector (Figure 1).
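The choice of concatenation matters for directionality: symmetric binary operations produce the same feature vector for both directions of an arc, whereas concatenation does not. A toy illustration:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Symmetric operations cannot distinguish the arc (u, v) from (v, u) ...
assert np.dot(u, v) == np.dot(v, u)
assert np.array_equal(u * v, v * u)  # Hadamard product is symmetric too

# ... whereas concatenation yields a distinct 2d-dimensional feature per
# direction, letting a downstream classifier model asymmetric relations.
assert not np.array_equal(np.concatenate([u, v]), np.concatenate([v, u]))
```

This is the property behind the hypothesis stated in the introduction: the concatenated representation preserves the direction of asymmetric biological relations.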
3 Results
In Table 2 we report the state-of-the-art evaluation scores as provided in Alshahrani et al. [9]. Throughout the rest of this manuscript we refer to these results as the SOTA results for convenience. We use these state-of-the-art results to contrast our classification results in Tables 3 and 4. To simplify the interpretation of our results, Tables 3 and 4 report only the differences in F-measure and ROC AUC scores of our approach w.r.t. the SOTA results. Classification results are divided into two parts, differentiated by the classifier used: i) logistic regression, as in [9] (Table 3), and ii) MLP (Table 4). The two classifiers are trained on concatenated embeddings of entities (nodes), which are obtained from the flattened graphs for each biomedical relation via StarSpace [11], as described in Section 2. All classification results presented here are averaged over 5 folds so that they can be directly and fairly compared with the results in [9].
Table 2. State-of-the-art F-measure and ROC AUC evaluation metrics. Rows in dark gray emphasize the worst-performing link prediction tasks.
3.1 Biomedical link prediction with logistic regression
Overall, we are able to outperform the SOTA results on all relations except has-target (Table 3). It is important to notice that we improve significantly on has-indication and has-disease-phenotype, the two worst-performing relations in Alshahrani et al. [9]. We deliberately consider embeddings of rather small sizes to emphasize the rapidity and scalability of training embeddings using log-linear neural embedding approaches. For all embedding dimensions we train our embeddings for at most 10 epochs, which keeps the overall training time of the embeddings for one specific biomedical relation under 1 minute on a Core i7 desktop with 32 GB of RAM. It is also important to notice that the SOTA results were obtained via the extended DeepWalk algorithm [9] with 512-dimensional embeddings, which takes several hours to train on our machine. Moreover, our learned embeddings are more consistent: their F-measure and ROC AUC scores lie in the 0.92-0.99 range for all relations, whereas the SOTA scores range from 0.72 to 0.94.
3.2 MLP and biomedical link prediction
We hypothesize that our approach of augmenting the embedding dimension via concatenation of entity embeddings is well suited for neural network architectures. Indeed, we are able to obtain very good biological link prediction classifiers by using concatenated embeddings and multi-layer perceptrons. We experimented with different shallow and deep architectures (hidden layer sizes including [20, 20, 20] and [200, 200, 200]), which yielded very similar performance. The results of a shallow neural network with one hidden layer consisting of 200 neurons are summarized in Table 4; they empirically show that the concatenation of the neural embeddings to represent a link between two entities fully exploits the non-linear patterns that can be uncovered by the neural network classifiers. As a result, we are able to improve on the SOTA results for all the biological link prediction tasks.
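A sketch of such a classifier with scikit-learn, on synthetic concatenated edge features (the data and labeling rule here are made up; only the single hidden layer of 200 neurons mirrors the setup described above):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic concatenated edge features: two 64-d entity embeddings -> 128-d input.
X = rng.normal(size=(400, 128))
# A made-up non-linear labeling rule, standing in for real relation instances;
# note it depends on an interaction between the two halves of the concatenation.
y = (X[:, 0] * X[:, 64] > 0).astype(int)

# Shallow architecture: one hidden layer of 200 neurons.
clf = MLPClassifier(hidden_layer_sizes=(200,), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```

A linear model cannot capture this kind of multiplicative interaction between the source and target embeddings, which is precisely why the MLP on concatenated embeddings can do better than logistic regression on the same features.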
4 Discussion and conclusion
Recent trends in neuro-symbolic embeddings continue the long-sought quest of the artificial intelligence community to unify two disparate worlds, in which reasoning is performed either in a discrete symbolic space or in a continuous vector space. As a community, we are still somewhere along this road, and to date there is no evidence of a clear way of combining the two approaches. The neuro-symbolic representations based on random walks on RDF data for general biological knowledge, as introduced by [9], are an important first development. The methodology allows for leveraging existing curated and structured biological knowledge (Linked Data), incorporating OWL reasoning, and enabling the inference of hidden links that are implicitly encoded in biological knowledge graphs. However, as our results demonstrate, it is possible to obtain improved classification results for link prediction if we relax the constraints of the multi-relational biological knowledge structure and consider all arcs as part of one semantic relation. Such a relaxation gives rise to a faster and more economical generation of neural embeddings, which can be further used in scalable downstream machine learning tasks. While our results demonstrate excellent prediction performance (all F-measure and ROC AUC scores lie in the 0.92-0.99 range), they also outline that having very well-structured input data is a core ingredient. Indeed, the biological knowledge graph curated by Alshahrani et al. [9] implicitly encodes significant biological knowledge available to the community, and simple log-linear embeddings coupled with shallow neural networks are enough to obtain very good prediction results for the transductive link prediction problems. Unfortunately, the quest of merging symbolic and continuous representations has not yet been fulfilled to its advertised limits: as was already mentioned in [9], symbolic inference (OWL-EL reasoning) does not yield significant improvements on link prediction tasks.
Indeed, we managed to obtain very good scores without any deductive completion of the ABox of the knowledge graph. Another important aspect, which we implicitly emphasized in our work, is the evaluation strategy for the neural embeddings. When dealing with big and rich knowledge graphs, one has to meticulously generate train and test splits that avoid potential leakage of information between the two sets. Failing to do so might lead to models which overfit and are unable to truly perform link prediction. As part of our future work we would like to focus on the creation of different evaluation strategies that test the quality of the neural embeddings and their explainability, and we would like to consider not only transductive link prediction problems but also the more challenging inductive cases.
1. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: A collaboratively created graph database for structuring human knowledge. In: Proceedings of the 2008 ACM SIGMOD international conference on Management of data - SIGMOD '08, New York, New York, USA, ACM Press (jun 2008) 1247
2. Miller, G.A.: WordNet: a lexical database for English. Commun ACM 38(11) (nov 1995) 39–41
3. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. (2013)
4. Nickel, M., Murphy, K., Tresp, V., Gabrilovich, E.: A review of relational machine learning for knowledge graphs. Proc. IEEE 104(1) (jan 2016) 11–33
5. Perozzi, B., Al-Rfou, R., Skiena, S.: DeepWalk: Online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '14, New York, New York, USA, ACM Press (aug 2014) 701–710
6. Grover, A., Leskovec, J.: node2vec: Scalable feature learning for networks. KDD 2016 (aug 2016) 855–864
7. Ristoski, P., Paulheim, H.: RDF2Vec: RDF graph embeddings for data mining. In Groth, P., Simperl, E., Gray, A., Sabou, M., Krötzsch, M., Lecue, F., Flöck, F., Gil, Y., eds.: The semantic web – ISWC 2016. Volume 9981 of Lecture notes in computer science. Springer International Publishing, Cham (2016) 498–514
8. Cochez, M., Ristoski, P., Ponzetto, S.P., Paulheim, H.: Global RDF vector space embeddings. In d'Amato, C., Fernandez, M., Tamma, V., Lecue, F., Cudré-Mauroux, P., Sequeda, J., Lange, C., Heflin, J., eds.: The semantic web – ISWC 2017. Volume 10587 of Lecture notes in computer science. Springer International Publishing, Cham (2017) 190–207
9. Alshahrani, M., Khan, M.A., Maddouri, O., Kinjo, A.R., Queralt-Rosinach, N., Hoehndorf, R.: Neuro-symbolic representation learning on biological knowledge graphs. Bioinformatics 33(17) (sep 2017) 2723–2730
10. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., Dean, J.: Distributed representations of words and phrases and their compositionality. arXiv (oct 2013)
11. Wu, L., Fisch, A., Chopra, S., Adams, K., Bordes, A., Weston, J.: StarSpace: Embed all the things! arXiv (sep 2017)
12. Ashburner, M., Ball, C.A., Blake, J.A., Botstein, D., Butler, H., Cherry, J.M., Davis, A.P., Dolinski, K., Dwight, S.S., Eppig, J.T., Harris, M.A., Hill, D.P., Issel-Tarver, L., Kasarskis, A., Lewis, S., Matese, J.C., Richardson, J.E., Ringwald, M., Rubin, G.M., Sherlock, G.: Gene Ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet 25(1) (may 2000) 25–29
13. Köhler, S., Doelken, S.C., Mungall, C.J., Bauer, S., Firth, H.V., Bailleul-Forestier, I., Black, G.C.M., Brown, D.L., Brudno, M., Campbell, J., FitzPatrick, D.R., Eppig, J.T., Jackson, A.P., Freson, K., Girdea, M., Helbig, I., Hurst, J.A., Jähn, J., Jackson, L.G., Kelly, A.M., Ledbetter, D.H., Mansour, S., Martin, C.L., Moss, C., Mumford, A., Ouwehand, W.H., Park, S.M., Riggs, E.R., Scott, R.H., Sisodiya, S., Van Vooren, S., Wapner, R.J., Wilkie, A.O.M., Wright, C.F., Vulto-van Silfhout, A.T., de Leeuw, N., de Vries, B.B.A., Washington, N.L., Smith, C.L., Westerfield, M., Schofield, P., Ruef, B.J., Gkoutos, G.V., Haendel, M., Smedley, D., Lewis, S.E., Robinson, P.N.: The human phenotype ontology project: linking molecular biology and disease through phenotype data. Nucleic Acids Res 42(Database issue) (jan 2014) D966–74
14. Kibbe, W.A., Arze, C., Felix, V., Mitraka, E., Bolton, E., Fu, G., Mungall, C.J., Binder, J.X., Malone, J., Vasant, D., Parkinson, H., Schriml, L.M.: Disease ontology 2015 update: an expanded and updated database of human diseases for linking biomedical knowledge through disease data. Nucleic Acids Res 43(Database issue) (jan 2015) D1071–8