Knowledge Graph Alignment using String Edit Distance

03/13/2020 · by Navdeep Kaur, et al.

In this work, we propose a novel knowledge base alignment technique based upon string edit distance that exploits the type information about entities and can find similarity between relations of any arity.


1 Knowledge Graph Alignment

Knowledge Graphs (KG) are a rich source of structured knowledge that can be leveraged to solve important AI tasks such as question answering [3], relation extraction [24], and recommender systems [30]. Consequently, the past decade has witnessed the development of large-scale knowledge graphs like Freebase [1], WordNet [13], YAGO [20], DBpedia [9], and NELL [4] that store billions of facts about the world. Typically, a knowledge graph stores knowledge in the form of triples $(h, r, t)$, where $r$ is the relation between entities $h$ and $t$. Even though knowledge graphs are extremely large and growing each day, they are still incomplete, with important links missing between entities. The problem of predicting these missing links between known entities is known as Knowledge Graph Completion (KGC). Over the years, embedding-based models [2, 14, 15, 19, 23, 28] have, unarguably, become the most dominant methodology for Knowledge Graph Completion. A knowledge-graph embedding is a low-dimensional vector representation of entities and relations, which are further composed by linear algebra in order to predict the missing links in a given knowledge graph.
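To make the embedding intuition concrete, the following is a minimal numpy sketch of a TransE-style [2] scoring function, where a triple is plausible when the head embedding translated by the relation embedding lands near the tail embedding. The entity and relation names and the dimension are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension

# Toy embedding tables: one low-dimensional vector per entity / relation.
entity_emb = {e: rng.normal(size=dim) for e in ["BarackObama", "Honolulu"]}
relation_emb = {"bornIn": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE-style plausibility of (h, r, t): the smaller ||h + r - t||,
    the more plausible the (possibly missing) link."""
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

print(transe_score("BarackObama", "bornIn", "Honolulu"))
```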

Though highly useful for solving AI tasks, current knowledge graphs have another downside: each of them has been developed by an independent organization, crawling facts from different sources and using different algorithms, sometimes even resulting in knowledge graphs in different languages. As a result, the knowledge embodied in these different graphs is heterogeneous and complementary [32]. This necessitates integrating them into one unified knowledge graph that would form a richer source of knowledge to solve AI problems more effectively. As a first step towards integrating these knowledge graphs, one needs to address the following tasks, which collectively are known as knowledge graph alignment: (i) entity alignment (entity resolution), which aims at finding entities in the different knowledge bases being integrated that, in fact, refer to the same real-world entity; and (ii) triple-wise alignment, which focuses on finding triples in two knowledge graphs that refer to the same real-world fact. For instance, even though a triple such as (BarackObama, placeOfBirth, Honolulu) in Freebase and (Barack_Obama, birthPlace, Honolulu) in DBpedia represent the same fact, namely that Barack Obama was born in Honolulu, they are represented with different identities of entities and relations in the two knowledge graphs.

Motivated by their success on single-knowledge-graph problems, embeddings have more recently been employed to perform knowledge graph alignment across multiple knowledge graphs. One of the earliest works along this line is Chen et al. [7], which encodes the entities and relations of two knowledge graphs into two separate embedding spaces and proposes three methods of transitioning from an embedding to its counterpart in the other space. Following this work, more advanced approaches for knowledge graph alignment have been proposed, which can be divided into three main categories:

  • The first set of models overcomes the problem of low availability of aligned entities and aligned triples across multiple knowledge graphs. As scarce training data can hinder the performance of the model, these works increase the size of the training data either iteratively [32], via a bootstrapping approach [22], or by a co-training technique [6].

  • Another line of research is based on the idea that, in addition to the knowledge in standard relation triples, there is rich semantic knowledge present in knowledge graphs in the form of properties and text descriptions of entities, which can be harnessed to improve the performance of the model [21, 31, 33].

  • The third line of research focuses on designing models that overcome the limitations of translation-based embedding models [10]; these exploit standard Graph Convolutional Networks [25], their relational variants [26, 29], and Wasserstein GAN [17] in order to learn the embeddings of entities and relations in multiple knowledge graphs.

1.1 Motivation

In this work, we propose a novel knowledge base alignment technique based upon string edit distance that addresses the following limitations of the existing models:

  • Even though past techniques have exploited the supplementary knowledge present in KBs in the form of text descriptions of entities and properties of entities as attributional embeddings, none of them has exploited the rich semantic knowledge present in the type descriptions of the entities. As shown in the past [5, 8, 12, 27], incorporating type information into a single-KB model boosts the model's performance. Likewise, we conjecture a performance improvement on the knowledge alignment task by utilizing type information. Further, the use of type information can help the model deal with polysemy issues present in KBs.

  • As we explain in detail in the next section, we consider multiple possible interactions between the triples of two knowledge graphs by computing all possible edit sequences between two triples. This is different from the linear transformation model [7], which considers only one possible way of transforming between corresponding entities/relations in two triples. Multiple transformations allow multiple ways in which two similar triples can be brought closer to each other in embedding space.

  • Finally, all the past models have considered triple-wise alignment between triples, whereas our proposed model can find similarity between relations of any arity. For instance, if our task is to perform threshold-based classification between two relations of different arity, say $r_1(e_1, e_2)$ and $r_2(e_1, e_2, e_3)$ with threshold $\theta$ for positive classification, then our proposed model can still compute the edit distance between the two relations.

2 Knowledge Alignment by String Edit Distance in Embedding Space

We consider a multi-lingual knowledge base $KB$ that consists of a set of languages $\mathcal{L}$. Specifically, we consider ordered language pairs $(L_1, L_2) \in \mathcal{L} \times \mathcal{L}$, where language $L_1$ consists of a set of entities $E_1$, relations $R_1$, and triples $T_1$. Similarly, $L_2$ consists of entities $E_2$, relations $R_2$, and triples $T_2$. We aim at finding a distance between triples such that the distance between aligned triples is always less than the distance between misaligned triples. Formally,

$$d(t_1, t_2) < d(t_1', t_2') \qquad (1)$$

where $(t_1, t_2) \in \delta$, $(t_1', t_2') \in \delta'$, and $\delta \subseteq T_1 \times T_2$ denotes the set of aligned triple pairs given as training data. The corrupted sample set is defined as $\delta' = \{(t_1', t_2)\} \cup \{(t_1, t_2')\}$, where $t_1' \in T_1 \setminus \{t_1\}$ and $t_2' \in T_2 \setminus \{t_2\}$.
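As an illustration of the corrupted sample set $\delta'$, the sketch below swaps one side of an aligned pair for a random non-matching triple from the same knowledge graph; the toy triples are illustrative and not the paper's training data.

```python
import random

# Toy aligned pairs delta: (triple from KG1, triple from KG2).
delta = [
    (("BarackObama", "placeOfBirth", "Honolulu"),
     ("Barack_Obama", "birthPlace", "Honolulu")),
    (("Paris", "capitalOf", "France"),
     ("Paris", "capital", "France")),
]
triples_1 = [t1 for t1, _ in delta]
triples_2 = [t2 for _, t2 in delta]

def corrupt(t1, t2):
    """Return a negative pair (t1', t2) or (t1, t2') by replacing one side
    of an aligned pair with a different triple from the same KG."""
    if random.random() < 0.5:
        return random.choice([t for t in triples_1 if t != t1]), t2
    return t1, random.choice([t for t in triples_2 if t != t2])

print(corrupt(*delta[0]))
```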

2.1 String-edit distance

The distance function of our model is inspired by the edit-distance computation between a pair of strings $(x, y)$ by the memoryless stochastic transducer proposed by Ristad and Yianilos [18, 16]. The idea is that a transducer receives an input string $x$ and performs a sequence of edit operations until it reaches the terminal stage, at which point it outputs the string $y$. The edit operations $z$ performed by the transducer are defined as: $(a \rightarrow b)$: substitution of character $a$ by character $b$; $(a \rightarrow \epsilon)$: deletion of character $a$; $(\epsilon \rightarrow b)$: insertion of character $b$. The score of one sequence of edit operations between $(x, y)$, called an edit sequence $s$, is defined as the product of the costs of all the edit operations along the sequence. The total edit distance between the pair of strings is then the sum over all edit sequences $S(x, y)$:

$$d(x, y) = \sum_{s \in S(x, y)} \prod_{z \in s} c(z) \qquad (2)$$

The cost $c(z)$ of each edit operation is learnable and was optimized by the EM algorithm in that model.
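The sum over all edit sequences in eqn (2) need not be enumerated explicitly; it can be accumulated by a forward-style dynamic program, in the spirit of the Ristad and Yianilos transducer [18]. In this sketch the cost functions are fixed toy stand-ins for the EM-learned costs.

```python
import numpy as np

def total_edit_distance(x, y, sub, dele, ins):
    """alpha[i, j] accumulates, over all edit sequences turning x[:i]
    into y[:j], the product of the operation costs (cf. eqn 2)."""
    n, m = len(x), len(y)
    alpha = np.zeros((n + 1, m + 1))
    alpha[0, 0] = 1.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0:            # deletion of x[i-1]
                alpha[i, j] += alpha[i - 1, j] * dele(x[i - 1])
            if j > 0:            # insertion of y[j-1]
                alpha[i, j] += alpha[i, j - 1] * ins(y[j - 1])
            if i > 0 and j > 0:  # substitution x[i-1] -> y[j-1]
                alpha[i, j] += alpha[i - 1, j - 1] * sub(x[i - 1], y[j - 1])
    return alpha[n, m]

print(total_edit_distance("abc", "abd",
                          sub=lambda a, b: 0.9 if a == b else 0.05,
                          dele=lambda a: 0.02,
                          ins=lambda b: 0.02))
```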

2.2 String-edit operations

Motivated by Ristad and Yianilos's learning of string-edit distance, our goal is to compute the distance between the two triples in eqn (1) by formulating them as a pair of strings. We consider each aligned triple pair $(t_1, t_2)$ such that $t_1$ is analogous to the input string $x$ and $t_2$ is analogous to the output string $y$. Specifically, by treating a triple as a three-character string whose characters are its head entity, relation, and tail entity, the edit distance between the two resulting strings can be computed by making the following assumptions:

  • Our basic unit of edit operation is one entity or one relation. Further, each entity and each relation is represented by a low-dimensional embedding.

  • Our basic edit operations are: (a) substitution of an entity or relation $a$ in $x$ by any other entity or relation $b$ in $y$, i.e. $(a \rightarrow b)$ for every $a \in x$ and $b \in y$; (b) deletion of an entity or relation $a$ present in $x$, i.e. $(a \rightarrow \epsilon)$ for every $a \in x$; (c) insertion of an entity or relation $b$ present in $y$, i.e. $(\epsilon \rightarrow b)$ for every $b \in y$. We aim to perform these edit operations in embedding space; a sketch of the enumeration follows below.
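The sketch below makes the enumeration in the assumptions above concrete, treating each triple as a three-character string; the identifiers and the null marker are illustrative.

```python
NULL = "<null>"  # stand-in for the special null character

def basic_edit_ops(x_triple, y_triple):
    """All basic edit operations between two triples viewed as strings:
    substitutions (a -> b), deletions (a -> null), insertions (null -> b).
    Cross-type operations (e.g. entity -> relation) are included here;
    the separate embedding spaces described next deal with them."""
    subs = [(a, b) for a in x_triple for b in y_triple]
    dels = [(a, NULL) for a in x_triple]
    inss = [(NULL, b) for b in y_triple]
    return subs + dels + inss

ops = basic_edit_ops(("BarackObama", "placeOfBirth", "Honolulu"),
                     ("Barack_Obama", "birthPlace", "Honolulu"))
print(len(ops))  # 9 substitutions + 3 deletions + 3 insertions = 15
```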

Figure 1: Knowledge graph alignment by string-edit distance in embedding space.

As can be seen, some of these edit operations, such as substituting an entity by a relation, are semantically incorrect. To overcome this, we consider three embedding spaces: entity-space, relation-space, and string-space (cf. Fig. 1). This ensures that the original entities' (or relations') information is preserved while they participate in the string-edit distance computation. Secondly, it also guarantees that entities remain semantically distinct from relations, as we locate them in separate vector spaces [11].

Specifically, we model all the entities in languages $L_1$ and $L_2$ to reside in a $d_e$-dimensional embedding space, i.e. $\mathbf{e} \in \mathbb{R}^{d_e}$. Further, all the relations in $L_1$ and $L_2$ lie in a $d_r$-dimensional embedding space, i.e. $\mathbf{r} \in \mathbb{R}^{d_r}$. In order to perform the edit operations between two triples $(t_1, t_2)$, their constituent entities and relations are first projected onto the $d_s$-dimensional string-space. For example, the embeddings corresponding to the triples $t_1 = (h_1, r_1, e_1)$ and $t_2 = (h_2, r_2, e_2)$ in equation (1) are projected onto the string-space as follows:

$$x = \big(\mathbf{M}_{\tau(h_1)} \mathbf{h}_1, \; \mathbf{M}_{r_1} \mathbf{r}_1, \; \mathbf{M}_{\tau(e_1)} \mathbf{e}_1\big) \qquad (3)$$
$$y = \big(\mathbf{M}_{\tau(h_2)} \mathbf{h}_2, \; \mathbf{M}_{r_2} \mathbf{r}_2, \; \mathbf{M}_{\tau(e_2)} \mathbf{e}_2\big) \qquad (4)$$

where $\mathbf{M}_{r} \in \mathbb{R}^{d_s \times d_r}$ and $\mathbf{M}_{\tau(\cdot)} \in \mathbb{R}^{d_s \times d_e}$. Also, we enforce the constraint that the embeddings and the projected embeddings lie inside the unit ball, i.e. their $\ell_2$-norms are at most 1.

The matrices $\mathbf{M}_{r_1}$ and $\mathbf{M}_{r_2}$ are the projection matrices that project the relations from the relation-space to the string-space. Similarly, $\mathbf{M}_{\tau(e)}$ is the projection matrix that projects entities from the entity-space to the string-space. More specifically, the projection matrix $\mathbf{M}_{\tau(e)}$ represents the type-matrix that encodes the type $\tau(e)$ of the entity $e$ appearing inside the relation $r$. The total number of type-matrices equals the total number of possible entity types in a knowledge base.
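A sketch of the projection step, under assumed dimensionalities and with illustrative type names; only the shapes matter: relation matrices map $\mathbb{R}^{d_r} \to \mathbb{R}^{d_s}$ and type matrices map $\mathbb{R}^{d_e} \to \mathbb{R}^{d_s}$.

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r, d_s = 40, 30, 50   # entity-, relation-, string-space dimensions

def clip_to_unit_ball(v):
    """Enforce the unit-ball constraint ||v|| <= 1."""
    n = np.linalg.norm(v)
    return v / n if n > 1.0 else v

# Native-space embeddings of one triple (h, r, t).
h = clip_to_unit_ball(rng.normal(size=d_e))
r = clip_to_unit_ball(rng.normal(size=d_r))
t = clip_to_unit_ball(rng.normal(size=d_e))

# One projection matrix per relation (M_r), and one per entity type,
# shared by all entities of that type (here: person, city).
M_r = rng.normal(size=(d_s, d_r))
M_person = rng.normal(size=(d_s, d_e))
M_city = rng.normal(size=(d_s, d_e))

# The triple becomes a three-character "string" in string-space (cf. eqn 3).
x = [clip_to_unit_ball(M_person @ h),
     clip_to_unit_ball(M_r @ r),
     clip_to_unit_ball(M_city @ t)]
```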

Once the entities and relations of the aligned pairs have been projected to the string-space, they are considered semantically equal. Henceforth, they represent characters of strings upon which we perform string-edit operations in the string-space. Consequently, each aligned triple pair provided as training data represents a pair of transformed triples after projection. These transformed triples are modeled as a string pair $(x, y)$ in string-space, where each character of a string has a corresponding embedding obtained by the projection operation on entities and relations residing in their original embedding spaces. As a next step, we consider the embeddings of the characters of string $x$ as the set $\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$ and of string $y$ as $\{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$, and define the edit operations (substitution, deletion, insertion) as follows; a small sketch follows the list:

  • substitution operation $(a \rightarrow b)$ is the difference between the embeddings of $a$ and $b$, i.e. $\mathbf{v}_{(a \rightarrow b)} = \mathbf{a} - \mathbf{b}$

  • deletion operation $(a \rightarrow \epsilon)$ is the difference between the embedding of character $a$ in the input string $x$ and a special null embedding $\boldsymbol{\epsilon}$: $\mathbf{v}_{(a \rightarrow \epsilon)} = \mathbf{a} - \boldsymbol{\epsilon}$

  • insertion operation $(\epsilon \rightarrow b)$ is the difference between the special null embedding $\boldsymbol{\epsilon}$ and the embedding of character $b$ in the output string $y$: $\mathbf{v}_{(\epsilon \rightarrow b)} = \boldsymbol{\epsilon} - \mathbf{b}$
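A minimal sketch of the three edit-operation vectors; the null embedding is assumed here to be a learned vector in string-space (randomly initialized for illustration).

```python
import numpy as np

d_s = 50
rng = np.random.default_rng(0)
null_emb = rng.normal(size=d_s)   # special null embedding (assumed learned)

def substitution(a, b):
    """(a -> b): difference of the two character embeddings."""
    return a - b

def deletion(a):
    """(a -> null): character embedding minus the null embedding."""
    return a - null_emb

def insertion(b):
    """(null -> b): null embedding minus the character embedding."""
    return null_emb - b
```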

The next step after computing the edit operations is determining the edit sequences between the string pair, which is explained in the next section.

2.3 Edit-sequence and edit-distance computation

As discussed previously, an edit sequence is a sequence of edit operations $z$ performed between a pair of strings, starting at the input string $x$ and reaching the output string $y$. We define the score of one edit sequence $s$ as the element-wise product of the embeddings $\mathbf{v}_z$ obtained from its edit operations between the string pair $(x, y)$, followed by an L2-norm in order to obtain a scalar value for one possible edit distance between $(x, y)$. Formally,

$$d_s(x, y) = \Big\| \bigodot_{z \in s} \mathbf{v}_z \Big\|_2 \qquad (5)$$

where the $\mathbf{v}_z$ are the vectors obtained for each edit operation previously in the string-space, and $\bigodot$ denotes the element-wise (Hadamard) product of the vectors, whose $k$-th element is $\prod_{z \in s} \mathbf{v}_z[k]$. As there can be multiple possible edit sequences between the triples $(t_1, t_2)$, the final distance between the pair of relation triples is defined as the average over all edit sequences:

$$d(t_1, t_2) = \frac{1}{N} \sum_{s \in S(x, y)} d_s(x, y) \qquad (6)$$

where $N = |S(x, y)|$ is the number of edit sequences between the triples $(t_1, t_2)$.
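Putting eqns (5) and (6) together, the sketch below scores each edit sequence by the element-wise product of its operation vectors followed by an L2 norm, then averages over sequences; the inputs are assumed to be vectors already projected to string-space.

```python
import numpy as np

def sequence_distance(op_vectors):
    """Eqn (5): element-wise product of the edit-operation vectors of one
    edit sequence, reduced to a scalar by an L2 norm."""
    prod = np.ones_like(op_vectors[0])
    for v in op_vectors:
        prod = prod * v
    return np.linalg.norm(prod)

def triple_distance(edit_sequences):
    """Eqn (6): average the per-sequence distances over all N sequences."""
    return sum(sequence_distance(s) for s in edit_sequences) / len(edit_sequences)

# Toy example: two edit sequences of two operation vectors each.
rng = np.random.default_rng(0)
seqs = [[rng.normal(size=50), rng.normal(size=50)] for _ in range(2)]
print(triple_distance(seqs))
```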

To train the proposed model, we minimize a margin-based ranking criterion over the aligned training pairs $\delta$:

$$\mathcal{J} = \sum_{(t_1, t_2) \in \delta} \; \sum_{(t_1', t_2') \in \delta'} \big[ \gamma + d(t_1, t_2) - d(t_1', t_2') \big]_+ \qquad (7)$$

where $(t_1, t_2) \in \delta$ and $(t_1', t_2') \in \delta'$, $[x]_+ = \max\{0, x\}$, and the margin $\gamma$ is a hyperparameter. The negative examples are obtained by corrupting positive examples (cf. eqn (1)).
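Finally, a sketch of the margin-based ranking objective in eqn (7); in a real implementation the distances would be differentiable tensors minimized by gradient descent, rather than plain floats as here.

```python
def ranking_loss(pos_dists, neg_dists, gamma=1.0):
    """Eqn (7): hinge on gamma + d(aligned) - d(corrupted), summed over
    positive pairs and their corrupted counterparts."""
    return sum(max(0.0, gamma + dp - dn)
               for dp, dn in zip(pos_dists, neg_dists))

# Aligned pairs should end up at least gamma closer than corrupted ones.
print(ranking_loss([0.2, 0.5], [1.4, 0.6]))  # max(0, -0.2) + max(0, 0.9) = 0.9
```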

References

  • [1] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD, Cited by: §1.
  • [2] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In NeurIPS, Cited by: §1.
  • [3] A. Bordes, J. Weston, and N. Usunier (2014) Open question answering with weakly supervised embedding models. In ECML-PKDD, Cited by: §1.
  • [4] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka, Jr., and T. M. Mitchell (2010) Toward an architecture for never-ending language learning. In AAAI, Cited by: §1.
  • [5] K. Chang, W. Yih, B. Yang, and C. Meek (2014) Typed tensor decomposition of knowledge bases for relation extraction. In EMNLP, Cited by: 1st item.
  • [6] M. Chen, Y. Tian, K. Chang, S. Skiena, and C. Zaniolo (2018) Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In IJCAI, Cited by: 1st item.
  • [7] M. Chen, Y. Tian, M. Yang, and C. Zaniolo (2016) Multi-lingual knowledge graph embeddings for cross-lingual knowledge alignment. In IJCAI, Cited by: 2nd item, §1.
  • [8] D. Krompaß, S. Baier, and V. Tresp (2015) Type-constrained representation learning in knowledge graphs. In ISWC, Cited by: 1st item.
  • [9] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. Mendes, S. Hellmann, M. Morsey, P. Van Kleef, S. Auer, and C. Bizer (2014) DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal 6 (2). Cited by: §1.
  • [10] S. Li, X. Li, R. Ye, M. Wang, H. Su, and Y. Ou (2018) Non-translational alignment for multi-relational networks. In IJCAI, Cited by: 3rd item.
  • [11] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu (2015) Learning entity and relation embeddings for knowledge graph completion. In AAAI, Cited by: §2.2.
  • [12] S. Ma, J. Ding, W. Jia, K. Wang, and M. Guo (2017) TransT: type-based multiple embedding representations for knowledge graph completion. In ECML-PKDD, Cited by: 1st item.
  • [13] G. A. Miller (1995) WordNet: a lexical database for English. Communications of the ACM 38 (11). Cited by: §1.
  • [14] M. Nickel, L. Rosasco, and T. Poggio (2016) Holographic embeddings of knowledge graphs. In AAAI, Cited by: §1.
  • [15] M. Nickel, V. Tresp, and H. Kriegel (2011) A three-way model for collective learning on multi-relational data. In ICML, Cited by: §1.
  • [16] J. Oncina and M. Sebban (2006) Learning stochastic edit distance: application in handwritten character recognition. Pattern Recognition 39 (9). Cited by: §2.1.
  • [17] S. Pei, L. Yu, and X. Zhang (2018) Improving cross-lingual entity alignment via optimal transport. In IJCAI, Cited by: 3rd item.
  • [18] E. S. Ristad and P. N. Yianilos (1998) Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (5). Cited by: §2.1.
  • [19] R. Socher, D. Chen, C. D. Manning, and A. Ng (2013) Reasoning with neural tensor networks for knowledge base completion. In NeurIPS, Cited by: §1.
  • [20] F. M. Suchanek, G. Kasneci, and G. Weikum (2007) Yago: a core of semantic knowledge. In WWW, Cited by: §1.
  • [21] Z. Sun, W. Hu, and C. Li (2017) Cross-lingual entity alignment via joint attribute-preserving embedding. In ISWC, Cited by: 2nd item.
  • [22] Z. Sun, W. Hu, Q. Zhang, and Y. Qu (2018) Bootstrapping entity alignment with knowledge graph embedding. In IJCAI, Cited by: 1st item.
  • [23] T. Trouillon, J. Welbl, S. Riedel, E. Gaussier, and G. Bouchard (2016) Complex embeddings for simple link prediction. In ICML, Cited by: §1.
  • [24] Z. Wang, J. Zhang, J. Feng, and Z. Chen (2014) Knowledge graph and text jointly embedding. In EMNLP, Cited by: §1.
  • [25] Z. Wang, Q. Lv, X. Lan, and Y. Zhang (2018) Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, Cited by: 3rd item.
  • [26] Y. Wu, X. Liu, Y. Feng, Z. Wang, R. Yan, and D. Zhao (2019) Relation-aware entity alignment for heterogeneous knowledge graphs. In IJCAI, Cited by: 3rd item.
  • [27] R. Xie, Z. Liu, and M. Sun (2016) Representation learning of knowledge graphs with hierarchical types. In IJCAI, Cited by: 1st item.
  • [28] B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2015) Embedding entities and relations for learning and inference in knowledge bases. In ICLR, Cited by: §1.
  • [29] R. Ye, X. Li, Y. Fang, H. Zang, and M. Wang (2019) A vectorized relational graph convolutional network for multi-relational network alignment. In IJCAI, Cited by: 3rd item.
  • [30] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W. Ma (2016) Collaborative knowledge base embedding for recommender systems. In KDD, Cited by: §1.
  • [31] Q. Zhang, Z. Sun, W. Hu, M. Chen, L. Guo, and Y. Qu (2019) Multi-view knowledge graph embedding for entity alignment. In IJCAI, Cited by: 2nd item.
  • [32] H. Zhu, R. Xie, Z. Liu, and M. Sun (2017) Iterative entity alignment via joint knowledge embeddings. In IJCAI, Cited by: 1st item, §1.
  • [33] Q. Zhu, X. Zhou, J. Wu, J. Tan, and L. Guo (2019) Neighborhood-aware attentional representation for multilingual knowledge graphs. In IJCAI, Cited by: 2nd item.