Is Aligning Embedding Spaces a Challenging Task? An Analysis of the Existing Methods

02/21/2020
by   Russa Biswas, et al.
FIZ Karlsruhe GmbH

Representation learning of words and Knowledge Graphs (KGs) into low-dimensional vector spaces, along with its applications to many real-world scenarios, has recently gained momentum. In order to make use of multiple KG embeddings for knowledge-driven applications such as question answering, named entity disambiguation, and knowledge graph completion, alignment of the different KG embedding spaces is necessary. In addition to multilinguality and domain-specific information, different KGs pose the problem of structural differences, making the alignment of KG embeddings more challenging. This paper provides a theoretical analysis and comparison of the state-of-the-art alignment methods between two embedding spaces representing entity-entity and entity-word. It also assesses the capabilities and shortcomings of the existing alignment methods in the context of different applications.



1 Introduction

In recent years, there has been rapid growth in studies related to Representation Learning (RL), i.e., learning representations of input data by encoding different variations of the features. It plays a key role in performing machine learning tasks [bengio2013representation]. The distributed representation of text in the form of word and document vectors [DBLP:journals/corr/abs-1301-3781], as well as of the entities and relations of a Knowledge Graph (KG) in the form of their corresponding vectors [DBLP:conf/nips/BordesUGWY13, DBLP:conf/aaai/LinLSLZ15], have evolved as the prime elements for various Natural Language Processing (NLP) tasks such as Entity Linking [DBLP:conf/esws/MorenoBBDLRTG17] and Named Entity Recognition and Disambiguation [yamada2016joint]. Word embeddings are low-dimensional vector representations of words capable of capturing the context of a word in a document, semantic and syntactic similarity, as well as its relations with other words. Similarly, KG embeddings are low-dimensional vector representations of the entities and relations of a KG, preserving its (local) inherent structure and capturing the semantic similarity between entities. Therefore, each embedding space exhibits different characteristics based on the semantic differences in the source of information provided as input.

It was first observed in [DBLP:journals/corr/MikolovLS13] that continuous word embeddings exhibit similar structures across languages, even for distant ones such as English and Vietnamese. This similarity has been exploited by learning a linear mapping from a source to a target embedding space. A parallel vocabulary of five thousand words was used as anchor points to learn the mapping, which was evaluated on word translation. In [DBLP:conf/eacl/FaruquiD14, DBLP:conf/naacl/XingWLL15, DBLP:conf/emnlp/ArtetxeLA16], the authors have attempted to improve cross-lingual word embeddings based on bilingual word lexicons. Recent approaches aim to learn unsupervised alignment of monolingual word embedding spaces into a unified vector space without using any parallel corpora [DBLP:conf/coling/CaoZZM16, conneau2017word].
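The linear-mapping idea above can be sketched in a few lines: given the embeddings of anchor word pairs, a mapping matrix is fit by least squares. This is a minimal sketch of the general technique; the function and variable names, dimensions, and data below are illustrative and not taken from the cited papers.

```python
import numpy as np

def learn_linear_mapping(X_src, Y_tgt):
    """Fit W minimizing ||X_src @ W - Y_tgt||_F, as in Mikolov-style
    cross-lingual mapping; rows are embeddings of anchor word pairs."""
    W, *_ = np.linalg.lstsq(X_src, Y_tgt, rcond=None)
    return W

# Toy setup: 5 anchor pairs in 4-dimensional source/target spaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))        # source-language anchor embeddings
W_true = rng.normal(size=(4, 4))   # hidden "ground-truth" map
Y = X @ W_true                     # target-language anchor embeddings
W = learn_linear_mapping(X, Y)     # recovers W_true on noiseless data
```

Unseen source words are then translated by mapping their vectors with W and taking the nearest neighbor in the target space.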

On the other hand, the alignment of embedding spaces generated from heterogeneous input sources, such as multiple KGs or a KG and text, has been well studied in [hao2016joint, chen2017multilingual, sun2017cross, zhu2017iterative]. Aligning the embedding spaces of two KGs means aligning the entities that represent the same semantic concepts; for different KGs, the alignment would yield links between them. The key challenges encountered while aligning KG embeddings are as follows: (i) Each KG is constructed differently, i.e., there are variations in the source of information or in the way the information is structured. For instance, DBpedia is constructed by automatically processing Wikipedia infoboxes, whereas Wikidata is assembled through the collaborative efforts of its user base. (ii) KGs are often multilingual; although links between the same entities in different languages already exist, these links are still far from complete. (iii) Multilingual KGs may contain complementary or even contradictory information about the same entity in different languages. (iv) Multiple KGs of the same domain have a considerable overlap in their entities and relations, yet only some of the equivalent entity links exist between them.

The context of an entity mention in textual resources differs from the contextual information of the same entity provided in the KG (i.e., through its surrounding nodes and edges). The textual context provides insight into where the entity has been mentioned in a certain document, whereas the KG provides structured information about the entity and its relations with other entities. Hence, jointly training words and entities in the same vector space yields effective results in NLP applications such as Named Entity Recognition and Disambiguation [yamada2016joint].

This work focuses on the analysis of existing methods for aligning embedding spaces generated from heterogeneous input sources. The contributions include:

  • A detailed description of the current state-of-the-art methods for aligning embedding spaces, their advantages, and their challenges.

  • A discussion of the gaps in the existing research, with an indication of directions for future work.

To the best of our knowledge, this is the first study discussing and comparing the different alignment methods for embedding spaces. In this paper, the terms embedding space and vector space are used interchangeably. The rest of the paper is organized as follows: Sect. 2 presents the problem formulation. In Sect. 3, a detailed analysis of alignment methods for embedding spaces is provided, followed by a discussion of the alignment results in Sect. 4. Future work and the conclusion are described in Sect. 5.

2 Problem Formulation

In this study, the alignment models are grouped into two categories based on the type of the aligned spaces: (i) Entity - Entity and (ii) Entity - Word.

  • Entity - Entity: Independent embedding spaces generated from two KGs are aligned to form a common vector space.

  • Entity - Word: Word and entity embeddings are learned jointly in the same vector space.

Since the embedding spaces are diverse, the challenges in the alignment task that are analysed in this study are as follows:

  • RQ1: How can the structural differences of different resources (e.g., text and KG, or different KGs such as DBpedia and Wikidata) be encoded in the aligned entity space?

  • RQ2: How is the heterogeneity of different KGs, in the case of entity - entity embedding space alignment, captured and represented in the resultant vector space?

  • RQ3: How can equivalent relations be combined, and what is their effect on the aligned embedding space generated from different KGs?

3 Alignment of Embedding Spaces

This section discusses the alignment methods proposed so far for both of the aforementioned categories, i.e., entity-entity and entity-word, followed by a discussion of their drawbacks with respect to the research questions.

3.1 Entity - Entity Embedding Alignment

JE [hao2016joint] jointly learns embeddings of multiple KGs in a unified vector space, with a focus on the entity alignment task. A set of already aligned entities between the KGs is used as a seed to learn embeddings through TransE [DBLP:conf/nips/BordesUGWY13]. The loss function is optimized by stochastic gradient descent, and a projection matrix is introduced which serves as the transformation between the different KG vector spaces. Considering two KGs G1 and G2, the embeddings are learned by minimizing a margin-based objective function over the training set,

L = Σ_{(h,r,t)} Σ_{(h',r',t')} [γ + ||h + r − t|| − ||h' + r' − t'||]_+ + α L_A   (1)

where (h,r,t) denotes a positive triple and (h',r',t') a corrupted one, [x]_+ = max(0, x) denotes the positive part, γ is a margin hyper-parameter, α is a ratio hyper-parameter, and the alignment term is taken over the selected seed alignment A, whose entities are represented by e in G1 and e' in G2:

L_A = Σ_{(e,e')∈A} ||e − e'||   (2)

The first term in Eq. (1) acts as a soft constraint on the entities and relations, preventing the loss from being trivially minimized by increasing the embedding norms; L_A in Eq. (1) drives the alignment between the KGs. The transformation between the different KG vector spaces is realized by a projection matrix M, so that the second part (L_A) of Eq. (1) becomes Σ_{(e,e')∈A} ||M e − e'||.

The algorithm has been evaluated on two datasets: (i) FB15K, a subset of the Freebase KG [bollacker2008freebase], which is split into two parts, each considered as one KG, and (ii) the DB-FB dataset, which consists of a subset of triples from DBpedia and Freebase. The results show that the higher the number of aligned seeds, the better the performance of the model. The model performs well on the DB-FB dataset; however, it is difficult to assess its accuracy on the whole of DBpedia and Freebase due to sparsity issues.
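The TransE building block used by JE (and by most of the models below) is small enough to sketch directly. The vectors and the margin value here are toy examples, not values from the paper.

```python
import numpy as np

def transe_energy(h, r, t):
    """TransE energy ||h + r - t||: low when the triple is plausible."""
    return float(np.linalg.norm(h + r - t))

def margin_loss(pos, neg, gamma=1.0):
    """Margin-based ranking term [gamma + E(pos) - E(neg)]_+ for one
    positive triple and one corrupted triple."""
    return max(0.0, gamma + transe_energy(*pos) - transe_energy(*neg))

h, r, t = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
t_bad = np.array([3.0, -1.0])                  # corrupted tail
loss = margin_loss((h, r, t), (h, r, t_bad))   # 0.0: margin satisfied
```

Training pushes positive triples toward zero energy while keeping corrupted triples at least a margin away.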

On the other hand, MTransE [chen2017multilingual] is a translation-based multilingual KG embedding model. As a first step, it encodes the entities and relations of each language in a separate vector space using TransE, followed by a transition of each vector to its cross-lingual counterpart in the other space. The loss function is the weighted sum of the knowledge model and the alignment model,

J = S_K + α S_A   (3)

where S_K is the knowledge model, generated by using TransE for each language, and α is a hyper-parameter that weights the alignment model S_A. The knowledge model measures the plausibility of all given triples,

S_K = Σ_{L∈𝓛} Σ_{(h,r,t)∈G_L} ||h + r − t||   (4)

where 𝓛 represents the set of languages in the multilingual KG and G_L the graph of language L. The model is trained on partially aligned graphs, i.e., aligned triples for the cross-lingual KGs. The loss function for the alignment model is

S_A = Σ_{(T,T')∈δ(L_i,L_j)} S_a(T, T')   (5)

where the alignment score S_a iterates through all pairs (T, T') of aligned triples between languages L_i and L_j. Three different techniques have been used to align the vector spaces: distance-based axis calibration, translation vectors, and linear transformation. The distance-based axis calibration penalizes the alignment based on the distances of cross-lingual counterparts, such that multilingual expressions of the same entity lie closer together after alignment, using

S_a = ||h − h'|| + ||t − t'||   (6)

The coordinates of the same relation in the multilingual KGs are converged using

S_a = ||h − h'|| + ||r − r'|| + ||t − t'||   (7)

The translation vector model consolidates alignment into the graph structures and characterizes cross-lingual transitions as regular relational translations,

S_a = ||h + v^e_{ij} − h'|| + ||r + v^r_{ij} − r'|| + ||t + v^e_{ij} − t'||   (8)

where v^e_{ij} and v^r_{ij} are the entity-dedicated and relation-dedicated translation vectors between L_i and L_j, such that e + v^e_{ij} ≈ e' for embedding vectors e, e' of the same entity in both languages. A similar translation holds for relations. Finally, the linear transformation model considers transitions as topological transformations of the embedding spaces, without assuming similarity of spatial emergence, and is defined as

S_a = ||M^e_{ij} h − h'|| + ||M^e_{ij} t − t'||   (9)

where M^e_{ij} is a d×d matrix learned as a linear transformation on entity vectors from L_i to L_j, and d is the dimensionality of the embedding space. For relations, the transformation model is modified to

S_a = ||M^e_{ij} h − h'|| + ||M^r_{ij} r − r'|| + ||M^e_{ij} t − t'||   (10)

The model has been evaluated on WK3l-15k, a dataset containing entities from the English, French, and German versions of Wikipedia. The experiments imply that the variants using linear transformation work better for cross-lingual entity matching. The key difference between JE and MTransE is that JE uses aligned entities as seeds, whereas MTransE uses aligned triples.
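The three MTransE alignment variants differ only in how an entity vector from one language space is compared to its counterpart in the other. A minimal sketch with illustrative names and toy values:

```python
import numpy as np

def score_calibration(e1, e2):
    """Axis calibration: counterparts should coincide, score ||e1 - e2||."""
    return float(np.linalg.norm(e1 - e2))

def score_translation(e1, e2, v):
    """Translation vector: e1 + v should reach e2."""
    return float(np.linalg.norm(e1 + v - e2))

def score_linear(e1, e2, M):
    """Linear transformation: M @ e1 should reach e2."""
    return float(np.linalg.norm(M @ e1 - e2))

e_en = np.array([1.0, 2.0])   # toy English-space entity vector
e_fr = np.array([2.0, 4.0])   # its toy French-space counterpart
v = e_fr - e_en               # a perfect translation vector for this pair
M = 2.0 * np.eye(2)           # a perfect linear map for this pair
```

All three scores are zero for a perfectly aligned pair; training minimizes them over the seed triples.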

Joint Attribute-Preserving Embedding (JAPE) [sun2017cross] combines structure embedding and attribute embedding to align entities in different KGs. A set of aligned entities between the two KGs is used as the seed for this model. TransE is used to generate the structure embedding, whereas Skip-gram is used for the attribute embedding. Each pair in the seed alignment shares the same representation, serving as a bridge between the two KGs to build an overlay relationship graph. The objective function for the structure embedding is

O_SE = Σ_{tr∈T} ||h + r − t||² − β Σ_{tr'∈T'} ||h' + r' − t'||²   (11)

where T denotes the set of all positive triples, i.e., the existing triples, and T' denotes the associated negative triples generated by replacing either the head or the tail with a random entity. β is a ratio hyper-parameter in the range [0,1] that weights positive against negative triples. For the attribute embedding, given an aligned entity pair, it is assumed that the attributes of the two entities are highly correlated. The Skip-gram model is used to predict correlated attributes for a given attribute by minimizing the loss

O_AE = −Σ_{(a,c)∈H} log p(c | a)   (12)

where H denotes the set of positive pairs, i.e., pairs in which c is actually a correlated attribute of a, and p(c | a) denotes the probability of predicting c given a. The joint attribute-preserving embedding uses matrices of pairwise similarities between entities as supervised information and minimizes the objective

O_S = ||E_1 − S E_2||²_F + δ (||E_1 − S_1 E_1||²_F + ||E_2 − S_2 E_2||²_F)   (13)

where δ is a hyper-parameter that balances the similarities between the KGs and their inner similarities, E_i denotes the matrix of entity vectors of one KG (each row an entity vector), S is the cross-KG similarity matrix and S_1, S_2 the within-KG similarity matrices. S E_2 computes the latent vectors of entities in the first KG by accumulating the vectors of entities in the second, weighted by their cosine similarities. To preserve both the structure and the attribute information, the following objective function is minimized:

O = O_SE + δ O_S   (14)

The model is evaluated on the DBP15K dataset, which is composed of the Chinese, Japanese, French, and English DBpedia versions and is used for cross-lingual entity alignment from the different languages to English. The attribute embedding provides additional information, which leads to better performance on the entity alignment task compared to the previous methods.
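JAPE's similarity-based refinement, approximating each entity of one KG by a similarity-weighted combination of the other KG's entity vectors, can be sketched as follows. Row-normalizing the cosine similarities is an illustrative choice, and the toy vectors are not from the paper.

```python
import numpy as np

def cosine_sim_matrix(E1, E2):
    """Pairwise cosine similarities between two sets of entity vectors."""
    A = E1 / np.linalg.norm(E1, axis=1, keepdims=True)
    B = E2 / np.linalg.norm(E2, axis=1, keepdims=True)
    return A @ B.T

def latent_vectors(E1, E2):
    """Latent vectors for KG1 entities, accumulated from KG2 entity
    vectors weighted by their (row-normalized) cosine similarities."""
    S = cosine_sim_matrix(E1, E2)
    S = S / S.sum(axis=1, keepdims=True)
    return S @ E2

# Toy entity matrices: each KG1 entity is most similar to one KG2 entity.
E1 = np.array([[1.0, 0.0], [0.0, 1.0]])
E2 = np.array([[2.0, 0.0], [0.0, 3.0]])
L1 = latent_vectors(E1, E2)
```

With orthogonal toy vectors, each latent vector collapses onto the single most similar counterpart.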

Iterative TransE (ITransE) [zhu2017iterative] first learns both entity and relation embeddings based on aligned entity seeds using TransE, followed by a mapping of the embeddings from the different KGs. A prerequisite of this model is that both KGs must contain the same relations. The model introduces a translation model, a linear transformation model, and a parameter-sharing model. In the translation model, given two aligned entities e in G1 and e' in G2, it is assumed that there exists an alignment relation r^(G1→G2) such that e + r^(G1→G2) ≈ e'. Therefore, the joint-embedding energy is defined by

E(e, e') = ||e + r^(G1→G2) − e'||   (15)

where e and e' run over the entity sets of G1 and G2, respectively. In the linear transformation model, for two aligned entities as above, a transformation matrix M is learned so that M e ≈ e'. Hence the energy function is defined by

E(e, e') = ||M e − e'||   (16)

For both of the aforementioned models, the score function is defined as the weighted sum of the energy functions over the alignment seeds,

J = λ Σ_{(e,e')∈L} E(e, e')   (17)

where λ is the weighting factor and L is the set of aligned seeds. In the parameter-sharing model, on the other hand, each aligned entity pair shares a single embedding, i.e., e ≡ e', and the distance between a non-aligned pair is measured by

D(e, e') = ||e − e'||   (18)

Since no regularization is involved in this model, the scoring function is the distance itself. Hence, for each non-aligned entity in one KG, the nearest non-aligned entity from the other KG, ê' = argmin_{e'} D(e, e'), is the newly found counterpart. These newly discovered entity alignments are added to the initial set of alignment seeds to update the joint embedding. There are two variations of this inclusion: if all the new pairs are added to the seed set, it is referred to as hard alignment; if a reliability score is attached to each newly discovered aligned pair when it is added, it is called soft alignment. The experiments on the entity alignment task have been carried out on four subsets of the FB15K dataset containing overlapping triples. The parameter-sharing model outperforms the linear transformation and translation models.
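The iterative part of ITransE, growing the seed set with newly discovered nearest-neighbor pairs, can be sketched as a greedy loop. The distance threshold below plays the role of a reliability check, as in soft alignment; all names and values are illustrative.

```python
import numpy as np

def expand_seeds(E1, E2, seeds, threshold=0.5):
    """One bootstrap round: for each non-aligned KG1 entity, take the
    nearest non-aligned KG2 entity; accept the pair if close enough."""
    aligned1 = {i for i, _ in seeds}
    aligned2 = {j for _, j in seeds}
    new_pairs = []
    for i in range(len(E1)):
        if i in aligned1:
            continue
        cands = [j for j in range(len(E2)) if j not in aligned2]
        if not cands:
            break
        dists = [float(np.linalg.norm(E1[i] - E2[j])) for j in cands]
        k = int(np.argmin(dists))
        if dists[k] < threshold:
            new_pairs.append((i, cands[k]))
            aligned2.add(cands[k])   # keep the alignment one-to-one
    return seeds + new_pairs

E1 = np.array([[0.0, 0.0], [1.0, 0.0]])
E2 = np.array([[0.1, 0.0], [1.05, 0.0]])
result = expand_seeds(E1, E2, seeds=[(0, 0)])
```

In a full implementation this loop alternates with re-training the embeddings on the enlarged seed set.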

Another method has been proposed for cross-lingual KG alignment using Graph Convolutional Networks (GCNs) [wang2018cross]. This method trains GCNs to embed the entities of each language into a unified vector space using pre-aligned entities; relation alignment across the KGs is not required. Entity embeddings are learned from both the structural and the attribute information of the entities, and alignments are discovered based on the distances between entities in the vector space. Two GCN models process the two KGs independently to generate the embeddings, sharing the same weight matrices. The first layer of each GCN transforms the input attribute feature vectors, and the two GCNs generate attribute embeddings of the same dimensionality. Entity alignments are predicted based on the distances between entities from the two KGs in the GCN representation space. For an entity e1 in G1 and e2 in G2, the distance measure is computed by

D(e1, e2) = β f(h_s(e1), h_s(e2)) / d_s + (1 − β) f(h_a(e1), h_a(e2)) / d_a   (19)

where f(x, y) = ||x − y||_1, h_s and h_a denote the structure and attribute embeddings of an entity, d_s and d_a denote the dimensions of the structure and attribute embeddings, respectively, and β is a hyper-parameter that balances the importance of the two kinds of embeddings. The GCN models are trained by minimizing margin-based ranking loss functions over a set of pre-aligned entities as training data, one for the structure embeddings,

L_s = Σ_{(e,e')∈S} Σ_{(e⁻,e⁻')∈S'} [f(h_s(e), h_s(e')) + γ_s − f(h_s(e⁻), h_s(e⁻'))]_+   (20)

and one for the attribute embeddings,

L_a = Σ_{(e,e')∈S} Σ_{(e⁻,e⁻')∈S'} [f(h_a(e), h_a(e')) + γ_a − f(h_a(e⁻), h_a(e⁻'))]_+   (21)

where S' denotes the set of negative (corrupted) entity pairs and γ_s, γ_a are margin hyper-parameters separating positive from negative entity alignments. The experiments on the alignment task have been performed on the DBP15K dataset for the language pairs Chinese - English, Japanese - English, and French - English.
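The combined distance mixing structure and attribute embeddings can be sketched directly; beta and the toy vectors below are illustrative values, not from the paper.

```python
import numpy as np

def alignment_distance(hs1, hs2, ha1, ha2, beta=0.9):
    """Weighted L1 distance over structure (hs) and attribute (ha)
    embeddings, each normalized by its dimensionality."""
    hs1, hs2 = np.asarray(hs1), np.asarray(hs2)
    ha1, ha2 = np.asarray(ha1), np.asarray(ha2)
    d_s, d_a = len(hs1), len(ha1)
    return (beta * float(np.abs(hs1 - hs2).sum()) / d_s
            + (1 - beta) * float(np.abs(ha1 - ha2).sum()) / d_a)

# beta = 0.5 weights both parts equally in this toy example.
d = alignment_distance([0.0, 0.0], [1.0, 1.0], [0.0], [2.0], beta=0.5)
```

Candidate alignments are then ranked by this distance, smallest first.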

Bootstrapping Entity Alignment (BootEA) [sun2018bootstrapping] models alignment as a classification problem that seeks to maximize the alignment likelihood over all labeled and unlabeled entities based on KG embeddings. Let X and Y be the entity sets of G1 and G2, respectively. Only one-to-one alignment is considered, i.e., entities in Y are used to label entities in X, and at most one label is assigned to each entity in X. In order to ensure that positive triples (aligned entities) have low scores and negative triples have high scores, the following limit-based objective function is proposed:

O_e = Σ_{tr∈T⁺} [f(tr) − γ1]_+ + μ Σ_{tr'∈T⁻} [γ2 − f(tr')]_+   (22)

where γ1 and γ2 are two limit hyper-parameters, μ is a balance hyper-parameter, and T⁺ and T⁻ denote the sets of positive and negative triples, respectively. In this way, the drift of the entity embeddings in the unified space is reduced and the common semantics of the two KGs are better captured. To leverage prior alignments for bridging the different KGs, aligned entities are swapped in their triples to calibrate the embeddings of G1 and G2 in the unified embedding space. However, prior alignments are often inadequate, which is overcome by bootstrapping. Obeying the one-to-one alignment constraint, the labeling of alignments in the t-th iteration is obtained by solving

max Σ_{x∈X} Σ_{y∈C_x} π(y | x; Θ^(t)) Δ^(t)_{x,y}   (23)

where Θ^(t) denotes the entity embeddings at the t-th iteration, C_x denotes the candidate labels of x, π(y | x; Θ^(t)) the probability of labeling x with y, and Δ^(t)_{x,y} is an indicator function: Δ^(t)_{x,y} = 1 if x is labeled as y at the t-th iteration and 0 otherwise. The model is evaluated on DBP15K and on DWY100K, which comprises DBP-WD (DBpedia and Wikidata) and DBP-YG (DBpedia and YAGO). It outperforms the state-of-the-art methods MTransE, ITransE, and JAPE.
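The limit-based objective differs from a plain margin loss in that it bounds positive and negative scores by absolute limits rather than only by their difference. A sketch with illustrative hyper-parameter values:

```python
def limit_based_loss(pos_scores, neg_scores, gamma1=0.2, gamma2=2.0, mu=0.5):
    """Push positive-triple scores below gamma1 and negative-triple
    scores above gamma2; mu balances the two terms."""
    pos = sum(max(0.0, s - gamma1) for s in pos_scores)
    neg = sum(max(0.0, gamma2 - s) for s in neg_scores)
    return pos + mu * neg

ok = limit_based_loss([0.1], [3.0])    # both limits satisfied -> 0.0
bad = limit_based_loss([0.5], [1.0])   # both limits violated
```

Because each score is compared to a fixed limit, embeddings of aligned entities cannot drift apart even when the relative margin is already satisfied.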

The attribute-embedding-based entity alignment framework [trisedya2019entity] consists of three components, namely predicate alignment, embedding learning, and entity alignment. The predicate alignment module merges the two KGs into one KG by renaming partially matched predicates, using a similarity threshold of 0.95. The embedding learning phase comprises structure embedding and attribute character embedding. TransE is adapted for the structure embedding of the newly merged KG by minimizing the objective

J_SE = Σ_{tr∈T} Σ_{tr'∈T'} [γ + α (f(tr) − f(tr'))]_+   (24)

with

α = count(r) / |T|   (25)

where T and T' are the sets of valid and corrupted relation triples, respectively, count(r) is the total number of occurrences of a relation r, |T| is the total number of triples in the merged KG, and α is the introduced weight that controls the embedding learning over the triples. The attribute character embedding is learned with an analogous objective,

J_CE = Σ_{tr∈T_a} Σ_{tr'∈T_a'} [γ + α (f(tr) − f(tr'))]_+   (26)

f(tr) = ||h + r − f_a(v)||   (27)

f_a(v) = c_1 + c_2 + … + c_n (SUM)  or  f_a(v) = LSTM(c_1, …, c_n)   (28)

where T_a and T_a' are the sets of valid and corrupted attribute triples and f_a is a compositional function over the characters c_1, …, c_n of the attribute value v. SUM is one such compositional function, which sums up the character embeddings of the attribute value; by contrast, the LSTM-based function converts the sequence of characters into a vector. Finally, the attribute character embedding is used to shift the structure embedding into the same vector space by minimizing

J_SIM = Σ_e [1 − cos(h_se, h_ce)]   (29)

where h_se and h_ce denote the structure and attribute-character embeddings of entity e. The overall objective function is given by

J = J_SE + J_CE + J_SIM   (30)

The final module addresses the entity alignment using

ê' = argmax_{e'∈G2} cos(e, e')   (31)

i.e., for an entity e in G1, the similarities between e and all entities of G2 are computed, and (e, ê') is the expected pair of aligned entities. Additionally, to improve the character embeddings, attribute triple enrichment is performed via the transitivity rule. The model is evaluated on datasets generated from multiple heterogeneous KGs, i.e., DBpedia, YAGO, and Geonames, and outperforms the current state-of-the-art methods.
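The SUM compositional function mentioned above is easy to illustrate, and the illustration also exposes its main weakness, order-insensitivity, which the LSTM variant addresses. The character vectors below are toy values.

```python
import numpy as np

def sum_char_embedding(value, char_vectors):
    """SUM compositional function: embed an attribute value as the sum
    of its character embeddings."""
    return sum(char_vectors[c] for c in value)

char_vectors = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
v_ab = sum_char_embedding("ab", char_vectors)
v_ba = sum_char_embedding("ba", char_vectors)  # identical: order is lost
```

Since "ab" and "ba" receive the same embedding, attribute values that differ only in character order become indistinguishable under SUM.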

Challenges.

The challenges of the above-mentioned models are: (i) They are supervised and require a set of aligned entities or triples as seeds for training, meaning that parallel data is necessary for these methods to work. (ii) Some of the models, such as ITransE, require the relations to be aligned between the KGs; however, for heterogeneous KGs such as DBpedia and Wikidata, which consist of different sets of relations, obtaining a pre-aligned set of relations is challenging. (iii) Equivalence links between relations (such as owl:equivalentProperty) exist across KGs, and none of the aforementioned methods exploit them for better alignment. (iv) The methods that consider attribute information for the alignment fail to exploit the types of the attributes, such as text literals, numeric literals, etc. (v) The methods lack proper mechanisms to handle multi-valued object and attribute relations.

3.2 Entity - Word Embedding Alignment

The CONV-augmented model [toutanova2015representing] learns embeddings of words and entities in a single vector space using a Convolutional Neural Network (CNN) together with DistMult [yang2014embedding] for KG completion. The CNN is trained on lexicalized dependency paths generated from the textual data, which are treated as sequences of words; the sentences are annotated with entities from the KG. The loss function is motivated by the link prediction task and takes the form of a negative log-likelihood,

L(T) = −Σ_{(s,r,o)∈T} [log P(o | s, r) + log P(s | r, o)]   (32)

where T is a set of triples and s and o denote the subject and object entities. Let T_KB and T_text represent the sets of knowledge base triples and textual relation triples, respectively. The final loss function is then defined as

L = L(T_KB) + τ L(T_text) + λ ||Θ||²   (33)

where λ is the regularization parameter and τ is the weighting factor for the textual relations. The CNN model thus learns the latent representations of words and entities from the annotated dataset generated from FB15K-237 and ClueWeb12 [gabrilovich2013facc1].

Similarly, [han2016joint] uses TransE and a CNN for joint learning of entities and words in a unified vector space. TransE is used to learn the representations of the entities and relations of the KG, with the loss function

L_KG = Σ_{(h,r,t)∈T} Σ_{(h',r',t')∈T'} [γ + ||h + r − t|| − ||h' + r' − t'||]_+   (34)

where [x]_+ = max(0, x) indicates keeping the positive part, γ is a margin, and T' is the set of incorrect triples. To encode the text, given a sentence containing an entity pair with relation r, the model takes the word embeddings of the sentence as input and, after passing them through the CNN, outputs the embedding r_s of the textual relation. The model then minimizes the loss between r and r_s,

L_s = ||r − r_s||   (35)

and the loss over all sentences in the text dataset D is

L_text = Σ_{s∈D} L_s   (36)

For a word in a given sentence, its input embedding is the composition of its textual word embedding w and its position embedding p, where the position embedding is generated from the position of the word in the sentence and the textual word embeddings are obtained by pre-training on the text with the skip-gram model. The CNN comprises an input layer, a convolutional layer, and a pooling layer. The model learns the embeddings using the CNN on Freebase and New York Times articles.
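The per-token CNN input described above, a word embedding concatenated with a position embedding, can be sketched as follows; the table sizes and dimensions are illustrative, not from the paper.

```python
import numpy as np

def input_embeddings(word_vecs, pos_table):
    """Per-token CNN input: [word embedding ; embedding of the token's
    position in the sentence]."""
    return np.stack([np.concatenate([w, pos_table[i]])
                     for i, w in enumerate(word_vecs)])

word_vecs = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # 3 tokens, dim 2
pos_table = np.array([[0.0], [1.0], [2.0]])                 # position dim 1
X = input_embeddings(word_vecs, pos_table)                  # shape (3, 3)
```

The resulting matrix is what the convolutional layer slides over to produce the textual relation embedding.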

[yamada2016joint] comprises three models, namely a skip-gram model for word similarity, a KB model, and an anchor context model, to jointly map entities and words into a continuous vector space. Training maximizes the sum of the three objective functions,

L = L_w + L_e + L_a   (37)

yielding a single matrix that embeds both the entities and the words. The model extends the skip-gram model by using (i) the KB graph model, which learns the relations between entities using the link structure, and (ii) the anchor context model, which places similar words and entities closer in the vector space by leveraging the KG anchors and context words. Given a sequence of words w_1, …, w_T, the skip-gram model aims to maximize

L_w = Σ_{t=1}^{T} Σ_{−c ≤ j ≤ c, j ≠ 0} log P(w_{t+j} | w_t)   (38)

where c is the size of the context window and w_t and w_{t+j} denote the target and the context word, respectively. In the KB model, entities with similar incoming links are placed closer to each other in the vector space:

L_e = Σ_{e_i ∈ E} Σ_{e_o ∈ C(e_i)} log P(e_o | e_i)   (39)

which is used to predict the incoming links C(e_i) given an entity e_i. Finally, the anchor context model is trained to predict the context words of an entity pointed to by a target anchor:

L_a = Σ_{(e_i, Q) ∈ A} Σ_{w_o ∈ Q} log P(w_o | e_i)   (40)

where A denotes the set of anchors in the KB, each of which contains a pair of a referent entity e_i and a set Q of its context words. Wikipedia is used as the KG to train the model, which is evaluated on the CoNLL and TAC2010 datasets for the named entity disambiguation task. However, this approach suffers from ambiguity, i.e., the same words or phrases can refer to different entities and vice versa.

To address this problem, the Multi-Prototype Mention Embedding (MPME) model [cao2017bridge] has been proposed, which learns multiple sense embeddings for each entity mention by jointly modeling words from textual contexts and entities derived from a KG. Furthermore, a language model is used to disambiguate each mention to a sense. The goal of training MPME is to maximize a joint objective and to iteratively update three types of embeddings: the word embeddings, the entity embeddings, and the word-mention (sense) representations. For the entity representation, the skip-gram model is extended to the entity network by maximizing the log-probability of an entity being a neighbor entity, since neighboring entities play a role similar to that of context words in the skip-gram model. Next, given an anchor and its context words, the mention-sense embeddings are combined with the context word embeddings to predict the referent entity by extending the CBOW model; if two mentions refer to similar entities and share similar contexts, they tend to be close in the vector space. Finally, given the annotated corpus obtained from the anchors of Wikipedia articles, each word or mention sense is used to predict its context words by maximizing a skip-gram-style objective. The model is trained on Wikipedia and has been evaluated on entity linking, entity relatedness, and word analogy tasks.

[cao2018joint] proposes joint representation learning of cross-lingual words and entities via distant supervision, using multilingual KGs. The framework is built on the assumption that the more words and/or entities two contexts share, the more similar they are. It comprises two steps: (i) cross-lingual supervision data generation, which builds a bilingual network and generates comparable sentences based on cross-lingual links, and (ii) joint representation learning, which embeds the cross-lingual entities and words into one vector space. Two regularizers are used to align the cross-lingual words and entities. The cross-lingual entity regularizer pulls together the embeddings of entities in different languages that are linked as cross-lingual counterparts, based on their shared neighbor entities. The second, the cross-lingual sentence regularizer, exploits comparable sentences, which provide cross-lingual co-occurrences of words: words that frequently co-occur are encouraged to have similar embeddings by minimizing the Euclidean distance between the corresponding sentence embeddings. This model is also trained on Wikipedia and has been evaluated for natural language translation.

Challenges.

The challenges of the entity - word alignment models are as follows: (i) DNN-based models suffer from the huge vocabulary size of words and entities as well as from expensive training. (ii) The models are supervised and require manual annotation of the text with the entities from the KG. (iii) Most of the models use Wikipedia as the KG, generate an entity graph based on the Wikipedia hyperlinks, and propose different ways of identifying the relevant context information; as a result, only context words are projected into the vector space. (iv) Models that consider external information from KGs such as Freebase use only a fraction of the triples available for the entities; hence, not all the information encoded in the KG for an entity is exploited.

4 Results and Discussion

This section summarizes the results of the methods discussed in this survey, as reported in their respective studies. The methods have been evaluated on the following tasks: entity alignment, KG completion, and named entity disambiguation (NED).

Datasets: The entity-entity alignment models are evaluated on real KGs, namely the DBpedia (DBP), LinkedGeoData (LGD), Geonames (GEO), and YAGO datasets. As discussed in [trisedya2019entity], three models, JAPE, MTransE, and entity alignment using attribute embeddings, are tested on DBP-LGD, DBP-GEO, and DBP-YAGO, which contain aligned entities (http://downloads.dbpedia.org/2016-10/links/). The DBP-YAGO dataset contains 15,000 aligned entities, whereas DBP-LGD and DBP-GEO contain 10,000 aligned entities each. In addition, the cross-lingual entity alignment task has been performed using the models MTransE, ITransE, JAPE, and BootEA on the DBP15K dataset. The statistics of these datasets are given in Table 1.

Dataset          | Part     | Entities | Attribute Triples | Relationship Triples
-----------------|----------|----------|-------------------|---------------------
DBP-LGD          | LGD      | 24,309   | 90,054            | 10,084
                 | DBP      | 22,748   | 166,008           | 19,594
DBP-GEO          | GEO      | 21,794   | 98,790            | 17,410
                 | DBP      | 22,748   | 166,008           | 19,594
DBP-YAGO         | YAGO     | 30,628   | 173,309           | 38,451
                 | DBP      | 33,627   | 184,672           | 36,906
DBP15K (ZH-EN)   | Chinese  | 66,469   | 379,684           | 153,929
                 | English  | 98,125   | 567,755           | 237,674
DBP15K (JA-EN)   | Japanese | 65,744   | 354,619           | 164,373
                 | English  | 95,680   | 497,230           | 233,319
DBP15K (FR-EN)   | French   | 66,858   | 582,665           | 192,191
                 | English  | 105,889  | 576,543           | 278,590

Table 1: Dataset Statistics
Models                     | DBP-LGD              | DBP-GEO              | DBP-YAGO
                           | Hits@1 Hits@10 MR    | Hits@1 Hits@10 MR    | Hits@1 Hits@10 MR
MTransE                    | 33.29  34.32   10194 | 33.34  33.98   10240 | 33.46  34.32   7105
JAPE                       | 33.33  33.35   5104  | 33.35  33.75   5088  | 33.35  33.37   5296
AttrE [trisedya2019entity] | 84.27  91.85   53    | 87.61  92.15   80    | 89.69  95.83   23

Models  | DBP15K (ZH-EN)       | DBP15K (JA-EN)       | DBP15K (FR-EN)
        | Hits@1 Hits@10 MRR   | Hits@1 Hits@10 MRR   | Hits@1 Hits@10 MRR
MTransE | 30.83  61.41   0.364 | 27.86  57.45   0.349 | 24.41  55.55   0.335
ITransE | 40.59  73.47   0.516 | 36.69  69.26   0.474 | 33.30  68.54   0.451
JAPE    | 41.18  74.46   0.49  | 36.25  68.50   0.476 | 32.39  66.68   0.43
BootEA  | 62.94  84.75   0.703 | 62.23  85.39   0.701 | 65.30  87.44   0.731
GCN     | 41.25  74.38   -     | 39.91  74.46   -     | 37.29  74.49   -

Table 2: Entity Alignment Task

Intuitively, it is easier to align entities with more content, and this is reflected in the results presented in Table 2. Attribute information plays a vital role in the alignment task, as it captures more of the semantics of the entities in a KG; hence, the attribute-embedding model of [trisedya2019entity] and BootEA perform better than the other models. It has also been observed that the more aligned seeds there are, the better the performance of the model. The main advantage of the GCN model is that a huge number of aligned seeds is not required for training. The MRR values for the GCN model are not provided in the original paper and are therefore not presented in Table 2.

On the other hand, the presented entity-word embedding space alignment models are evaluated on different tasks, such as entity relatedness, entity linking, NED, KG completion, and word translation. Since each model is trained with a focus on a certain task, no two of the aforementioned models use the same dataset. However, both the CONV-augmented model [toutanova2015representing] and the joint model of TransE with a CNN [han2016joint] focus on the link prediction task, with the CONV-augmented model using the FB15K-237 dataset and the latter using FB15K. Both models work better when textual data is included. For NED, [yamada2016joint] uses the CoNLL and TAC2010 datasets, and its results outperform the state-of-the-art NED models. MPME [cao2017bridge] is evaluated on entity relatedness and word analogical reasoning. Lastly, the model of [cao2018joint] is compared with MPME on the entity relatedness task for multilingual datasets and works slightly better than MPME.

5 Conclusion

This paper presents a comprehensive analysis and comparison of the existing models for aligning embedding spaces with respect to the research questions raised in Sect. 2, and this section summarizes the lessons learnt from that comparison. To encode the structural differences of the input sources in entity - word vector spaces, most methods exploit the structure of the links between entities in Wikipedia and propose different ways of capturing the contextual information of an entity mention in the text; some models are additionally trained with pre-aligned triples and sentences. However, the relations between entities that are encoded in the KGs remain unexplored by most of these methods. On the other hand, models for the vector space alignment of entities across multiple KGs follow supervised learning mechanisms and require a set of pre-aligned entities or triples as initial seeds. Not all the information encoded in the KGs is exploited by these models, for example multi-valued object and attribute relations. Regarding RQ3, the methods also do not consider the equivalent relations across KGs. As future work, these models could be exploited further to build various NLP applications such as machine translation. Furthermore, the challenges identified in this paper point to promising research directions in the alignment of embedding spaces.

References