Learning Multilingual Word Embeddings in a Latent Metric Space: A Geometric Approach

08/27/2018 ∙ by Pratik Jawanpuria et al. ∙ Indian Institute of Technology Madras ∙ Microsoft

We propose a novel geometric approach for learning bilingual mappings given monolingual embeddings and a bilingual dictionary. Our approach decouples the source-to-target transformation into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings. We model the bilingual mapping problem as an optimization problem on smooth Riemannian manifolds. We show that our approach outperforms previous approaches on the bilingual lexicon induction and cross-lingual word similarity tasks. Since the rotated embeddings live in a common latent space, our approach easily extends to representing multiple languages in a common space, and these multilingual embeddings can be learned jointly given bilingual dictionaries for multiple language pairs. We demonstrate the effectiveness of the multilingual embeddings in a zero-shot word translation setting: when no source-target bilingual dictionary is available but source-pivot and pivot-target dictionaries are, word translation using these multilingual embeddings is better than word translation using a pivot language.


1 Introduction

Bilingual word embeddings are a useful tool in NLP that has attracted a lot of interest lately, due to a fundamental property: similar concepts/words across different languages are mapped close to each other in a common embedding space. Hence, they are useful for joint/transfer learning and sharing annotated data across languages in different NLP applications like machine translation (Gu et al., 2018), building bilingual dictionaries (Mikolov et al., 2013b), mining parallel corpora (Conneau et al., 2018), text classification (Klementiev et al., 2012), sentiment analysis (Zhou et al., 2015), and dependency parsing (Ammar et al., 2016).

Mikolov et al. (2013b) empirically show that a linear transformation of embeddings from one language to another preserves the geometric arrangement of word embeddings. In a supervised setting, the transformation matrix W is learned given a small bilingual dictionary and the corresponding monolingual embeddings. Subsequently, many refinements to the bilingual mapping framework have been proposed (Xing et al., 2015; Smith et al., 2017b; Conneau et al., 2018; Artetxe et al., 2016, 2017, 2018a, 2018b).

In this work, we propose a novel geometric approach for learning bilingual embeddings. We rotate the source and target language embeddings from their original vector spaces to a common latent space via language-specific orthogonal transformations. Furthermore, we define a similarity metric, the Mahalanobis metric, in this common space to refine the notion of similarity between a pair of embeddings. We achieve the above by factorizing the transformation matrix as W = U_t B U_s^T, where U_t and U_s are the orthogonal transformations for the target and source language embeddings, respectively, and B is a positive definite matrix representing the Mahalanobis metric.

The proposed formulation has the following benefits:

  The learned similarity metric allows for a more effective similarity comparison of embeddings based on evidence from the data.

  A common latent space decouples the source and target language transformations, and naturally enables representation of word embeddings from both languages in a single vector space.

  We also show that the proposed method can be easily generalized to jointly learn multilingual embeddings, given bilingual dictionaries of multiple language pairs. We map multiple languages into a single vector space by learning the characteristics common across languages (the similarity metric) as well as language specific attributes (the orthogonal transformations).

The optimization problem resulting from our formulation involves orthogonality constraints on the language-specific transformations (U_i for language i) as well as the symmetric positive-definite constraint on the metric B. Instead of solving the optimization problem in the Euclidean space with constraints, we view it as an optimization problem on smooth Riemannian manifolds, which are well-studied topological spaces (Lee, 2003). The Riemannian optimization framework embeds the given constraints into the search space, and conceptually views the problem as an unconstrained optimization problem over the manifolds.

We evaluate our approach on different bilingual as well as multilingual tasks across multiple languages and datasets. The following is a summary of our findings:

  Our approach outperforms state-of-the-art supervised and unsupervised bilingual mapping methods on the bilingual lexicon induction as well as the cross-lingual word similarity tasks.

  An ablation analysis reveals that the following contribute to our model’s improved performance: (a) aligning the embedding spaces of different languages, (b) learning a similarity metric which induces a latent space, (c) performing inference in the induced latent space, and (d) formulating the tasks as a classification problem.

  We evaluate our multilingual model on an indirect word translation task: translation between a language pair that does not have a bilingual dictionary, but the source and target languages each possess a bilingual dictionary with a third, common pivot language. Our multilingual model outperforms a strong unsupervised baseline as well as methods based on adapting bilingual methods for this indirect translation task.

  Lastly, we propose a semi-supervised extension of our approach which further improves performance over the supervised approaches.

The rest of the paper is organized as follows. Section 2 discusses related work. The proposed framework, including problem formulations for bilingual and multilingual mappings, is presented in Section 3. The proposed Riemannian optimization algorithm is described in Section 4. In Section 5, we discuss our experimental setup. Section 6 presents the results of experiments on direct translation with our algorithms and analyzes the results. Section 7 presents experiments on indirect translation using our generalized multilingual algorithm. We discuss a semi-supervised extension to our framework in Section 8. Section 9 concludes the paper.

2 Related Work

Bilingual embeddings. Mikolov et al. (2013b) show that a linear transformation from embeddings of one language to another can be learned from a bilingual dictionary and corresponding monolingual embeddings by performing linear least-squares regression. A popular modification to this formulation constrains the transformation matrix to be orthogonal (Xing et al., 2015; Smith et al., 2017b; Artetxe et al., 2018a). This is known as the orthogonal Procrustes problem (Schönemann, 1966). Orthogonality preserves monolingual distances and ensures the transformation is reversible. Lazaridou et al. (2015) and Joulin et al. (2018) optimize alternative loss functions in this framework.
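The orthogonal Procrustes problem mentioned above has a closed-form solution via SVD. The following is a minimal numpy sketch (matrix names are illustrative, not from any particular implementation):

```python
import numpy as np

def procrustes(X_src, X_tgt):
    """Closed-form solution of the orthogonal Procrustes problem:
    the orthogonal W minimizing ||X_src @ W - X_tgt||_F for
    row-aligned embedding matrices (one translation pair per row)."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

# Sanity check: recover a known rotation from noiseless synthetic data.
rng = np.random.default_rng(0)
d = 5
R, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal "truth"
X_src = rng.standard_normal((100, d))
X_tgt = X_src @ R
W = procrustes(X_src, X_tgt)
print(np.allclose(W, R))  # True
```

With noiseless, exactly-rotated data the recovered map matches the true rotation; with a real bilingual dictionary it is only a least-squares fit.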

Artetxe et al. (2018a) improve upon the Procrustes solution and propose a multi-step framework consisting of a series of linear transformations of the data. Faruqui and Dyer (2014) use Canonical Correlation Analysis (CCA) to learn linear projections from the source and target languages to a common space such that correlations between the embeddings projected to this space are maximized. Procrustes-based approaches have been shown to perform better than CCA-based approaches (Artetxe et al., 2016, 2018a).

We view the problem of mapping the source and target language word embeddings as (a) aligning the two language spaces, and (b) learning a similarity metric in this (learned) common space. We accomplish this by learning suitable language-specific orthogonal transformations (for alignment) and a symmetric positive-definite matrix (as the Mahalanobis metric). The similarity metric is useful in addressing the limitations of mapping to a common latent space under orthogonality constraints, an issue discussed by Doval et al. (2018). While Doval et al. (2018) learn a second correction transformation by assuming the average of the projected source and target embeddings to be the true latent representation, we make no such assumption and learn the similarity metric from the data. Recently, Kementchedjhieva et al. (2018) employed the generalized Procrustes analysis (GPA) method (Gower, 1975) for the bilingual mapping problem. GPA maps both the source and target language embeddings to a latent space, which is constructed by averaging over the two language spaces.

Unsupervised methods have shown promising results, matching supervised methods in many studies. Artetxe et al. (2017) proposed a bootstrapping method for the bilingual lexicon induction problem using a small seed bilingual dictionary. Subsequently, Artetxe et al. (2018b) and Hoshen and Wolf (2018) have proposed initialization methods that eliminate the need for a seed dictionary. Zhang et al. (2017b) and Grave et al. (2018) proposed aligning the source and target language word embeddings by optimizing the Wasserstein distance. Unsupervised methods based on adversarial training objectives have also been proposed (Barone, 2016; Zhang et al., 2017a; Conneau et al., 2018; Chen and Cardie, 2018). A recent work by Søgaard et al. (2018) discusses cases in which unsupervised bilingual lexicon induction does not lead to good performance.

Multilingual embeddings. Ammar et al. (2016) and Smith et al. (2017a) adapt bilingual approaches for representing embeddings of multiple languages in a common vector space by designating one of the languages as a pivot language. In this simple approach, bilingual mappings are learned independently from all other languages to the pivot language. The GPA-based method (Kementchedjhieva et al., 2018) may also be used to jointly transform multiple languages to a common latent space. However, it requires an n-way dictionary to represent n languages. In contrast, the proposed approach requires only pairwise bilingual dictionaries such that every language under consideration is represented in at least one bilingual dictionary.

The above-mentioned approaches are referred to as offline since the monolingual and bilingual embeddings are learned separately. In contrast, online approaches directly learn bilingual/multilingual embeddings from parallel corpora (Hermann and Blunsom, 2014; Huang et al., 2015; Duong et al., 2017), optionally augmented with monolingual corpora (Klementiev et al., 2012; Chandar et al., 2014; Gouws et al., 2015). In this work, we focus on offline approaches.

3 Learning Latent Space Representation

In this section, we first describe the proposed geometric framework to learn bilingual embeddings. We then present its generalization to the multilingual setting.

3.1 Geometry-aware Factorization

We propose to transform the word embeddings from the source and target languages to a common space in which the similarity of word embeddings may be better learned. To this end, we align the source and target language embedding spaces by learning language-specific rotations: U_s ∈ O(d) and U_t ∈ O(d) for the source and target language embeddings, respectively. Here O(d) represents the space of d-dimensional orthogonal matrices. An embedding x_s in the source language is thus transformed to U_s^T x_s. Similarly, for an embedding x_t in the target language: U_t^T x_t. These orthogonal transformations map (align) both the source and target language embeddings to a common space in which we learn a data-dependent similarity measure, as discussed below.
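Because U_s and U_t are orthogonal, these rotations change coordinates without distorting the geometry of each monolingual space. A quick numeric check of this property (numpy, illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
# Random orthogonal matrix via QR factorization, standing in for U_s.
U_s, _ = np.linalg.qr(rng.standard_normal((d, d)))
x, y = rng.standard_normal(d), rng.standard_normal(d)

# Rotation by U_s^T preserves inner products and norms.
print(np.isclose(x @ y, (U_s.T @ x) @ (U_s.T @ y)))          # True
print(np.isclose(np.linalg.norm(x), np.linalg.norm(U_s.T @ x)))  # True
```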

We learn a Mahalanobis metric B to refine the notion of similarity between the two transformed embeddings U_s^T x_s and U_t^T x_t. (The Mahalanobis metric generalizes the notion of cosine similarity: for two unit-normalized vectors u and v, their cosine similarity is given by u^T I v, where I is the identity matrix; if the space is instead endowed with a metric B, the similarity becomes u^T B v.) The Mahalanobis metric incorporates the feature correlation information from the given training data. This allows a more effective similarity comparison of language embeddings than the cosine similarity. In fact, the Mahalanobis similarity measure reduces to cosine similarity when the features are uncorrelated and have unit variance, which may be a strong assumption in real-world applications.

Søgaard et al. (2018) have argued that monolingual embedding spaces across languages are not necessarily isomorphic, hence learning an orthogonal transformation alone may not be sufficient. A similarity metric learned from the data may mitigate this limitation to some extent by learning a correction in the latent space.

Since B is a Mahalanobis metric in d-dimensional space, it is a symmetric positive-definite matrix, i.e., B ≻ 0. The similarity between the embeddings x_s and x_t in the proposed setting is computed as x_t^T U_t B U_s^T x_s. The source-to-target transformation is expressed as W = U_t B U_s^T. For an embedding x_s in the source language, its transformation to the target language space is given by W x_s = U_t B U_s^T x_s.

The proposed factorization of the transformation W = U_t B U_s^T, where U_s, U_t ∈ O(d) and B ≻ 0, is sometimes referred to as the polar factorization of a matrix (Bonnabel and Sepulchre, 2010; Meyer et al., 2011). Polar factorization is similar to the singular value decomposition (SVD). The key difference is that SVD enforces the middle factor to be a diagonal matrix with non-negative entries, which accounts for only axis rescaling instead of full feature correlation and is more difficult to optimize (Mishra et al., 2014; Harandi et al., 2017).

3.2 Latent Space Interpretation

Computing the Mahalanobis similarity measure is equivalent to computing the cosine similarity in a special latent (feature) space. This latent space is defined by the matrix B^(1/2), the square root of B, via the mapping z ↦ B^(1/2) z. Since B is a symmetric positive-definite matrix, B^(1/2) is well-defined and unique.

Hence, our model may equivalently be viewed as learning a suitable latent space as follows. The source and target language embeddings are linearly transformed as T_s(x_s) = B^(1/2) U_s^T x_s and T_t(x_t) = B^(1/2) U_t^T x_t, respectively. The functions T_s and T_t map the source and target language embeddings, respectively, to a common latent space. We learn the matrices U_s, U_t, and B corresponding to these transformations. Since the metric B is embedded implicitly in this latent feature space, we employ the usual cosine similarity measure, computed as (B^(1/2) U_t^T x_t)^T (B^(1/2) U_s^T x_s). It should be noted that this is equal to x_t^T U_t B U_s^T x_s.
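The equivalence between the Mahalanobis similarity and the plain dot product in the latent space can be verified numerically. A small sketch, with B constructed as A A^T + εI to guarantee positive definiteness (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# Random orthogonal rotations U_s, U_t and an SPD metric B.
U_s, _ = np.linalg.qr(rng.standard_normal((d, d)))
U_t, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = rng.standard_normal((d, d))
B = A @ A.T + 0.1 * np.eye(d)          # symmetric positive definite

# Unique symmetric square root B^(1/2) via eigendecomposition.
w, Q = np.linalg.eigh(B)
B_half = Q @ np.diag(np.sqrt(w)) @ Q.T

x_s, x_t = rng.standard_normal(d), rng.standard_normal(d)

# Similarity computed directly in the original spaces ...
sim_direct = x_t @ U_t @ B @ U_s.T @ x_s
# ... equals the dot product of the latent representations
# B^(1/2) U_t^T x_t and B^(1/2) U_s^T x_s.
sim_latent = (B_half @ U_t.T @ x_t) @ (B_half @ U_s.T @ x_s)
print(np.isclose(sim_direct, sim_latent))  # True
```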

3.3 A Classification Model

We assume a small bilingual dictionary (of size n) is available as the training data. Let X_s ∈ R^(n_s×d) and X_t ∈ R^(n_t×d) denote the embeddings of the dictionary words from the source and target languages, respectively. Here, n_s and n_t are the number of unique words in the source and target languages present in the dictionary.

We propose to model the bilingual word embedding mapping problem as a binary classification problem. Consider word embeddings x_s and x_t from the source and target languages, respectively. If the words corresponding to x_s and x_t constitute a translation pair, then the pair belongs to the positive class; else it belongs to the negative class. The prediction function for the pair is f(x_s, x_t) = x_t^T U_t B U_s^T x_s. We create a binary label matrix Y ∈ {0, 1}^(n_s×n_t) whose (i, j)-th entry corresponds to the correctness of mapping the i-th embedding in X_s to the j-th embedding in X_t. Our overall optimization problem is as follows:

min_{U_s, U_t ∈ O(d), B ≻ 0}  ||X_s U_s B U_t^T X_t^T − Y||_F^2 + λ ||B||_F^2    (1)

where ||·||_F is the Frobenius norm and λ is the regularization parameter. We employ the square loss function since it is smooth and relatively easy to optimize. It should be noted that our prediction function is invariant to the direction of mapping, i.e., x_t^T U_t B U_s^T x_s = x_s^T U_s B U_t^T x_t. Hence, our model learns a bidirectional mapping. The transformation matrix from the target to the source language is given by U_s B U_t^T, i.e., W^T.

The computational complexity of the loss term in (1) is linear in n, the size of the given bilingual dictionary. This is because the loss term in (1) can be re-written as follows:

||X_s U_s B U_t^T X_t^T − Y||_F^2 = tr(U_t B U_s^T X_s^T X_s U_s B U_t^T X_t^T X_t) − 2 Σ_{(i,j)∈Ω} (x_s^i)^T U_s B U_t^T x_t^j + n    (2)

where x_s^i represents the i-th column in X_s^T, x_t^j represents the j-th column in X_t^T, Ω is the set of row-column indices corresponding to entry value 1 in Y, and tr(·) denotes the trace of a matrix. The first term in (2) can be computed in O((n_s + n_t)d^2 + d^3) time using the Gram matrices X_s^T X_s and X_t^T X_t, the second term in O(nd^2 + d^3) time, and the third term is the constant n. Similarly, the computational cost of the gradient of the objective function in (1) is also linear in n. Hence, our framework can efficiently leverage information from all the negative samples.
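The decomposition in (2) can be checked numerically: the naive loss that materializes the full n_s × n_t score matrix agrees with the Gram-matrix-plus-sparse-sum form. A sketch under small illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_s, n_t, d, n = 30, 40, 5, 25
X_s = rng.standard_normal((n_s, d))
X_t = rng.standard_normal((n_t, d))
U_s, _ = np.linalg.qr(rng.standard_normal((d, d)))
U_t, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = rng.standard_normal((d, d))
B = A @ A.T + 0.1 * np.eye(d)              # SPD metric

# Binary label matrix Y with n distinct positive (translation-pair) entries.
Y = np.zeros((n_s, n_t))
flat = rng.choice(n_s * n_t, size=n, replace=False)
rows, cols = np.unravel_index(flat, (n_s, n_t))
Y[rows, cols] = 1.0

# Naive evaluation: materializes the full n_s x n_t score matrix.
naive = np.linalg.norm(X_s @ U_s @ B @ U_t.T @ X_t.T - Y) ** 2

# Decomposed evaluation: d x d Gram matrices plus a sparse sum over
# the n positive entries.
M = U_s @ B @ U_t.T
G_s, G_t = X_s.T @ X_s, X_t.T @ X_t
first = np.trace(M.T @ G_s @ M @ G_t)
cross = sum(X_s[i] @ M @ X_t[j] for i, j in zip(rows, cols))
decomposed = first - 2 * cross + n         # ||Y||_F^2 = n for a binary Y

print(np.isclose(naive, decomposed))  # True
```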

In the next section, we discuss a generalization of our approach to multilingual settings.

3.4 Generalization to Multilingual Setting

In this section, we propose a unified framework for learning mappings when bilingual dictionaries are available for multiple language pairs. We formalize the setting as an undirected, connected graph G = (V, E), where each node represents a language and an edge represents the availability of a bilingual dictionary between the corresponding pair of languages. Given the bilingual dictionaries corresponding to the edge set E, we propose to align the embedding spaces of all languages in the node set V and learn a common latent space for them.

To this end, we jointly learn an orthogonal transformation U_i for every language i ∈ V and the Mahalanobis metric B. The latter is common across all languages in the multilingual setup and helps incorporate information across languages in the latent space. It should be noted that the transformation U_i is employed for all the bilingual mapping problems in this graph associated with language i. The transformation from language i to language j is given by U_j B U_i^T. Further, we are also able to obtain transformations between any language pair in the graph, even if a bilingual dictionary between them is not available.

Let X_i^(ij) be the embeddings of the dictionary words of language i in the dictionary corresponding to edge (i, j) ∈ E. (For notational convenience, the number of unique words in every language in all their dictionaries is kept the same, n.) Let Y_(ij) be the binary label matrix corresponding to the dictionary between languages i and j. The proposed optimization problem for the multilingual setting is

min_{U_i ∈ O(d) ∀ i ∈ V, B ≻ 0}  Σ_{(i,j)∈E} ||X_i^(ij) U_i B U_j^T (X_j^(ij))^T − Y_(ij)||_F^2 + λ ||B||_F^2    (3)

We term our approach Geometry-aware Multilingual Mapping (GeoMM). We next discuss the optimization algorithm for solving the bilingual mapping problem (1) as well as its generalization to the multilingual setting (3).

4 Optimization Algorithm

The geometric constraints U_i ∈ O(d) and B ≻ 0 in the proposed problems (1) and (3) have been studied as smooth Riemannian manifolds, which are well-explored topological spaces (Edelman et al., 1998). The orthogonal matrices lie in what is popularly known as the d-dimensional orthogonal manifold. The space of symmetric positive-definite matrices is known as the symmetric positive definite manifold. The Riemannian optimization framework embeds such constraints into the search space and conceptually views the problem as an unconstrained problem over the manifolds. In the process, it is able to exploit the geometry of the manifolds and the symmetries involved in them. Absil et al. (2008) discuss several tools to systematically optimize such problems. We optimize problems (1) and (3) using the Riemannian conjugate gradient algorithm (Absil et al., 2008; Sato and Iwai, 2013).

Publicly available toolboxes such as Manopt (Boumal et al., 2014), Pymanopt (Townsend et al., 2016) or ROPTLIB (Huang et al., 2016) have scalable off-the-shelf generic implementations of several Riemannian optimization algorithms. We employ Pymanopt in our experiments, where we only need to supply the objective function.
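The paper relies on Pymanopt's Riemannian conjugate gradient; as a self-contained illustration of the underlying machinery, the following numpy-only sketch runs plain Riemannian gradient descent on the orthogonal manifold for a Procrustes-style toy objective (tangent-space projection followed by a QR retraction; the step size and iteration count are arbitrary choices for this toy problem):

```python
import numpy as np

def riemannian_grad(U, G):
    """Project a Euclidean gradient G onto the tangent space of the
    orthogonal manifold at U: G - U sym(U^T G)."""
    S = U.T @ G
    return G - U @ (S + S.T) / 2

def qr_retract(M):
    """QR-based retraction: map a point back onto the manifold,
    fixing signs so the result is a proper Q factor."""
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(4)
d = 4
X = rng.standard_normal((50, d))
R_true, _ = np.linalg.qr(rng.standard_normal((d, d)))
Z = X @ R_true

U = np.eye(d)          # start from the identity rotation
eta = 1e-3             # fixed step size (illustrative)
losses = []
for _ in range(300):
    G = 2 * X.T @ (X @ U - Z)      # Euclidean gradient of ||XU - Z||_F^2
    U = qr_retract(U - eta * riemannian_grad(U, G))
    losses.append(np.linalg.norm(X @ U - Z) ** 2)

print(losses[-1] < losses[0])                      # loss decreased
print(np.allclose(U.T @ U, np.eye(d), atol=1e-8))  # U stayed orthogonal
```

Pymanopt replaces this hand-rolled loop with off-the-shelf manifolds, automatic gradients, and a conjugate-gradient optimizer; only the objective needs to be supplied.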

5 Experimental Settings

In this section, we describe the evaluation tasks, the datasets used, and the experimental details of the proposed approach.

Evaluation tasks. We evaluate our approach on several tasks:

  To evaluate the quality of the bilingual mappings generated, we evaluate our algorithms primarily on the bilingual lexicon induction (BLI) task, i.e., the word translation task, and compare Precision@1 with previously reported state-of-the-art results on benchmark datasets (Dinu and Baroni, 2015; Artetxe et al., 2016; Conneau et al., 2018).

  We also evaluate on the cross-lingual word similarity task using the SemEval 2017 dataset.

  To ensure that the quality of embeddings on monolingual tasks does not degrade, we evaluate the quality of our embeddings on the monolingual word analogy task (Artetxe et al., 2016).

  To illustrate the utility of representing embeddings of multiple languages in a single latent space, we evaluate our multilingual embeddings on the one-hop translation task, i.e., a direct dictionary between the source and target languages is not available, but the source and target languages each share a bilingual dictionary with a pivot language.

Datasets. For bilingual and multilingual experiments, we report results on the following widely used, publicly available datasets:

  VecMap: This dataset was originally made available by Dinu and Baroni (2015) with subsequent extensions by other researchers (Artetxe et al., 2017, 2018a). It contains bilingual dictionaries from English (en) to four languages: Italian (it), German (de), Finnish (fi) and Spanish (es). The detailed experimental settings for this BLI task can be found in Artetxe et al. (2018b).

  MUSE: This dataset was originally made available by Conneau et al. (2018). It contains bilingual dictionaries from English to many languages such as Spanish (es), French (fr), German (de), Russian (ru), Chinese (zh), and vice versa. The detailed experimental settings for this BLI task can be found in Conneau et al. (2018). This dataset also contains bilingual dictionaries between several other European languages, which we employ in multilingual experiments.

Experimental settings of GeoMM. We select the regularization hyper-parameter λ by evaluation on a validation set created out of the training dataset. For inference, we use the (normalized) latent space representations of the embeddings (B^(1/2) U_i^T x) to compute similarity between the embeddings. For inference in the bilingual lexicon induction task, we employ the Cross-domain Similarity Local Scaling (CSLS) similarity score (Conneau et al., 2018) in nearest-neighbor search, unless otherwise mentioned. CSLS has been shown to perform better than other methods in mitigating the hubness problem (Dinu and Baroni, 2015) for search in high-dimensional spaces.
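For reference, CSLS (Conneau et al., 2018) penalizes the plain cosine score by the average similarity of each word to its k nearest neighbors on the other side, which suppresses hub words. A compact numpy sketch of the scoring (matrix names illustrative; the original paper uses k = 10):

```python
import numpy as np

def csls_scores(S, T, k=10):
    """CSLS scores between rows of S (mapped source embeddings) and rows
    of T (target embeddings). Both are assumed L2-normalized, so cosine
    similarity is a plain dot product."""
    sims = S @ T.T                                  # cosine similarities
    k_t = min(k, T.shape[0])
    k_s = min(k, S.shape[0])
    # r_T[i]: mean similarity of source word i to its k nearest target words.
    r_T = np.sort(sims, axis=1)[:, -k_t:].mean(axis=1)
    # r_S[j]: mean similarity of target word j to its k nearest source words.
    r_S = np.sort(sims, axis=0)[-k_s:, :].mean(axis=0)
    return 2 * sims - r_T[:, None] - r_S[None, :]

# Usage: translate each source word to the target word with highest CSLS.
rng = np.random.default_rng(5)
S = rng.standard_normal((8, 4)); S /= np.linalg.norm(S, axis=1, keepdims=True)
T = rng.standard_normal((9, 4)); T /= np.linalg.norm(T, axis=1, keepdims=True)
predictions = csls_scores(S, T, k=3).argmax(axis=1)
print(predictions.shape)  # (8,)
```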

While discussing experiments, we denote our bilingual mapping algorithm (Section 3.3) as GeoMM and its generalization to the multilingual setting (Section 3.4) as GeoMM_multi. Our code is available at https://github.com/anoopkunchukuttan/geomm.

6 Direct Translation: Results and Analysis

In this section, we evaluate the performance of our approach on two tasks: bilingual lexicon induction and cross-lingual word similarity. We also perform ablation tests to understand the effect of major sub-components of our algorithm. We verify the monolingual performance of the mapped embeddings generated by our algorithm.

Method en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en avg.
Supervised
GeoMM
GeoMM_multi
Procrustes
MSF-ISF
MSF
MSF + correction (Doval et al., 2018)
Unsupervised
SL-unsup
Adv-Refine
Grave et al. (2018)
Hoshen and Wolf (2018) f.c. f.c.
Chen and Cardie (2018)
Table 1: Precision for BLI on the MUSE dataset. Some notations: (a) ‘’ implies the original paper does not report result for the corresponding language pair, (b) ‘f.c.’ implies the original paper reports their algorithm failed to converge, (c) ‘’ implies that we could not run the authors’ code successfully for the language pairs, and (d) ‘’ implies the results of the algorithm are reported in the original paper. The remaining results were obtained with the official implementation from the authors.
Method en-it en-de en-fi en-es avg.
Supervised
GeoMM
GeoMM_multi
Procrustes
MSF-ISF
MSF
MSF + correction (Doval et al., 2018)
GPA
CCA-NN
Unsupervised
SL-unsup
Adv-Refine
Table 2: Precision for BLI on the VecMap dataset. The results of MSF-ISF, SL-unsup, CCA-NN (Faruqui and Dyer, 2014), and Adv-Refine are reported by Artetxe et al. (2018b). CCA-NN employs nearest neighbor retrieval procedure. The results of GPA are reported by Kementchedjhieva et al. (2018).
Method en-it en-de en-fi en-es
GeoMM
(1) Unconstrained W
(2) Without rotations
(3) Without similarity metric
(4) Target space inf.
(5) Regression
Table 3: Ablation test results: Precision for BLI on the VecMap dataset.

6.1 Bilingual Lexicon Induction (BLI)

We compare GeoMM with the best performing supervised methods. We also compare with unsupervised methods as they have been shown to be competitive with supervised methods. The following baselines are compared in the BLI experiments.

  Procrustes: the bilingual mapping is learned by solving the orthogonal Procrustes problem (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017b; Conneau et al., 2018).

  MSF: the Multi-Step Framework proposed by Artetxe et al. (2018a), with CSLS retrieval. It improves upon the original system (MSF-ISF) of Artetxe et al. (2018a), which employs the inverted softmax function (ISF) score for retrieval.

  Adv-Refine: unsupervised adversarial training approach, with bilingual dictionary refinement (Conneau et al., 2018).

  SL-unsup: state-of-the-art self-learning (SL) unsupervised method (Artetxe et al., 2018b), employing structural similarity of the embeddings.

We also include results of applying the correction algorithm proposed by Doval et al. (2018) to the MSF results. In addition, we also include results of several recent works (Kementchedjhieva et al., 2018; Grave et al., 2018; Chen and Cardie, 2018; Hoshen and Wolf, 2018) on the MUSE and VecMap datasets, as reported in the original papers.

Results on MUSE dataset: Table 1 reports the results on the MUSE dataset. We observe that our algorithm GeoMM outperforms all the supervised baselines. GeoMM also obtains significant improvements over unsupervised approaches.

The performance of the multilingual extension, GeoMM_multi, is almost equivalent to that of the bilingual GeoMM. This means that in spite of multiple embeddings being jointly learned and represented in a common space, its performance is still better than existing bilingual approaches. Thus, our multilingual framework is quite robust, since languages from diverse language families have been embedded in the same space. This can allow downstream applications to support multiple languages without performance degradation. Even when bilingual embeddings are represented in a single vector space using a pivot language, the embedding quality is inferior compared to GeoMM_multi. We discuss more multilingual experiments in Section 7.

Results on VecMap dataset: Table 2 reports the results on the VecMap dataset. We observe that GeoMM obtains the best performance in each language pair, surpassing state-of-the-art results reported on this dataset. GeoMM also outperforms GPA (Kementchedjhieva et al., 2018), which also learns bilingual embeddings in a latent space.

6.2 Ablation Tests

We next study the impact of different components of our framework by varying one component at a time. The results of these tests on VecMap dataset are shown in Table 3 and are discussed below.

(1) Classification with unconstrained W. We learn the transformation W directly as follows:

min_W  ||X_s W X_t^T − Y||_F^2 + λ ||W||_F^2    (4)

The performance drops in this setting compared to GeoMM, underlining the importance of the proposed factorization and the latent space representation. In addition, the proposed factorization helps GeoMM generalize to the multilingual setting (GeoMM_multi). Further, we also observe that the overall performance of this simple classification-based model is better than recent supervised approaches such as Procrustes, MSF-ISF (Artetxe et al., 2018a), and GPA (Kementchedjhieva et al., 2018). This suggests that a classification model is better suited for the BLI task.

Next, we look at both components of the factorization.

(2) Without language-specific rotations. We enforce U_s = U_t = I in (1) for GeoMM, i.e., W = B. We observe a significant drop in performance, which highlights the need for aligning the feature spaces of different languages.

(3) Without similarity metric. We enforce B = I in (1) for GeoMM, i.e., W = U_t U_s^T. It can be observed that the results are poor, which underlines the importance of a suitable similarity metric in the proposed classification model.

(4) Target space inference. We learn W = U_t B U_s^T by solving (1), as in GeoMM. During the retrieval stage, the similarity between embeddings is computed in the target space, i.e., given embeddings x_s and x_t from the source and target languages, respectively, we compute the similarity of the (normalized) vectors W x_s and x_t. It should be noted that GeoMM computes the similarity of x_s and x_t in the latent space, i.e., it computes the similarity of the (normalized) vectors B^(1/2) U_s^T x_s and B^(1/2) U_t^T x_t. We observe that inference in the target space degrades the performance. This shows that the latent space representation captures useful information and allows GeoMM to obtain much better accuracy.

(5) Regression with proposed factorization. We pose BLI as a regression problem, as done in previous approaches, by employing the loss function Σ_{(i,j)∈Ω} ||U_t B U_s^T x_s^i − x_t^j||^2. We observe that its performance is worse than the corresponding classification formulation. The classification setting directly models the similarity score via the loss function, and hence corresponds with inference more closely. This result further reinforces the observation made in the first ablation test.

To summarize, the proposed modeling choices are better than the alternatives compared in the ablation tests.

6.3 Cross-lingual Word Similarity

The results on the cross-lingual word similarity task using the SemEval 2017 dataset (Camacho-Collados et al., 2017) are shown in Table 4. We observe that GeoMM performs better than Procrustes, MSF, and the SemEval 2017 baseline NASARI (Camacho-Collados et al., 2016). It is also competitive with Luminoso_run2 (Speer and Lowry-Duda, 2017), the best reported system on this dataset. It should be noted that NASARI and Luminoso_run2 use additional knowledge sources like BabelNet and ConceptNet.

Method en-es en-de en-it
NASARI
Luminoso_run2
Procrustes
MSF
Joulin et al. (2018)
GeoMM
Table 4: Pearson correlation coefficient for the SemEval 2017 cross-lingual word similarity task.
Method Accuracy (%)
Original English embeddings
Procrustes
MSF
GeoMM
Table 5: Results on the monolingual word analogy task.

6.4 Monolingual Word Analogy

Table 5 shows the results on the English monolingual analogy task after obtaining the en-it mapping on the VecMap dataset (Mikolov et al., 2013a; Artetxe et al., 2016). We observe that there is no significant drop in monolingual performance from the use of non-orthogonal mappings, compared to the original monolingual embeddings as well as other bilingual embeddings (Procrustes and MSF).

7 Indirect Translation: Results and Analysis

In the previous sections, we established the efficacy of our approach for the bilingual mapping problem when a bilingual dictionary between the source and target languages is available. We also showed that our proposed multilingual generalization (Section 3.4) performs well in this scenario. In this section, we explore whether our multilingual generalization is beneficial when a bilingual dictionary is not available between the source and target languages, in other words, indirect translation. For this evaluation, our algorithm learns a single model for various language pairs such that word embeddings of different languages are transformed to a common latent space.

Evaluation Task: One-hop Translation

We consider the BLI task from a source language s to a target language t in the absence of a bilingual lexicon between them. We, however, assume the availability of lexicons for s-p and p-t, where p is a pivot language.

As baselines, we adapt a supervised bilingual approach (Procrustes, MSF, or the proposed GeoMM) to the one-hop translation setting by considering the following variants:

  Composition (cmp): Using the given bilingual approach, we learn the source-to-pivot and pivot-to-target transformations independently. Given an embedding from the source language, the corresponding embedding in the target language is obtained by composing the two transformations. This is equivalent to computing the similarity of the source and target embeddings in the pivot embedding space. Recently, Smith et al. (2017a) explored this technique with the Procrustes algorithm.

  Pipeline (pip): Using the given bilingual approach, we learn the source-to-pivot and pivot-to-target transformations independently. Given a word embedding from the source language, we first infer its translation in the pivot language. The embedding of the retrieved pivot word is then mapped to the target space to obtain the final translation.
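The two variants can be contrasted in a short sketch. The orthogonal maps and toy pivot vocabulary below are illustrative stand-ins for maps learned by, e.g., Procrustes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pivot = 4, 8

def random_orthogonal(d):
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

W_sp = random_orthogonal(d)  # source -> pivot (from the source-pivot dictionary)
W_pt = random_orthogonal(d)  # pivot -> target (from the pivot-target dictionary)

# Toy pivot vocabulary, row-normalized for cosine retrieval.
P = rng.standard_normal((n_pivot, d))
P /= np.linalg.norm(P, axis=1, keepdims=True)

x = rng.standard_normal(d)  # a source-language word embedding

# Composition: apply both maps to the continuous embedding directly.
x_cmp = W_pt @ (W_sp @ x)

# Pipeline: first snap to the nearest pivot *word*, then map that word's
# embedding; any error in the first hop propagates to the second.
nearest_pivot = P[np.argmax(P @ (W_sp @ x))]
x_pip = W_pt @ nearest_pivot
```

Composition keeps the intermediate representation continuous, while the pipeline discretizes at the pivot, which is the source of its cascading errors.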

As discussed in Section 3.4, our framework allows the flexibility to jointly learn the common latent space of multiple languages, given bilingual dictionaries of multiple language pairs. Our multilingual approach, GeoMM (multi), views this setting as a graph with three nodes (languages) and two edges (the available bilingual dictionaries).

Method fr-it-pt it-de-es es-pt-fr avg.
SL-unsup
Composition
Procrustes
MSF
GeoMM
Pipeline
Procrustes
MSF
GeoMM
GeoMM (multi)
Table 6: Indirect translation: Precision for BLI.

Experimental Settings

We experiment with the following one-hop translation cases: (a) fr-it-pt, (b) it-de-es, and (c) es-pt-fr (read the triplets as source-pivot-target). The training/test dictionaries and the word embeddings are from the MUSE dataset. In order to minimize direct transfer of information from the source to the target, we generate source-pivot and pivot-target training dictionaries such that they do not have any word in common. The training dictionaries have the same size as the source-pivot and pivot-target dictionaries provided in the MUSE dataset, while the test dictionaries have entries.

Method en-es es-en en-fr fr-en en-de de-en en-ru ru-en en-zh zh-en en-it it-en avg.
RCSLS
GeoMM
GeoMM (semi)
Table 7: Comparison of GeoMM and GeoMM (semi) with RCSLS (Joulin et al., 2018). Precision for BLI is reported. The results of RCSLS are taken from the original paper. The results for the language pairs en-it and it-en are on the VecMap dataset, while the others are on the MUSE dataset.

Results and Analysis

Table 6 shows the results of the one-hop translation experiments. We observe that GeoMM (multi) outperforms the pivoting methods (cmp and pip) built on top of MSF and Procrustes for all language pairs. It should be noted that pivoting may cascade errors through the intermediate step, whereas jointly learning a common embedding space mitigates this disadvantage. This is reaffirmed by our observation that GeoMM (multi) performs significantly better than GeoMM (cmp) and GeoMM (pip).

Since unsupervised methods have been shown to be competitive with supervised methods, they can be an alternative to pivoting. Indeed, we observe that the unsupervised method SL-unsup is better than the pivoting methods even though it uses no bilingual dictionaries. On the other hand, GeoMM (multi) is better than the unsupervised method too. It should be noted that the unsupervised methods use a much larger vocabulary than GeoMM (multi) during training.

We also experimented with scenarios where some pivot-language words occur in both the source-pivot and pivot-target training dictionaries. In these cases too, we observed that GeoMM (multi) performs better than the other methods. We omit these results due to space constraints.

8 Semi-supervised GeoMM

In this section, we discuss an extension of GeoMM, which benefits from unlabeled data. For the bilingual mapping problem, unlabeled data is available in the form of vocabulary lists for both the source and target languages. Existing unsupervised and semi-supervised techniques (Artetxe et al., 2017, 2018b; Joulin et al., 2018; Hoshen and Wolf, 2018) have an iterative refinement procedure that employs the vocabulary lists to augment the dictionary with positive or negative mappings.

Given a seed bilingual dictionary, we implement a bootstrapping procedure that iterates over the following two steps until convergence:

  1. Learn the GeoMM model by solving the proposed formulation (3.3) with the current bilingual dictionary.

  2. Compute a new bilingual dictionary from the vocabulary lists, using the (current) GeoMM model for retrieval. The seed dictionary along with this new dictionary is used in the next iteration.

In order to keep the computational cost low, we restrict the vocabulary lists to the most frequent words of both languages (Artetxe et al., 2018b; Hoshen and Wolf, 2018). In addition, we perform bidirectional dictionary induction (Artetxe et al., 2018b; Hoshen and Wolf, 2018). We track the model’s performance on a validation set to avoid overfitting, and use this as the convergence criterion for the bootstrap procedure.
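The bootstrapping procedure can be sketched as follows. The least-squares mapping is a simple stand-in for solving the actual GeoMM formulation, and plain nearest-neighbor retrieval stands in for the CSLS-based, bidirectional induction with a validation-based stopping criterion:

```python
import numpy as np

def learn_mapping(X_src, X_tgt, pairs):
    # Stand-in for solving the GeoMM formulation on the current dictionary:
    # a least-squares linear map fitted on the aligned pairs.
    A = X_src[[i for i, _ in pairs]]
    B = X_tgt[[j for _, j in pairs]]
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    return W

def induce_dictionary(X_src, X_tgt, W):
    # Retrieve, for every source word in the (frequent) vocabulary list,
    # its nearest target word under the current model.
    scores = (X_src @ W) @ X_tgt.T
    return [(i, int(j)) for i, j in enumerate(scores.argmax(axis=1))]

def bootstrap(X_src, X_tgt, seed_pairs, iters=5):
    # Alternate between (1) learning the model on the current dictionary
    # and (2) inducing a new dictionary with it; the seed dictionary is
    # always retained alongside the newly induced pairs.
    pairs = list(seed_pairs)
    for _ in range(iters):
        W = learn_mapping(X_src, X_tgt, pairs)
        pairs = list(seed_pairs) + induce_dictionary(X_src, X_tgt, W)
    return W
```

A fixed iteration count replaces the validation-set convergence check purely for brevity.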

We evaluate the proposed semi-supervised algorithm (referred to as GeoMM (semi)) on the bilingual lexicon induction task on the MUSE and VecMap datasets. The bilingual dictionary for training is split into a seed dictionary and a validation set; the chosen split works well in practice.

We compare GeoMM (semi) with RCSLS, a recently proposed state-of-the-art semi-supervised algorithm (Joulin et al., 2018). RCSLS directly optimizes the CSLS similarity score (Conneau et al., 2018), which GeoMM, among other algorithms, employs only during the retrieval stage. GeoMM, in contrast, optimizes a simpler classification-based squared loss (see Section 3.3). In addition to the training dictionary, RCSLS uses the full vocabulary lists of the source and target languages during training.
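For completeness, the CSLS score used at retrieval time can be computed as follows, following the definition in Conneau et al. (2018):

```python
import numpy as np

def csls_scores(X, Y, k=10):
    # CSLS (Conneau et al., 2018): the plain cosine similarity is corrected
    # by each point's mean cosine to its k nearest neighbors in the other
    # language, penalizing "hub" words that are close to everything.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    cos = X @ Y.T
    r_x = np.sort(cos, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # x's k NNs in Y
    r_y = np.sort(cos, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # y's k NNs in X
    return 2 * cos - r_x - r_y
```

Retrieval then takes the argmax of each row; the full pairwise sort is fine at this toy scale but would be replaced by top-k search over large vocabularies.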

The results are reported in Table 7. We observe that the overall performance of GeoMM (semi) is slightly better than that of RCSLS. Our supervised approach GeoMM performs only slightly worse than RCSLS, even though it does not have the advantage of learning from unlabeled data, as RCSLS and GeoMM (semi) do. We also notice that GeoMM (semi) improves upon GeoMM in almost all language pairs.

Method en-it en-de en-fi en-es avg.
GeoMM
GeoMM (semi)
Table 8: Precision for BLI on the VecMap dataset.

We also evaluate GeoMM (semi) on the VecMap dataset. The results are reported in Table 8. To the best of our knowledge, GeoMM (semi) obtains state-of-the-art results on this dataset.

9 Conclusion and Future Work

In this work, we develop a framework for learning multilingual word embeddings by aligning the embeddings for various languages in a common space and inducing a Mahalanobis similarity metric in the common space. We view the translation of embeddings from one language to another as a series of geometrical transformations and jointly learn the language-specific orthogonal rotations and the symmetric positive definite matrix representing the Mahalanobis metric. Learning such transformations can also be viewed as learning a suitable common latent space for multiple languages. We formulate the problem in the Riemannian optimization framework, which models the above transformations efficiently.

We evaluate our bilingual and multilingual algorithms on the bilingual lexicon induction and cross-lingual word similarity tasks. The results show that our algorithm outperforms existing approaches on multiple datasets. In addition, we demonstrate the efficacy of our multilingual algorithm in a one-hop translation setting for bilingual lexicon induction, in which a direct dictionary between the source and target languages is not available. The semi-supervised extension of our algorithm shows that our framework can leverage unlabeled data to obtain further improvements. Our analysis shows that the combination of the proposed transformations, inference in the induced latent space, and modeling the problem in a classification setting allows the proposed approach to achieve state-of-the-art performance.

In the future, an unsupervised extension of our approach can be explored. Optimizing the CSLS loss function (Joulin et al., 2018) within our framework can be investigated to address the hubness problem. We also plan to work on downstream applications like text classification and machine translation, which may potentially benefit from the proposed latent space representation of multiple languages by sharing annotated resources across languages.

References

  • Absil et al. (2008) Pierre-Antoine Absil, Robert Mahony, and Rodolphe Sepulchre. 2008. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ.
  • Ammar et al. (2016) Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. Technical report, arXiv preprint arXiv:1602.01925.
  • Artetxe et al. (2016) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2289–2294.
  • Artetxe et al. (2017) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 451–462.
  • Artetxe et al. (2018a) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5012–5019.
  • Artetxe et al. (2018b) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 789–798. URL: https://github.com/artetxem/vecmap.
  • Barone (2016) Antonio Valerio Miceli Barone. 2016. Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders. In Proceedings of the 1st Workshop on Representation Learning for NLP.
  • Bonnabel and Sepulchre (2010) Silvère Bonnabel and Rodolphe Sepulchre. 2010. Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM Journal on Matrix Analysis and Applications, 31(3):1055–1070.
  • Boumal et al. (2014) Nicolas Boumal, Bamdev Mishra, Pierre-Antoine Absil, and Rodolphe Sepulchre. 2014. Manopt, a Matlab toolbox for optimization on manifolds. Journal of Machine Learning Research, 15(Apr):1455–1459.
  • Camacho-Collados et al. (2017) José Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. SemEval-2017 Task 2: Multilingual and Cross-lingual Semantic Word Similarity. In Proceedings of the 11th International Workshop on Semantic Evaluation.
  • Camacho-Collados et al. (2016) José Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Nasari: Integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence, 240:36–64.
  • Chandar et al. (2014) Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Proceedings of the Advances in Neural Information Processing Systems, pages 1853–1861.
  • Chen and Cardie (2018) Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Conneau et al. (2018) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. URL: https://github.com/facebookresearch/MUSE.
  • Dinu and Baroni (2015) Georgiana Dinu and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Workshop track of International Conference on Learning Representations.
  • Doval et al. (2018) Yerai Doval, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2018. Improving cross-lingual word embeddings by meeting in the middle. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. URL: https://github.com/yeraidm/meemi.
  • Duong et al. (2017) Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2017. Multilingual training of crosslingual word embeddings. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 894–904.
  • Edelman et al. (1998) Alan Edelman, Tomás A. Arias, and Steven T. Smith. 1998. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353.
  • Faruqui and Dyer (2014) Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471.
  • Gouws et al. (2015) Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proceedings of the International Conference on Machine Learning, pages 748–756.
  • Gower (1975) John C. Gower. 1975. Generalized procrustes analysis. Psychometrika, 40(1):33–51.
  • Grave et al. (2018) Edouard Grave, Armand Joulin, and Quentin Berthet. 2018. Unsupervised alignment of embeddings with Wasserstein Procrustes. Technical report, arXiv preprint arXiv:1805.11222.
  • Gu et al. (2018) Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
  • Harandi et al. (2017) Mehrtash Harandi, Mathieu Salzmann, and Richard Hartley. 2017. Joint dimensionality reduction and metric learning: A geometric take. In Proceedings of the International Conference on Machine Learning.
  • Hermann and Blunsom (2014) Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 58–68.
  • Hoshen and Wolf (2018) Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 469–478.
  • Huang et al. (2015) Kejun Huang, Matt Gardner, Evangelos E. Papalexakis, Christos Faloutsos, Nikos D. Sidiropoulos, Tom M. Mitchell, Partha Pratim Talukdar, and Xiao Fu. 2015. Translation invariant word embeddings. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1084–1088.
  • Huang et al. (2016) Wen Huang, Pierre-Antoine Absil, Kyle A. Gallivan, and Paul Hand. 2016. ROPTLIB: An object-oriented C++ library for optimization on Riemannian manifolds. Technical Report FSU16-14.v2, Florida State University.
  • Joulin et al. (2018) Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Edouard Grave, and Hervè Jègou. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
  • Kementchedjhieva et al. (2018) Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, and Anders Søgaard. 2018. Generalizing Procrustes analysis for better bilingual dictionary induction. In Proceedings of the SIGNLL Conference on Computational Natural Language Learning.
  • Klementiev et al. (2012) Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of the International Conference on Computational Linguistics: Technical Papers, pages 1459–1474.
  • Lazaridou et al. (2015) Angeliki Lazaridou, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1959–1970.
  • Lee (2003) John M. Lee. 2003. Introduction to Smooth Manifolds, second edition. Springer-Verlag, New York.
  • Meyer et al. (2011) Gilles Meyer, Silvère Bonnabel, and Rodolphe Sepulchre. 2011. Linear regression under fixed-rank constraints: A Riemannian approach. In Proceedings of the International Conference on Machine Learning, pages 545–552.
  • Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. Technical report, arXiv preprint arXiv:1301.3781.
  • Mikolov et al. (2013b) Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. Technical report, arXiv preprint arXiv:1309.4168.
  • Mishra et al. (2014) Bamdev Mishra, Gilles Meyer, Silvère Bonnabel, and Rodolphe Sepulchre. 2014. Fixed-rank matrix factorizations and Riemannian low-rank optimization. Computational Statistics, 29(3):591–621.
  • Sato and Iwai (2013) Hiroyuki Sato and Toshihiro Iwai. 2013. A new, globally convergent Riemannian conjugate gradient method. Optimization: A Journal of Mathematical Programming and Operations Research, 64(4):1011–1031.
  • Schönemann (1966) Peter H. Schönemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10.
  • Smith et al. (2017a) Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017a. Aligning the fastText vectors of 78 languages. URL: https://github.com/Babylonpartners/fastText_multilingual.
  • Smith et al. (2017b) Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017b. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the International Conference on Learning Representations.
  • Søgaard et al. (2018) Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 778–788.
  • Speer and Lowry-Duda (2017) Robert Speer and Joanna Lowry-Duda. 2017. ConceptNet at SemEval-2017 Task 2: Extending word embeddings with multilingual relational knowledge. In Proceedings of the 11th International Workshop on Semantic Evaluations.
  • Townsend et al. (2016) James Townsend, Niklas Koep, and Sebastian Weichwald. 2016. Pymanopt: A python toolbox for optimization on manifolds using automatic differentiation. Journal of Machine Learning Research, 17(137):1–5. URL: https://pymanopt.github.io.
  • Xing et al. (2015) Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011.
  • Zhang et al. (2017a) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1959–1970.
  • Zhang et al. (2017b) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1934–1945.
  • Zhou et al. (2015) Huiwei Zhou, Long Chen, Fulin Shi, and Degen Huang. 2015. Learning bilingual sentiment word embeddings for cross-language sentiment classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 430–440.