Unsupervised Multilingual Alignment using Wasserstein Barycenter

01/28/2020 · Xin Lian et al. · University of Waterloo, Borealis AI

We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data. One popular strategy is to reduce multilingual alignment to the much simpler bilingual setting by picking one of the input languages as the pivot language that we transit through. However, it is well known that transiting through a poorly chosen pivot language (such as English) may severely degrade the translation quality, since the assumed transitive relations among all pairs of languages may not be enforced in the training process. Instead of going through a rather arbitrarily chosen pivot language, we propose to use the Wasserstein barycenter as a more informative "mean" language: it encapsulates information from all languages and minimizes all pairwise transportation costs. We evaluate our method on standard benchmarks and demonstrate state-of-the-art performance.


1 Introduction

Many natural language processing tasks, such as part-of-speech tagging, machine translation, and speech recognition, rely on learning a distributed representation of words. Recent developments in computational linguistics and neural language modeling have shown that word embeddings can capture both semantic and syntactic information. This has led to the development of the zero-shot learning paradigm as a way to address the manual annotation bottleneck in domains where vector-based representations must be associated with word labels, a fundamental step toward making natural language processing more accessible. A key input for machine translation tasks consists of embedding vectors for each word. MikolovSCCD13 were the first to release a pre-trained model providing such a distributed representation of words. Since then, more software for training and using word embeddings has emerged.

The rise of continuous word embedding representations has revived research on the bilingual lexicon alignment problem [Rapp1995, Fung1995], where the initial goal was to learn a small dictionary of a few hundred words by leveraging statistical similarities between two languages. MikolovLS13 formulated bilingual word embedding alignment as a quadratic optimization problem that learns an explicit linear mapping between word embeddings, which enables us to even infer meanings of out-of-dictionary words [Zhang et al.2016, Dinu et al.2015, Mikolov et al.2013a]. Xing2015NormalizedWE showed that restricting the linear mapping to be orthogonal further improves the result. These pioneering works required some parallel data to perform the alignment. Later on, [Smith et al.2017, Artetxe et al.2017, Artetxe et al.2018a] reduced the need for supervision by exploiting common words or digits in different languages, and more recently, unsupervised methods that rely solely on monolingual data have become quite popular [Gouws et al.2015, Zhang et al.2017b, Zhang et al.2017a, Lample et al.2018, Artetxe et al.2018b, Dou et al.2018, Hoshen and Wolf2018, Grave et al.2019]; see [Artetxe et al.2018c] for an application in neural machine translation and [Søgaard et al.2018] for some limitations.

Encouraged by the success on bilingual alignment, the more ambitious task of simultaneously and unsupervisedly aligning multiple languages has drawn a lot of attention recently. A naive approach that performs all pairwise bilingual alignments separately would not work well, since it fails to exploit information from all languages, especially when some of them are low-resource. A second approach is to align all languages to a pivot language, such as English [Smith et al.2017], allowing us to exploit recent progress on bilingual alignment while still using information from all languages. More recently, [Chen and Cardie2018, Taitelbaum et al.2019b, Taitelbaum et al.2019a, Alaux et al.2019, Wada et al.2019] proposed to map all languages into the same language space and train all language pairs simultaneously. Please refer to the related work section for more details.

In this work, we first show that existing work on unsupervised multilingual alignment (such as [Alaux et al.2019]) amounts to simultaneously learning an arithmetic "mean" language from all languages and aligning all languages to this common mean language, instead of using a rather arbitrarily pre-determined input language (such as English). We then argue for using the (learned) Wasserstein barycenter as the pivot language, as opposed to the arithmetic barycenter, which fails to preserve distributional properties of the word embeddings. Our approach exploits available information from all languages to enforce coherence among language spaces by enabling accurate compositions between language mappings. We conduct extensive experiments on standard publicly available benchmark datasets and demonstrate competitive performance against current state-of-the-art alternatives. (Preliminary results appeared in the first author's thesis [Lian, Xin2020].)

2 Multilingual Lexicon Alignment

In this section we set up notation and define our main problem: the multilingual lexicon alignment problem.

Given $N$ languages $\ell_1, \dots, \ell_N$, each is represented by a vocabulary $V_i$ consisting of $n_i$ respective words. Following MikolovLS13, we assume a monolingual word embedding $\mathbf{X}_i \in \mathbb{R}^{n_i \times d_i}$ for each language has been trained independently on its own data. We are interested in finding all pairwise mappings $T_{ij}$ that translate a word in language $\ell_i$ to a corresponding word in language $\ell_j$. In the following, for ease of notation, we assume w.l.o.g. that $n_1 = \cdots = n_N = n$ and $d_1 = \cdots = d_N = d$. Note that we do not have access to any parallel data, i.e., we are in the much more challenging unsupervised learning regime.

Our work is largely inspired by that of AlauxGCJ19, which we review below first. Along the way we point out some crucial observations that motivated our further development. AlauxGCJ19 employ the following joint alignment approach that minimizes the total sum of mis-alignment costs between every pair of languages:

$$\min_{\{\mathbf{Q}_i \in \mathbb{O}_d\},\ \{\mathbf{P}_{ij} \in \mathcal{P}_n\}}\ \sum_{i,j=1}^{N} \big\|\mathbf{X}_i \mathbf{Q}_i - \mathbf{P}_{ij}\mathbf{X}_j\mathbf{Q}_j\big\|_F^2, \tag{1}$$

where $\mathbf{Q}_i$ is a $d \times d$ orthogonal matrix and $\mathbf{P}_{ij}$ is an $n \times n$ permutation matrix. (AlauxGCJ19 also introduced weights $\alpha_{ij}$ to encode the relative importance of the language pair $(i,j)$.) Since $\mathbf{Q}_i$ is orthogonal, this approach ensures transitivity among word embeddings: $\mathbf{Q}_i$ maps the $i$-th word embedding space $\mathbf{X}_i$ into a common space $\mathcal{Z}$, and conversely $\mathbf{Q}_i^\top$ maps $\mathcal{Z}$ back to $\mathbf{X}_i$. Thus, $\mathbf{Q}_i\mathbf{Q}_j^\top$ maps $\mathbf{X}_i$ to $\mathbf{X}_j$, and if we transit through an intermediate word embedding space $\mathbf{X}_k$, we still have the desired transitive property $(\mathbf{Q}_i\mathbf{Q}_k^\top)(\mathbf{Q}_k\mathbf{Q}_j^\top) = \mathbf{Q}_i\mathbf{Q}_j^\top$.
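As a quick sanity check of this transitivity argument, the following toy snippet (ours, not from the paper) draws random orthogonal matrices and verifies that composing through an intermediate space recovers the direct map:

```python
# Toy verification that (Q_i Q_k^T)(Q_k Q_j^T) = Q_i Q_j^T for orthogonal Q's.
import numpy as np

rng = np.random.default_rng(0)
Qi, Qk, Qj = (np.linalg.qr(rng.standard_normal((5, 5)))[0] for _ in range(3))
direct = Qi @ Qj.T                    # direct map from space i to space j
via_k = (Qi @ Qk.T) @ (Qk @ Qj.T)     # transit through intermediate space k
assert np.allclose(direct, via_k)     # Q_k^T Q_k = I, so the two agree
```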

The permutation matrix $\mathbf{P}_{ij}$ serves as an "inferred" correspondence between words in language $\ell_i$ and words in language $\ell_j$. Naturally, we would again expect some form of transitivity in these pairwise correspondences, i.e., $\mathbf{P}_{ik}\mathbf{P}_{kj} = \mathbf{P}_{ij}$, which, however, is not enforced in (1). A simple way to fix this is to decouple $\mathbf{P}_{ij}$ into the product $\mathbf{P}_i\mathbf{P}_j^\top$, in the same way as how we dealt with $\mathbf{Q}_{ij} = \mathbf{Q}_i\mathbf{Q}_j^\top$. This leads to the following variant (using the invariance of the Frobenius norm under the permutation $\mathbf{P}_i^\top$):

$$\min_{\{\mathbf{Q}_i \in \mathbb{O}_d\},\ \{\mathbf{P}_i \in \mathcal{P}_n\}}\ \sum_{i,j=1}^{N} \big\|\mathbf{P}_i^\top\mathbf{X}_i\mathbf{Q}_i - \mathbf{P}_j^\top\mathbf{X}_j\mathbf{Q}_j\big\|_F^2 \tag{2}$$
$$= \min_{\{\mathbf{Q}_i\},\ \{\mathbf{P}_i\}}\ 2N \sum_{i=1}^{N} \big\|\mathbf{P}_i^\top\mathbf{X}_i\mathbf{Q}_i - \bar{\mathbf{X}}\big\|_F^2 \tag{3}$$
$$= \min_{\{\mathbf{Q}_i\},\ \{\mathbf{P}_i\}}\ \min_{\mathbf{Y}}\ 2N \sum_{i=1}^{N} \big\|\mathbf{P}_i^\top\mathbf{X}_i\mathbf{Q}_i - \mathbf{Y}\big\|_F^2, \tag{4}$$

where Eq. (3) follows from the definition of variance (for any matrices $\mathbf{A}_1, \dots, \mathbf{A}_N$ with mean $\bar{\mathbf{A}}$, we have $\sum_{i,j} \|\mathbf{A}_i - \mathbf{A}_j\|_F^2 = 2N \sum_i \|\mathbf{A}_i - \bar{\mathbf{A}}\|_F^2$), and $\mathbf{Y}$ in Eq. (4) admits the closed-form solution:

$$\mathbf{Y} = \bar{\mathbf{X}} := \frac{1}{N} \sum_{i=1}^{N} \mathbf{P}_i^\top\mathbf{X}_i\mathbf{Q}_i. \tag{5}$$

Thus, had we known the arithmetic "mean" language $\bar{\mathbf{X}}$ beforehand, the joint alignment approach of AlauxGCJ19 would reduce to a separate alignment of each language to the "mean" language that serves as the pivot. An efficient optimization strategy then consists of alternating between the separate alignments (i.e., computing $\mathbf{Q}_i$ and $\mathbf{P}_i$) and computing the pivot language (i.e., (5)).
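To make this reduction concrete, here is a minimal sketch (our illustration on synthetic data, not the authors' code) of the alternating scheme: a Hungarian step for the permutations, a Procrustes step for the orthogonal maps, and the arithmetic-mean pivot update of Eq. (5):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

rng = np.random.default_rng(0)
N, n, d = 3, 50, 10                          # languages, words, embedding dim
X = [rng.standard_normal((n, d)) for _ in range(N)]
Q = [np.eye(d) for _ in range(N)]            # orthogonal maps Q_i
P = [np.eye(n) for _ in range(N)]            # permutation matrices P_i

for _ in range(10):
    # Pivot update, Eq. (5): arithmetic mean of the currently aligned embeddings.
    X_bar = sum(P[i].T @ X[i] @ Q[i] for i in range(N)) / N
    for i in range(N):
        # Permutation update: match rows of X_i Q_i to rows of X_bar.
        A = X[i] @ Q[i]
        cost = ((A[:, None, :] - X_bar[None, :, :]) ** 2).sum(-1)
        _, sigma = linear_sum_assignment(cost)   # word k matches pivot word sigma[k]
        Pi = np.zeros((n, n))
        Pi[np.arange(n), sigma] = 1.0            # so P_i^T A re-orders rows by sigma
        P[i] = Pi
        # Orthogonal update (Procrustes): Q_i = U V^T, SVD of X_i^T P_i X_bar.
        U, _, Vt = np.linalg.svd(X[i].T @ P[i] @ X_bar)
        Q[i] = U @ Vt
```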

We now point out two problems in the above formulation. First, a permutation assignment is a 1-1 correspondence that completely ignores polysemy in natural languages, that is, a word in language $\ell_i$ can correspond to multiple words in language $\ell_j$. To address this, we propose to relax the permutation $\mathbf{P}_i$ into a coupling matrix, which allows splitting a word into several words. Second, the pivot language in (5), being a simple arithmetic average, may be statistically very different from any of the given languages; see Figure 1 and below. Besides, it is intuitively more reasonable to allow the pivot language to have a larger dictionary so that it can capture the linguistic regularities of all languages. To address this, we propose to use the Wasserstein barycenter as the pivot language.

The advantage of using the Wasserstein barycenter instead of the arithmetic average is that the Wasserstein metric gives a natural geometry for probability measures supported on a geometric space. In Figure 1, we demonstrate the difference between the Wasserstein barycenter and the arithmetic average of two input distributions. It is geometrically clear that the Wasserstein barycenter is a better "average" of the input distributions.

Figure 1: Comparing the Wasserstein barycenter and arithmetic mean (bottom panel) for two input distributions (top panel).
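The contrast in Figure 1 is easy to reproduce numerically. In 1-D with equal weights, the $W_2$ barycenter can be obtained by averaging quantile functions, whereas the arithmetic average of the densities is just their mixture; the following self-contained sketch (ours, on synthetic Gaussians) shows the barycenter is unimodal while the mixture stays bimodal:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(-4.0, 1.0, 5000)           # samples from input distribution 1
b = rng.normal(+4.0, 1.0, 5000)           # samples from input distribution 2

# W2 barycenter in 1-D (equal weights): average the quantile functions.
qs = np.linspace(0.01, 0.99, 99)
quantile_avg = (np.quantile(a, qs) + np.quantile(b, qs)) / 2
print(quantile_avg[[0, 49, 98]])          # ~[-2.3, 0.0, 2.3]: a unimodal N(0, 1)

# Arithmetic average of the densities: simply the mixture of the samples.
mixture = np.concatenate([a, b])
hist, _ = np.histogram(mixture, bins=40, range=(-8, 8), density=True)
# The mixture stays bimodal: mass remains at -4 and +4 instead of moving to 0.
```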

3 Our Approach

We take a probabilistic approach, treating each language $\ell_i$ as a probability distribution $\mu_i$ over its word embeddings:

$$\mu_i = \sum_{k=1}^{n} p_{ik}\, \delta_{\mathbf{x}_{ik}}, \tag{6}$$

where $p_{ik}$ is the probability of occurrence of the $k$-th word (with embedding $\mathbf{x}_{ik}$) in language $\ell_i$, often approximated by the relative frequency of the word in its training documents. We project the word embeddings into a common space through the orthogonal matrix $\mathbf{Q}_i$. Taking a word $\mathbf{x}_i$ from each language $\ell_i$, we associate a cost $c(\mathbf{x}_1\mathbf{Q}_1, \dots, \mathbf{x}_N\mathbf{Q}_N)$ for bundling these words in our (joint) translation. To allow polysemy, we find a joint probability distribution $\pi$ with fixed marginals $\mu_1, \dots, \mu_N$ so that the expected cost

$$\int c(\mathbf{x}_1\mathbf{Q}_1, \dots, \mathbf{x}_N\mathbf{Q}_N)\, \mathrm{d}\pi(\mathbf{x}_1, \dots, \mathbf{x}_N) \tag{7}$$

is minimized. If we fix the matrices $\mathbf{Q}_i$, then the above problem is known as multi-marginal optimal transport [Gangbo and Świȩch1998].

To simplify the computation, we take the pairwise approach of AlauxGCJ19, where we set the joint cost as the total sum of all pairwise costs:

$$c(\mathbf{z}_1, \dots, \mathbf{z}_N) = \sum_{i < j} \|\mathbf{z}_i - \mathbf{z}_j\|_2^2. \tag{8}$$

Interestingly, with this choice, we can significantly simplify the numerical computation of the multi-marginal optimal transport.

We recall the definition of the Wasserstein barycenter of given probability distributions $\mu_1, \dots, \mu_N$:

$$\min_{\nu}\ \sum_{i=1}^{N} w_i\, W_2^2(\mu_i, \nu), \tag{9}$$

where $w_i \geq 0$ are the weights, and the (squared) Wasserstein distance is given as:

$$W_2^2(\mu, \nu) = \min_{\pi \in \Pi(\mu, \nu)} \int \|\mathbf{x} - \mathbf{y}\|_2^2\, \mathrm{d}\pi(\mathbf{x}, \mathbf{y}). \tag{10}$$

The notation $\Pi(\mu, \nu)$ denotes all joint probability distributions (i.e., couplings) with (fixed) marginal distributions $\mu$ and $\nu$. As proven by agueh2011barycenter, with the pairwise distance (8), the multi-marginal problem in (7) and the barycenter problem in (9) are formally equivalent. Hence, from now on we focus on the latter, since efficient computational algorithms exist for it. We use the push-forward notation $(\mathbf{Q}_i)_\# \mu_i$ to denote the distribution of $\mathbf{x}\mathbf{Q}_i$ when $\mathbf{x}$ follows the distribution $\mu_i$. Thus, we can write our approach succinctly as:

$$\min_{\{\mathbf{Q}_i \in \mathbb{O}_d\}}\ \min_{\nu}\ \sum_{i=1}^{N} W_2^2\big((\mathbf{Q}_i)_\# \mu_i,\ \nu\big), \tag{11}$$

where the barycenter $\nu$ serves as the pivot language in some common word embedding space. Unlike the arithmetic average in (5), the Wasserstein barycenter can have a much larger support (dictionary size) than the given language distributions.

We can again apply the alternating minimization strategy to solve (11): fixing all orthogonal matrices $\mathbf{Q}_i$, we find the Wasserstein barycenter $\nu$ using an existing algorithm of [Cuturi and Doucet2014] or [Claici et al.2018]; fixing the Wasserstein barycenter $\nu$, we solve for each orthogonal matrix separately:

$$\min_{\mathbf{Q}_i \in \mathbb{O}_d}\ W_2^2\big((\mathbf{Q}_i)_\# \mu_i,\ \nu\big). \tag{12}$$

For a fixed coupling $\pi_i \in \mathbb{R}^{n \times m}$, where $m$ is the dictionary size of the barycenter $\nu = \sum_{l=1}^{m} q_l\, \delta_{\mathbf{y}_l}$ with support matrix $\mathbf{Y} \in \mathbb{R}^{m \times d}$, the integral can be simplified as:

$$\sum_{k=1}^{n} \sum_{l=1}^{m} \pi_i(k, l)\, \|\mathbf{x}_{ik}\mathbf{Q}_i - \mathbf{y}_l\|_2^2 = \mathrm{const} - 2\, \mathrm{tr}\big(\mathbf{Q}_i^\top \mathbf{X}_i^\top \pi_i \mathbf{Y}\big). \tag{13}$$

Thus, using the well-known theorem of Schonemann1966, $\mathbf{Q}_i$ is given by the closed-form solution $\mathbf{Q}_i = \mathbf{U}\mathbf{V}^\top$, where $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top = \mathbf{X}_i^\top \pi_i \mathbf{Y}$ is the singular value decomposition.
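In code, this closed-form update is a one-liner around the SVD; the following minimal sketch (ours, with illustrative names) mirrors Eq. (13) under the row-vector convention used above:

```python
import numpy as np

def procrustes_update(X_i, pi_i, Y):
    """Return the orthogonal Q_i maximizing tr(Q_i^T X_i^T pi_i Y).

    X_i : (n, d) word embeddings of language i (one word per row)
    pi_i: (n, m) transport plan from language i to the barycenter
    Y   : (m, d) barycenter support locations
    """
    U, _, Vt = np.linalg.svd(X_i.T @ pi_i @ Y)
    return U @ Vt
```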

Figure 2: Node $\nu$ denotes the barycenter of the input languages $\mu_1, \dots, \mu_N$. On the edge connecting each language to the barycenter, $\pi_i$ is the optimal transport plan from language $\mu_i$ to the barycenter $\nu$.
Input: language distributions $\mu_1, \dots, \mu_N$ with embeddings $\mathbf{X}_i$
Output: translations $T_{ij}$ for all $i$ and $j$
for $i = 1, \dots, N$ do
        initialize $\mathbf{Q}_i$;
while not converged do
        update the barycenter $\nu$ (support $\mathbf{Y}$, weights $q$) of $(\mathbf{Q}_1)_\# \mu_1, \dots, (\mathbf{Q}_N)_\# \mu_N$, together with the transport plans $\pi_1, \dots, \pi_N$;
        for $i = 1, \dots, N$ do
                $\mathbf{Q}_i \leftarrow \mathbf{U}\mathbf{V}^\top$, where $\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top = \mathbf{X}_i^\top \pi_i \mathbf{Y}$;
return translations $T_{ij}$ obtained by composing $\pi_i$ and $\pi_j^\top$ through the barycenter
Algorithm 1 Barycenter Alignment
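Below is a compact, runnable sketch of Algorithm 1 (our rendering under stated assumptions, not the released implementation): entropic transport plans come from the POT library's ot.sinkhorn, the free-support barycenter is updated with the standard barycentric-projection fixed point, and each $\mathbf{Q}_i$ follows the Procrustes step of Eq. (13). Dimensions, the regularization value, and the random data are illustrative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)
N, n, d, m, reg = 3, 200, 10, 400, 0.05
X = [rng.standard_normal((n, d)) for _ in range(N)]   # embeddings (rows = words)
p = [np.full(n, 1.0 / n) for _ in range(N)]           # word frequencies
Q = [np.eye(d) for _ in range(N)]                     # orthogonal maps
Y = rng.standard_normal((m, d))                       # barycenter support (random init)
q = np.full(m, 1.0 / m)                               # barycenter weights

for _ in range(20):
    # Entropic transport plans from each projected language to the barycenter.
    plans = [ot.sinkhorn(p[i], q, ot.dist(X[i] @ Q[i], Y), reg) for i in range(N)]
    # Fixed-point update of the barycenter support (barycentric projection).
    Y = sum(plan.T @ (X[i] @ Q[i]) for i, plan in enumerate(plans)) / (N * q[:, None])
    # Procrustes update of each orthogonal map, Eq. (13).
    for i, plan in enumerate(plans):
        U, _, Vt = np.linalg.svd(X[i].T @ plan @ Y)
        Q[i] = U @ Vt
```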

4 Experiments

We evaluate our algorithm on two standard publicly available datasets: MUSE [Lample et al.2018] and XLING [Glavaš et al.2019]. The MUSE benchmark is a high-quality dictionary containing up to 100k pairs of words and has now become a standard benchmark for cross-lingual alignment tasks [Lample et al.2018]. On this dataset, we conducted an experiment with 6 European languages: English, French, Spanish, Italian, Portuguese, and German. The MUSE dataset contains a direct translation for any pair of languages in this set. We also conducted an experiment with the XLING dataset with a more diverse set of languages: Croatian (HR), English (EN), Finnish (FI), French (FR), German (DE), Italian (IT), Russian (RU), and Turkish (TR). In this set of languages, we have languages coming from three different Indo-European branches, as well as two non-Indo-European languages (FI from Uralic and TR from Turkic family) [Glavaš et al.2019].

Implementation details.

To speed up the computation, we took a similar approach as AlauxGCJ19 and initialized the space alignment matrices with the Gromov-Wasserstein approach [Alvarez-Melis and Jaakkola2018] applied to the first 5k vectors (AlauxGCJ19 used the first 2k vectors), with a fixed regularization parameter. After the initialization, we use the space alignment matrices to map all languages into the language space of the first language: multiplying all language embedding vectors with the corresponding space alignment matrix realigns all languages into a common language space. In this common space, we compute the Wasserstein barycenter of all projected language distributions. The support locations for the barycenter are initialized with random samples from a standard normal distribution.

The next step is to compute the optimal transport plans from the barycenter distribution to all language distributions. After obtaining the optimal transport plans $\pi_i$ from the barycenter to every language $\ell_i$, we can infer translations from $\ell_i$ to $\ell_j$ from the composed coupling (see Figure 2). The coupling is not necessarily a permutation matrix; its entries indicate the probability with which a word corresponds to another.
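The composition step can be sketched as follows (our interpretation of Figure 2; the normalization by the barycenter weights is an assumption of this sketch, and all names are illustrative):

```python
import numpy as np

def translate(pi_i, pi_j, q):
    """Compose plans through the barycenter and pick the most likely word.

    pi_i: (n_i, m) plan from language i to the barycenter
    pi_j: (n_j, m) plan from language j to the barycenter
    q   : (m,) barycenter weights
    Returns, for every word k in language i, the index of its translation in j.
    """
    scores = (pi_i / q) @ pi_j.T   # (n_i, n_j) correspondence scores
    return scores.argmax(axis=1)
```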

Method and code for computing accuracies of bilingual translation pairs are borrowed from AlvarezMelisJaakkola18.

Baselines

We compare the results of our method on MUSE with the following methods: 1) Procrustes Matching with CSLS as the similarity function to infer translation pairs [Lample et al.2018]; 2) the state-of-the-art bilingual alignment method, Gromov-Wasserstein alignment (GW) [Alvarez-Melis and Jaakkola2018]; 3) the state-of-the-art multilingual alignment method (UMH) [Alaux et al.2019]; 4) bilingual alignment with multilingual auxiliary information (MAT+MPPA) [Taitelbaum et al.2019b]; and 5) unsupervised multilingual word embeddings trained with multilingual adversarial training (MAT+MPSR) [Chen and Cardie2018].

We compare the results of our method on the XLING dataset with Ranking-Based Optimization (RCSLS) [Joulin et al.2018], the solution to the Procrustes problem (PROC, along with its bootstrapping variant PROC-B) [Artetxe et al.2018b, Lample et al.2018, Glavaš et al.2019], Gromov-Wasserstein alignment (GW) [Alvarez-Melis and Jaakkola2018], and VECMAP [Artetxe et al.2018b]. RCSLS and PROC are supervised methods, while GW and VECMAP are both unsupervised.

The translation accuracies for Gromov-Wasserstein are computed using the source code released by [Alvarez-Melis and Jaakkola2018]. For the multilingual alignment method (UMH) [Alaux et al.2019] and the two multilingual adversarial methods [Chen and Cardie2018, Taitelbaum et al.2019b], we compare directly against the accuracies reported in [Glavaš et al.2019].

Results

Table 2 depicts precision@1 results for all bilingual tasks on the MUSE benchmark [Lample et al.2018]. For most language pairs, our method Barycenter Alignment (BA) outperforms all current unsupervised methods. Our barycenter approach infers a "potential universal language" from the input languages; transiting through that universal language, we infer translations for all pairs of languages. From the experimental results in Table 2, we can see that our approach has a clear advantage and benefits from using information from all languages.

Table 3 shows mean average precision (MAP) for 10 bilingual tasks on the XLING dataset [Glavaš et al.2019].

Our method is capable of incorporating both the semantic and syntactic information of a word. For example, the top ten predicted English translations for the German word München are "Cambridge, Oxford, Munich, London, Birmingham, Bristol, Edinburgh, Dublin, Hampshire, Baltimore". In this case, we hit the correct English translation Munich. More important in this example is that all predicted English words are names of cities; our method is therefore capable of inferring that München is a city name. Another illustrative example is the German word sollte, which means "should" in English. The top ten predicted translations are "would, could, will, supposed, should, might, meant, needs, expected, able". The top predictions for sollte are syntactically consistent: would, could, will, should, and might are all modal verbs. In Table 1, we show several German-to-English translations and compare the results to direct bilingual Gromov-Wasserstein alignment.

Ablation Study

In this section, we show the impact of some of our design choices and hyperparameters. There are a few parameters to choose during the barycenter computation.

One of the parameters is the number of support locations. In theory, the optimal barycenter distribution could have as many support locations as the sum of the numbers of support locations of all input distributions. In Figure 3, we show the impact of the number of support locations on translation performance. Let $n_i$ be the number of words in language $\ell_i$. We picked the three most representative cases: the average number of words $\bar{n}$, twice the average number of words $2\bar{n}$, and the total number of words $\sum_i n_i$. As we increase the number of support locations for the barycenter distribution, Figure 3 shows that translation performance improves. However, increasing the number of support locations also makes the algorithm more costly. Therefore, to balance accuracy and computational complexity, we use 10000 support locations (twice the average number of words).

We also conducted a set of experiments to determine whether the inclusion of distant languages increases bilingual translation accuracy. Excluding the two non-Indo-European languages, Finnish and Turkish, we calculated the barycenter of Croatian (HR), English (EN), French (FR), German (DE), Italian (IT), and Russian (RU). Figure 4 contains results for common bilingual pairs. Orange and red bars show the bilingual translation accuracy when translating through the barycenter of all languages including Finnish and Turkish, whereas blue and green bars indicate the accuracy of translations that use the barycenter of the six Indo-European languages.

Figure 3: Accuracies for language pairs using different numbers of support locations for the barycenter. In our experimental setup, we have 5000 words in each language.
Figure 4: This graph shows the accuracy of bilingual translation pairs. The red and orange bars indicate translation accuracy using the barycenter of all languages (HR, EN, FI, FR, DE, RU, IT, TR), while the blue and green bars correspond to the barycenter of (HR, EN, FR, DE, IT, RU).
German Word | English Translation | GW Prediction (top 10) | BA Prediction (top 10)
münchen | munich | london; dublin; oxford; birmingham; wellington; glasgow; edinburgh; cambridge; toronto; hamilton | cambridge; oxford; munich; london; birmingham; bristol; edinburgh; dublin; hampshire; baltimore
sollte | should | would; could; might; will; needs; supposed; put; willing; wanted; meant | would; could; will; supposed; should; might; meant; needs; expected; able
lassen | let | make; to; able; allow; find; seek; prove; encourage; identify; try | let; make; able; allow; to; continue; choose; give; encourage; decide
rahmen | frame; framework | joint; programs; aimed; panel; exercise; sponsored; conducted; program; initiative; launched | programme; program; part; programmes; conducted; framework; programme; part; joint; funded
dieser | this | another; whose; the; which; a; itself; having; that; of; thus | the; of; whose; which; that; another; this; one; latter; its
Table 1: German-to-English translation predictions comparing 1) GW alignment used to infer a direct bilingual mapping and 2) the Barycenter Alignment (BA) method described in Algorithm 1. We show the top-10 translations of both methods.
it-es it-fr it-pt it-en it-de es-it es-fr es-pt es-en es-de
GW 92.63 91.78 89.47 80.38 74.03 89.35 91.78 92.82 81.52 75.03
GW (reported) - - - 75.2 - - - - 80.4 -
PA 87.3 87.1 81.0 76.9 67.5 83.5 85.8 87.3 82.9 68.3
MAT+MPPA 87.5 87.7 81.2 77.7 67.1 83.7 85.9 86.8 83.5 66.5
MAT+MPSR 88.2 88.1 82.3 77.4 69.5 84.5 86.9 87.8 83.7 69.0
UMH 87.0 86.7 80.4 79.9 67.5 83.3 85.1 86.3 85.3 68.7
BA 92.32 92.54 90.14 81.84 75.65 89.38 92.19 92.85 83.5 78.25
fr-it fr-es fr-pt fr-en fr-de pt-it pt-es pt-fr pt-en pt-de
GW 88.0 90.3 87.44 82.2 74.18 90.62 96.19 89.9 81.14 74.83
GW (reported) - - - 82.1 - - - - - -
PA 83.2 82.6 78.1 82.4 69.5 81.1 91.5 84.3 80.3 63.7
MAT+MPPA 83.1 83.6 78.7 82.2 69.0 82.6 92.2 84.6 80.2 63.7
MAT+MPSR 83.5 83.9 79.3 81.8 71.2 82.6 92.7 86.3 79.9 65.7
UMH 82.5 82.7 77.5 83.1 69.8 81.1 91.7 83.6 82.1 64.4
BA 88.38 90.77 88.22 83.23 76.63 91.08 96.04 91.04 82.91 76.99
en-it en-es en-fr en-pt en-de de-it de-es de-fr de-pt de-en Average
GW 80.84 82.35 81.67 83.03 71.73 75.41 72.18 77.14 74.38 72.85 82.84
GW (reported) 78.9 81.7 81.3 - 71.9 - - - - 72.8 78.04
PA 77.3 81.4 81.1 79.9 73.5 69.5 67.7 73.3 59.1 72.4 77.98
MAT+MPPA 78.5 82.2 82.7 81.3 74.5 70.1 68.0 75.2 61.1 72.9 78.47
MAT+MPSR 78.8 82.5 82.4 81.5 74.8 72.0 69.6 76.7 63.2 72.9 79.29
UMH 78.9 82.5 82.7 82.0 75.1 68.7 67.2 73.5 59.0 75.5 78.46
BA 81.45 84.26 82.94 84.65 74.08 78.09 75.93 78.93 77.18 75.85 84.24
Table 2: Precision@1 (percentage) for all pairs of languages in the multilingual alignment problem over English, German, French, Spanish, Italian, and Portuguese. The method achieving the highest precision for each bilingual pair is highlighted in bold. Methods compared in the table: Procrustes Matching with the CSLS metric to infer translation pairs (PA) [Lample et al.2018]; Gromov-Wasserstein alignment (GW) [Alvarez-Melis and Jaakkola2018], reproduced by us using their source code; GW (reported), the results reported by AlvarezMelisJaakkola18 in the paper; bilingual alignment with multilingual auxiliary information (MAT+MPPA) [Taitelbaum et al.2019b]; multilingual pseudo-supervised refinement (MAT+MPSR) [Chen and Cardie2018]; and the multilingual alignment method (UMH) [Alaux et al.2019].
en-de it-fr hr-ru en-hr de-fi tr-fr ru-it fi-hr tr-hr tr-ru
PROC (1k) 0.458 0.615 0.269 0.225 0.264 0.215 0.360 0.187 0.148 0.168
PROC (5k) 0.544 0.669 0.372 0.336 0.359 0.338 0.474 0.294 0.259 0.290
PROC-B 0.521 0.665 0.348 0.296 0.354 0.305 0.466 0.263 0.210 0.230
RCSLS (1k) 0.501 0.637 0.291 0.267 0.288 0.247 0.383 0.214 0.170 0.191
RCSLS (5k) 0.580 0.682 0.404 0.375 0.395 0.375 0.491 0.321 0.285 0.324
VECMAP 0.521 0.667 0.376 0.268 0.302 0.341 0.463 0.280 0.223 0.200
GW 0.667 0.751 0.683 0.123 0.454 0.485 0.508 0.634 0.482 0.295
BA 0.683 0.799 0.667 0.646 0.508 0.513 0.512 0.601 0.481 0.355
Table 3: Mean average precision (MAP) of several current methods on the XLING dataset.

5 Related Work

We briefly describe related work on supervised and unsupervised techniques for bilingual and multilingual alignment.

5.1 Supervised bilingual alignment

MikolovLS13 formulated the problem of aligning word embeddings as a quadratic optimization problem to find an explicit linear map $\mathbf{W}$ between the word embeddings $\mathbf{X}$ and $\mathbf{Y}$ of two languages:

$$\min_{\mathbf{W}}\ \|\mathbf{X}\mathbf{W} - \mathbf{Y}\|_F^2. \tag{14}$$

This setting is supervised since the assignment that maps words of one language to another is known. Later, [Xing et al.2015] showed that the results can be improved by restricting the linear mapping to be orthogonal, which corresponds to the orthogonal Procrustes problem [Schönemann1966]. We can evaluate the similarity between words using several similarity functions, including the Euclidean norm, cosine similarity, inverted softmax [Smith et al.2017], and Cross-Domain Similarity Local Scaling (CSLS) [Lample et al.2018, Joulin et al.2018].
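For concreteness, here is a compact sketch of CSLS retrieval as commonly defined in [Lample et al.2018] (our summary, not code from this paper):

```python
import numpy as np

def csls(S, k=10):
    """CSLS scores from a cosine-similarity matrix S (source x target).

    CSLS(x, y) = 2 cos(x, y) - r_src(x) - r_tgt(y), where r_* is the mean
    cosine similarity to the k nearest neighbors in the other language.
    Penalizing "hub" words sharpens nearest-neighbor retrieval.
    """
    r_src = np.sort(S, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # (n_src, 1)
    r_tgt = np.sort(S, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # (1, n_tgt)
    return 2 * S - r_src - r_tgt
```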

5.2 Unsupervised bilingual alignment

In the unsupervised setting, the assignment matrix $\mathbf{P}$ between words is unknown, and we resort to the joint optimization:

$$\min_{\mathbf{W},\ \mathbf{P} \in \mathcal{P}_n}\ \|\mathbf{X}\mathbf{W} - \mathbf{P}\mathbf{Y}\|_F^2. \tag{15}$$

As a result, the optimization problem becomes non-convex and therefore more challenging. The problem can be relaxed into a (convex) semidefinite program [Maron et al.2016]; this method provides high accuracy at the expense of high computational complexity, and is therefore not suitable for large-scale problems. Another way to solve (15) is block coordinate relaxation, where we iteratively optimize each variable with the other fixed. When $\mathbf{W}$ is fixed, optimizing $\mathbf{P}$ can be done with the Hungarian algorithm in $O(n^3)$ time (which is prohibitive since $n$ is the number of words). cuturi2014fast developed an efficient approximation (with $O(n^2)$ cost per iteration) achieved by adding a negative-entropy regularizer, which was employed by ZhangLLS17b. A convex relaxation of the quadratic assignment problem was also developed by GraveJB19. Observing that both the orthogonal map and the permutation preserve the intra-language distances, AlvarezMelisJaakkola18 cast the unsupervised bilingual alignment problem as a Gromov-Wasserstein optimal transport problem, and give a solution with a minimal number of hyper-parameters to tune.
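The entropic approximation is only a few lines of numpy; the following is the standard Sinkhorn iteration (an illustrative sketch of Cuturi's algorithm, not this paper's code):

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iters=200):
    """Entropic OT plan between histograms a (n,) and b (m,) with cost M (n, m)."""
    K = np.exp(-M / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    return u[:, None] * K * v[None, :]   # the (approximate) transport plan
```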

5.3 Multilingual Alignment

In multilingual alignment, we seek to align multiple languages together while taking advantage of inter-dependencies to ensure consistency among them. A common approach consists of mapping each language to a common space $\mathcal{Z}$ by minimizing some loss function $\ell$:

$$\min_{\{\mathbf{Q}_i \in \mathbb{O}_d\}}\ \sum_{i=1}^{N} \ell(\mathbf{X}_i\mathbf{Q}_i, \mathcal{Z}). \tag{16}$$

The common space may be a pivot language such as English [Smith et al.2017, Lample et al.2018, Joulin et al.2018]. nakashole-flauger-2017-knowledge and AlauxGCJ19 showed that constraining coherent word alignments between triplets of nearby languages improves the quality of induced bilingual lexicons. ChenCardie18 extended the work of [Lample et al.2018] to the multilingual case using adversarial algorithms. TaitelbaumCG19b extended Procrustes Matching to the multi-pairwise case [Taitelbaum et al.2019b], and also designed an improved representation of the source word using auxiliary languages [Taitelbaum et al.2019a].

6 Conclusion

In this paper, we discussed previous attempts to solve the multilingual alignment problem, compared the similarities between these approaches, and pointed out problems with the existing formulation. We then proposed a new method that uses the Wasserstein barycenter as a pivot for the multilingual alignment problem. At the core of our algorithm lies a new inference method based on optimal transport plans to predict the similarity between words. The barycenter can be interpreted as a virtual universal language that captures information from all languages in our dataset. The proposed algorithm improves the accuracy of pairwise translations compared to the current state-of-the-art methods.

References

7 Appendix

7.1 Barycenter Convergence

Each iteration of our barycenter algorithm optimizes the barycenter weights and then the support locations. In this section, we investigate the speed of convergence of our approach. In Figure 5, we plot the translation accuracy for all language pairs as a function of the number of iterations. As we can see, the accuracy stabilizes after a small number of iterations.

Figure 5: Translation accuracies for language pairs as a function of the number of iterations; the accuracy stabilizes once the barycenter converges.

7.2 Hierarchical Approach

Training a joint barycenter for all languages captures shared information across them. We hypothesized that distant languages might impair performance for some language pairs. To leverage existing knowledge of similarities between languages, we constructed a language tree whose topology is consistent with the widely agreed phylogeny of the Indo-European languages (see, e.g., [Gray and Atkinson2003]). We set each non-leaf node to be the barycenter of all its children. We traverse the language tree in depth-first order and store the mappings corresponding to each edge. The translation between any two languages can then be inferred by traversing the tree and multiplying the mappings along the connecting path, as the sketch below illustrates.
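The following sketch (assumed data layout and helper names, not the authors' code) composes the orthogonal edge mappings along the unique path between two leaf languages, inverting the downward half by transposition since the maps are orthogonal:

```python
import numpy as np

def compose_translation(tree, src, dst, d):
    """Compose edge maps along the unique src -> dst path in a language tree.

    tree: dict mapping child -> (parent, Q_child_to_parent), Q of shape (d, d)
    Row-vector convention: x in the child space maps to x @ Q in the parent space.
    """
    # Walk up from src to the root, recording the composed map at each ancestor.
    up, node = {src: np.eye(d)}, src
    while node in tree:
        parent, Q = tree[node]
        up[parent] = up[node] @ Q
        node = parent
    # Walk up from dst until we hit the lowest common ancestor found above.
    node, down = dst, np.eye(d)
    while node not in up:
        parent, Q = tree[node]
        down = down @ Q
        node = parent
    # src -> ancestor, then ancestor -> dst (orthogonal inverse = transpose).
    return up[node] @ down.T
```

For instance, with a hypothetical tree where "es" and "pt" share a Romance-barycenter parent, compose_translation(tree, "es", "pt", d) returns the map carrying Spanish embeddings into the Portuguese space.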

Table 4 shows the results for the hierarchical barycenter. We see that the hierarchical approach yields slightly better performance for some language pairs, particularly for closely related languages such as Spanish and Portuguese or Italian and Spanish. For most language pairs, it does not improve over the weighted barycenter. More details about the hierarchical approach are available in the first author's thesis [Lian, Xin2020].

GW benchmark unweighted hierarchical weighted
P@1 P@10 P@1 P@10 P@1 P@10 P@1 P@10
it-es 92.63 98.05 91.52 97.95 92.49 98.11 92.32 98.01
it-fr 91.78 98.11 91.27 97.89 92.61 98.14 92.54 98.14
it-pt 89.47 97.35 88.22 97.25 89.89 97.87 90.14 97.84
it-en 80.38 93.3 79.23 93.18 79.54 93.21 81.84 93.77
it-de 74.03 93.66 74.41 92.96 73.06 92.26 75.65 93.82
es-it 89.35 97.3 88.8 97.05 89.73 97.5 89.38 97.43
es-fr 91.78 98.21 91.34 98.03 91.74 98.29 92.19 98.33
es-pt 92.82 98.32 91.83 98.18 92.65 98.35 92.85 98.35
es-en 81.52 94.79 82.43 94.63 81.63 94.27 83.5 95.48
es-de 75.03 93.98 76.47 93.73 74.86 93.73 78.25 94.74
fr-it 88.0 97.5 87.55 97.19 88.35 97.64 88.38 97.71
fr-es 90.3 97.97 90.18 97.68 90.66 98.04 90.77 98.04
fr-pt 87.44 96.89 86.7 96.79 88.35 97.11 88.22 97.08
fr-en 82.2 94.19 81.26 94.25 80.89 94.13 83.23 94.42
fr-de 74.18 92.94 74.07 92.73 74.44 92.68 76.63 93.41
pt-it 90.62 97.61 89.36 97.75 90.59 98.17 91.08 97.96
pt-es 96.19 99.29 95.36 99.08 96.04 99.23 96.04 99.32
pt-fr 89.9 97.57 90.1 97.43 90.67 97.74 91.04 97.87
pt-en 81.14 94.17 81.42 94.14 81.42 93.86 82.91 94.64
pt-de 74.83 93.76 75.94 93.21 74.45 93.1 76.99 94.32
en-it 80.84 93.97 79.88 93.93 80.25 93.76 81.45 94.58
en-es 82.35 94.67 83.05 94.79 81.62 94.82 84.26 95.28
en-fr 81.67 94.24 81.86 94.33 81.42 93.99 82.94 94.67
en-pt 83.03 94.45 82.72 94.64 82.25 94.79 84.65 95.29
en-de 71.73 90.48 72.92 90.76 71.88 90.42 74.08 91.46
de-it 75.41 94.3 76.4 93.87 75.19 93.65 78.09 94.52
de-es 72.18 92.64 74.21 92.6 73.58 92.48 75.93 93.83
de-fr 77.14 93.29 77.93 93.61 77.14 93.51 78.93 93.77
de-pt 74.38 93.71 74.99 93.54 74.22 93.81 77.18 94.14
de-en 72.85 91.06 74.36 91.21 72.17 90.81 75.85 91.98
average 82.84 95.26 82.86 95.15 82.79 95.18 84.24 95.67
Table 4: Accuracy results for translation pairs between all pairs of languages for all evaluated methods. The GW benchmark column contains results from direct bilingual Gromov-Wasserstein alignment. Unweighted is the barycenter approach without optimizing the weights on the support locations. Hierarchical contains results obtained by traversing the tree edges and inferring translation mappings through hierarchical barycenters. The weighted column is what Algorithm 1 returns, optimizing both the support locations and the weights on the support.