Massively Multilingual Word Embeddings

02/05/2016 · Waleed Ammar et al. · Carnegie Mellon University and University of Washington

We introduce new methods for estimating and evaluating embeddings of words in more than fifty languages in a single shared embedding space. Our estimation methods, multiCluster and multiCCA, use dictionaries and monolingual data; they do not require parallel data. Our new evaluation method, multiQVEC-CCA, is shown to correlate better than previous ones with two downstream tasks (text categorization and parsing). We also describe a web portal for evaluation that will facilitate further research in this area, along with open-source releases of all our methods.







1 Introduction

Vector-space representations of words are widely used in statistical models of natural language. In addition to improving performance on standard monolingual NLP tasks, shared representations of words across languages offer intriguing possibilities [klementiev:12]. For example, in machine translation, translating a word never seen in parallel data may be overcome by seeking its vector-space neighbors, provided the embeddings are learned from both plentiful monolingual corpora and more limited parallel data. A second opportunity comes from transfer learning, in which models trained in one language can be deployed in other languages. While previous work has used hand-engineered features that are cross-linguistically stable as the basis for model transfer [zeman:08, mcdonald:11, tsvetkov14metaphor], automatically learned embeddings offer the promise of better generalization at lower cost [klementiev:12, hermann:14, guo:16]. We therefore conjecture that developing estimation methods for massively multilingual word embeddings (i.e., embeddings for words in a large number of languages) will play an important role in the future of multilingual NLP.

This paper builds on previous work in multilingual embeddings and makes the following contributions:

  • We propose two dictionary-based methods—multiCluster and multiCCA—for estimating multilingual embeddings which only require monolingual data and pairwise parallel dictionaries, and use them to train embeddings in 59 languages for which these resources are available (§2). Parallel corpora are not required but can be used when available. We show that the proposed methods perform well under several settings and evaluation metrics.

  • We adapt qvec [tsvetkov:15], a method for evaluating monolingual word embeddings, to evaluating multilingual embeddings (multiqvec). We also develop a new evaluation method, multiqvec-cca, which addresses a theoretical shortcoming of multiqvec (§3). Compared to other intrinsic metrics used in the literature, we show that both multiqvec and multiqvec-cca achieve better correlations with extrinsic tasks.

  • We develop an easy-to-use web portal for evaluating arbitrary multilingual embeddings using a suite of intrinsic and extrinsic metrics (§4). Together with the provided benchmarks, the evaluation portal will substantially facilitate future research in this area.

2 Estimating Multilingual Embeddings

Let $\mathcal{L}$ be a set of languages, and let $\mathcal{V}^m$ be the set of surface forms (word types) in language $m \in \mathcal{L}$. Let $\mathcal{V} = \bigcup_{m \in \mathcal{L}} \mathcal{V}^m$. Our goal is to estimate a partial embedding function $E : \mathcal{L} \times \mathcal{V} \rightharpoonup \mathbb{R}^d$ (allowing a surface form that appears in two languages to have different vectors in each). We would like to estimate this function such that: (i) semantically similar words in the same language are nearby, (ii) translationally equivalent words in different languages are nearby, and (iii) the domain of the function covers as many words in $\mathcal{V}$ as possible.

We use distributional similarity in a monolingual corpus to model semantic similarity between words in the same language. For cross-lingual similarity, either a parallel corpus or a bilingual dictionary can be used. Our methods focus on the latter, in some cases extracting a dictionary from a parallel corpus. (To do this, we align the corpus using fast_align [dyer:13] in both directions. The estimated parameters of the word translation distributions are used to select pairs whose translation probability exceeds a threshold $\tau$, which trades off dictionary recall and precision. We fixed $\tau$ early on based on manual inspection of the resulting dictionaries.)
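The dictionary-extraction step can be sketched as follows. The exact selection rule is elided above, so the both-directions threshold test here, and the probability tables `p_fwd`/`p_bwd` standing in for the parameters estimated by fast_align, are assumptions:

```python
def extract_dictionary(p_fwd, p_bwd, tau=0.1):
    """Select translation pairs (u, v) whose estimated forward and
    backward translation probabilities both exceed the threshold tau.
    p_fwd maps (u, v) -> p(v|u); p_bwd maps (v, u) -> p(u|v)."""
    return {(u, v) for (u, v), p in p_fwd.items()
            if p > tau and p_bwd.get((v, u), 0.0) > tau}
```

Raising `tau` yields a higher-precision, lower-recall dictionary, which is the trade-off the text describes.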

Most previous work on multilingual embeddings only considered the bilingual case, $|\mathcal{L}| = 2$. We focus on estimating multilingual embeddings for $|\mathcal{L}| > 2$ and describe two novel dictionary-based methods (multiCluster and multiCCA). We then describe our baselines: a variant of coulmance:15 and guo:16 (henceforth referred to as multiSkip), and the translation-invariance matrix factorization method [gardner:15]. (We developed multiSkip independently of coulmance:15 and guo:16; one important distinction is that multiSkip is trained only on parallel corpora, while coulmance:15 and guo:16 also use monolingual corpora.)

2.1 MultiCluster

In this approach, we decompose the problem into two simpler subproblems: $E = E_{\text{cluster}} \circ C$, where $C$ deterministically maps words to multilingual clusters and $E_{\text{cluster}}$ assigns a vector to each cluster. We use bilingual dictionaries to find clusters of translationally equivalent words, then use distributional similarities of the clusters in monolingual corpora from all languages in $\mathcal{L}$ to estimate an embedding for each cluster. By forcing words from different languages in a cluster to share the same embedding, we create anchor points in the vector space to bridge languages.

More specifically, we define the clusters as the connected components in a graph where nodes are (language, surface form) pairs and edges correspond to translation entries in the dictionary. We assign arbitrary IDs to the clusters, replace each word token in each monolingual corpus with the corresponding cluster ID, and concatenate all modified corpora. The resulting corpus consists of multilingual cluster ID sequences. We can then apply any monolingual embedding estimator; here, we use the skipgram model from mikolov:13.
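The clustering step above can be sketched with a union-find over (language, surface form) nodes; the helper names are illustrative:

```python
def cluster_ids(dictionary_edges):
    """Map each (language, word) node to a cluster ID, where clusters are
    the connected components induced by translation-dictionary edges."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in dictionary_edges:
        parent[find(u)] = find(v)  # union the two components

    ids = {}
    return {node: ids.setdefault(find(node), len(ids)) for node in parent}


def rewrite_corpus(tokens, lang, ids):
    """Replace each token with its cluster ID; out-of-dictionary words
    keep a singleton (language, word) ID of their own."""
    return [ids.get((lang, w), (lang, w)) for w in tokens]
```

The rewritten, concatenated corpora of cluster IDs can then be fed to any monolingual skipgram implementation.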

2.2 MultiCCA

Our proposed method (multiCCA) extends the bilingual embeddings of faruqui:14. First, they use monolingual corpora to train monolingual embeddings for each language independently ($E_m$ and $E_n$), capturing semantic similarity within each language separately. Then, using a bilingual dictionary $\mathcal{D}_{m,n}$, they use canonical correlation analysis (CCA) to estimate linear projections from the ranges of the monolingual embeddings $E_m$ and $E_n$, yielding a bilingual embedding $E_{m,n}$. The linear projections are defined by the matrices $T_{m \to m,n}$ and $T_{n \to m,n}$; they are selected to maximize the correlation between $T_{m \to m,n} E_m(u)$ and $T_{n \to m,n} E_n(v)$ where $(u, v) \in \mathcal{D}_{m,n}$. The bilingual embedding is then defined as $E_{m,n}(m, u) = T_{m \to m,n} E_m(u)$ (and likewise for words in $n$).

In this work, we use a simple (in hindsight) extension to construct multilingual embeddings for more languages. We let the vector space of the initial (monolingual) English embeddings serve as the multilingual vector space, since English typically offers the largest corpora and wide availability of bilingual dictionaries. We then estimate projections from the monolingual embeddings of the other languages into the English space.

We start by estimating, for each $m \in \mathcal{L} \setminus \{en\}$, the two projection matrices $T_{m \to m,en}$ and $T_{en \to m,en}$; these are guaranteed to be non-singular. We then define the multilingual embedding as $E(en, u) = E_{en}(u)$ for English words, and $E(m, u) = T_{en \to m,en}^{-1}\, T_{m \to m,en}\, E_m(u)$ for $m \ne en$.
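A minimal numpy sketch of the two steps, assuming equal-dimensional monolingual embeddings so the CCA projections are square and invertible; the function names and the small regularizer `reg` are illustrative, not from the paper:

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """Full-rank CCA on row-paired matrices X, Y (one row per dictionary
    entry). Returns projection matrices (A, B) and canonical correlations,
    so that the columns of X @ A and Y @ B are maximally correlated."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)

    def whitener(M):
        C = M.T @ M / len(M) + reg * np.eye(M.shape[1])
        return np.linalg.inv(np.linalg.cholesky(C)).T  # W s.t. W.T C W = I

    Wx, Wy = whitener(X), whitener(Y)
    U, s, Vt = np.linalg.svd(Wx.T @ (X.T @ Y / len(X)) @ Wy)
    return Wx @ U, Wy @ Vt.T, s

def to_english_space(vectors_m, T_m, T_en):
    """multiCCA-style mapping: project language-m vectors into the shared
    CCA space with T_m, then back into the English space with T_en^{-1}."""
    return vectors_m @ T_m @ np.linalg.inv(T_en)
```

Here `X` holds language-m vectors and `Y` the paired English vectors for the dictionary entries; English words keep their original monolingual vectors.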

2.3 MultiSkip

luong:15 proposed a method for estimating bilingual embeddings which only makes use of parallel data; it extends the skipgram model of mikolov:13. The skipgram model defines a distribution over words $c$ that occur in a context window (of size $N$) of a word $w$:

$$p(c \mid w) = \frac{\exp\big(u_c^\top v_w\big)}{\sum_{c' \in \mathcal{V}} \exp\big(u_{c'}^\top v_w\big)}$$

In practice, this distribution can be estimated using a noise contrastive estimation approximation [gutmann:12] while maximizing the log-likelihood:

$$\sum_{i} \sum_{\substack{j:\, |i-j| \le N \\ j \ne i}} \log p(w_j \mid w_i)$$

where $i$ ranges over the indices of words in the monolingual corpus.

To establish a bilingual embedding, with a parallel corpus of source language $m$ and target language $n$, luong:15 estimate conditional models of words in both source and target positions. The source positions are selected as sentential contexts (similar to monolingual skipgram), and the bilingual contexts come from aligned words. The bilingual objective is to maximize:

$$\sum_{i} \sum_{\substack{k:\, |k| \le N \\ k \ne 0}} \Big( \log p(w_{i+k} \mid w_i) + \log p\big(w'_{a_i + k} \mid w_i\big) \Big) + \sum_{j} \sum_{\substack{k:\, |k| \le N \\ k \ne 0}} \Big( \log p\big(w'_{j+k} \mid w'_j\big) + \log p\big(w_{a'_j + k} \mid w'_j\big) \Big)$$

where $i$ and $j$ are the indices of the source and target tokens in the parallel corpus, respectively, and $a_i$ and $a'_j$ are the positions of words that align to $w_i$ and $w'_j$ in the other language. It is easy to see how this method can be extended to more than two languages by summing the bilingual objectives for all available parallel corpora.
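The sampled-approximation estimator underlying these objectives can be sketched in numpy for the monolingual case; this is a simplified negative-sampling variant (uniform noise, no subsampling), not the exact estimator used in the paper:

```python
import numpy as np

def train_skipgram(corpus, dim=16, window=2, neg=5, lr=0.05, epochs=30, seed=0):
    """Minimal skipgram-with-negative-sampling trainer on a token list.
    Returns (vocab, input_vectors)."""
    rng = np.random.default_rng(seed)
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # word vectors
    W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context vectors
    for _ in range(epochs):
        for i, w in enumerate(corpus):
            lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
            for j in range(lo, hi):
                if j == i:
                    continue
                # one true context word plus `neg` uniformly sampled negatives
                targets = np.concatenate(([idx[corpus[j]]],
                                          rng.integers(0, len(vocab), neg)))
                labels = np.zeros(neg + 1)
                labels[0] = 1.0
                v = W_in[idx[w]]
                u = W_out[targets]
                p = 1.0 / (1.0 + np.exp(-(u @ v)))  # sigmoid scores
                g = p - labels                       # gradient wrt logits
                W_in[idx[w]] -= lr * (g @ u)
                # note: duplicate negative indices update only once here;
                # acceptable for a sketch
                W_out[targets] -= lr * np.outer(g, v)
    return vocab, W_in
```

The bilingual extension additionally draws positive pairs from aligned positions in a parallel corpus, summing the same per-pair update over both directions.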

2.4 Translation-invariance

gardner:15 proposed that multilingual embeddings should be translation invariant. Consider a matrix $X$ which summarizes the pointwise mutual information statistics between pairs of words in monolingual corpora, and let $UV^\top$ be a low-rank decomposition of $X$, where $U, V \in \mathbb{R}^{|\mathcal{V}| \times d}$. Now, consider another matrix $A$ which summarizes bilingual alignment frequencies in a parallel corpus. gardner:15 solve for a low-rank decomposition which approximates $X$ as well as its transformations $XA$, $A^\top X$, and $A^\top X A$, by defining the following objective:

$$\min_{U, V}\; \|X - UV^\top\|_F^2 + \|XA - UV^\top\|_F^2 + \|A^\top X - UV^\top\|_F^2 + \|A^\top X A - UV^\top\|_F^2$$

The multilingual embeddings are then taken to be the rows of the matrix $U$.
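Since minimizing a sum of squared Frobenius errors of one low-rank matrix against several fixed targets is equivalent to a truncated SVD of the targets' mean, the factorization can be sketched as follows. This closed form is a simplification; the actual solver of gardner:15 may differ:

```python
import numpy as np

def translation_invariant_factorization(X, A, d):
    """Rank-d factorization U V^T that jointly approximates X, XA,
    A^T X, and A^T X A under summed squared Frobenius error, obtained
    as the truncated SVD of the mean of the four matrices."""
    M = (X + X @ A + A.T @ X + A.T @ X @ A) / 4.0
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    root = np.sqrt(s[:d])
    return U[:, :d] * root, Vt[:d].T * root  # rows of U are the embeddings
```

The equivalence follows from $\sum_k \|M_k - Z\|_F^2 = \sum_k \|M_k - \bar{M}\|_F^2 + K\|\bar{M} - Z\|_F^2$, so the rank-$d$ minimizer is the best rank-$d$ approximation of the mean $\bar{M}$.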

3 Evaluating Multilingual Embeddings

One of our contributions is to streamline the evaluation of multilingual embeddings. In addition to assessing goals (i–iii) stated in §2, a good evaluation metric should also (iv) show good correlation with performance in downstream applications and (v) be computationally efficient.

It is easy to evaluate coverage (iii) by counting the number of words covered by an embedding function in a closed vocabulary. Intrinsic evaluation metrics are generally designed to be computationally efficient (v) but may or may not meet goals (i, ii, iv). Although intrinsic evaluations will never be perfect, a standard set of evaluation metrics will help drive research. By design, standard (monolingual) word similarity tasks meet (i), while cross-lingual word similarity tasks and word translation tasks meet (ii). We propose another evaluation method (multiqvec-cca), designed to simultaneously assess goals (i, ii). Multiqvec-cca builds on qvec [tsvetkov:15], a recently proposed monolingual evaluation method, addressing its fundamental flaws and extending it to multiple languages. To assess the degree to which these evaluation metrics meet (iv), in §5 we perform a correlation analysis to see which intrinsic metrics best correlate with downstream task performance; that is, we evaluate the evaluation metrics.

3.1 Word similarity

Word similarity datasets such as WordSim-353 [agirre:09] and MEN [bruni:14] provide human judgments of semantic similarity. By ranking word pairs by the cosine similarity of their vectors and by their empirical similarity judgments, a rank correlation can be computed that assesses how well the estimated vectors capture human intuitions about semantic relatedness.

Some previous work on bilingual and multilingual embeddings used monolingual word similarity to evaluate embeddings (e.g., faruqui:14). This approach is limited because it cannot measure the degree to which embeddings from different languages are similar (ii). For this paper, we report results on an English word similarity task, the Stanford RW (rare words) dataset [luong:13], as well as a combination of several cross-lingual word similarity datasets [camacho-collados:15].
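The word-similarity protocol (Spearman's rank correlation between cosine scores and human judgments) can be sketched as follows; the tie handling is simplistic and the data in the usage example is illustrative:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (ties get arbitrary consecutive ranks)."""
    def ranks(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    ra, rb = ranks(np.asarray(a, float)), ranks(np.asarray(b, float))
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

def word_similarity_score(emb, judgments):
    """judgments: iterable of (u, v, human_score); emb: word -> vector.
    Pairs not covered by the embeddings are skipped, as is standard."""
    cos, gold = [], []
    for u, v, g in judgments:
        if u in emb and v in emb:
            x, y = emb[u], emb[v]
            cos.append(float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))
            gold.append(g)
    return spearman(cos, gold)
```

Note that skipping uncovered pairs is exactly the coverage confound discussed later in §5: two embedding sets with different vocabularies may be scored on different test instances.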

3.2 Word translation

This task directly assesses the degree to which translationally equivalent words in different languages are nearby in the embedding space. The evaluation data consists of word pairs which are known to be translationally equivalent. The score for one word pair $(u, v)$, both of which are covered by an embedding, is 1 if $v = \arg\max_{v' \in \mathcal{V}^n_{\text{eval}}} \cos\big(E(m, u), E(n, v')\big)$, where $\mathcal{V}^n_{\text{eval}}$ is the set of words of language $n$ in the evaluation dataset and $\cos$ is the cosine similarity function. Otherwise, the score for this word pair is 0. The overall score is the average score for all word pairs covered by the embedding function. This is a variant of the method used by mikolov:13c to evaluate bilingual embeddings.
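This scoring rule can be sketched directly as a cosine nearest-neighbour search over the target-language words in the evaluation set:

```python
import numpy as np

def word_translation_accuracy(src_vecs, tgt_vecs, gold_pairs):
    """src_vecs/tgt_vecs map words to vectors; gold_pairs is a list of
    known translation pairs (u, v). A pair scores 1 iff v is the
    cosine-nearest neighbour of u among target words in the eval set."""
    tgt_words = sorted({v for _, v in gold_pairs if v in tgt_vecs})
    T = np.stack([tgt_vecs[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    correct = total = 0
    for u, v in gold_pairs:
        if u not in src_vecs or v not in tgt_vecs:
            continue  # pairs not covered by the embedding are skipped
        q = src_vecs[u] / np.linalg.norm(src_vecs[u])
        correct += tgt_words[int(np.argmax(T @ q))] == v
        total += 1
    return correct / total if total else 0.0
```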

3.3 Correlation-based evaluation

We introduce qvec-cca, an intrinsic evaluation measure of the quality of word embeddings. Our method is an improvement of qvec, a monolingual evaluation based on the alignment of embeddings to a matrix of features extracted from a linguistic resource [tsvetkov:15]. We review qvec, and then describe qvec-cca.


The main idea behind qvec is to quantify the linguistic content of word embeddings by maximizing the correlation with a manually annotated linguistic resource. Let $N$ be the number of common words in the vocabulary of the word embeddings and the linguistic resource. To quantify the semantic content of embeddings, a semantic linguistic matrix $S \in \mathbb{R}^{P \times N}$ is constructed from a semantic database, with a column vector $s_w$ for each word $w$. Each word vector is a distribution of the word over $P$ linguistic properties, based on annotations of the word in the database. Let $X \in \mathbb{R}^{D \times N}$ be the embedding matrix with every row a dimension vector $x_d \in \mathbb{R}^N$; $D$ denotes the dimensionality of the word embeddings. Then, $S$ and $X$ are aligned to maximize the cumulative correlation between the aligned dimensions of the two matrices. Specifically, let $A \in \{0, 1\}^{D \times P}$ be a matrix of alignments such that $a_{dp} = 1$ iff $x_d$ is aligned to $s_p$, otherwise $a_{dp} = 0$. If $r(x_d, s_p)$ is the Pearson's correlation between vectors $x_d$ and $s_p$, then qvec is defined as:

$$\text{qvec} = \max_{A} \sum_{d=1}^{D} \sum_{p=1}^{P} r(x_d, s_p)\, a_{dp}$$

The constraint $\sum_{p} a_{dp} \le 1$ warrants that one distributional dimension is aligned to at most one linguistic dimension.

qvec has been shown to correlate strongly with downstream semantic tasks [tsvetkov:15]. However, it suffers from two major weaknesses. First, it is not invariant to linear transformations of the embeddings' basis, whereas the bases in word embeddings are generally arbitrary [szegedy2013intriguing]. Second, a sum of correlations produces an unnormalized score: the more dimensions in the embedding matrix, the higher the score. This precludes comparison of models of different dimensionality. qvec-cca simultaneously addresses both problems.


To measure correlation between the embedding matrix $X$ and the linguistic matrix $S$, instead of cumulative dimension-wise correlation we employ CCA. CCA finds two sets of basis vectors, one for $X^\top$ and the other for $S^\top$, such that the correlations between the projections of the matrices onto these basis vectors are maximized. Formally, CCA finds a pair of basis vectors $a$ and $b$ such that

$$\text{qvec-cca} = \mathrm{CCA}(X^\top, S^\top) = \max_{a, b}\; r\big(X^\top a,\; S^\top b\big)$$

Thus, qvec-cca ensures invariance to the matrices' bases' rotation, and since it is a single correlation, it produces a score in $[-1, 1]$. Both qvec and qvec-cca rely on a matrix of linguistic properties constructed from a manually crafted linguistic resource. We extend both methods to multilingual evaluations, multiqvec and multiqvec-cca, by constructing the linguistic matrix using supersense tag annotations for English [semcor], Danish [martinezalonsoetal2015supersenses, martinezalonsoetal2016], and Italian [montemagni2003building].
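A minimal sketch of the qvec-cca computation as a first canonical correlation; rows here are words (i.e., the matrices are transposed relative to the notation above), and the small regularizer `reg` is an implementation assumption, not part of the definition:

```python
import numpy as np

def qvec_cca(X, S, reg=1e-8):
    """First canonical correlation between an embedding matrix X
    (n_words x dim) and a linguistic-property matrix S
    (n_words x n_properties), rows aligned by word."""
    X = X - X.mean(0)
    S = S - S.mean(0)

    def whitener(M):
        C = M.T @ M / len(M) + reg * np.eye(M.shape[1])
        return np.linalg.inv(np.linalg.cholesky(C)).T  # W s.t. W.T C W = I

    Wx, Ws = whitener(X), whitener(S)
    corrs = np.linalg.svd(Wx.T @ (X.T @ S / len(X)) @ Ws, compute_uv=False)
    return float(corrs[0])
```

Unlike qvec's sum of dimension-wise correlations, this score is unaffected by rotating the embedding basis and stays bounded regardless of dimensionality.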

3.4 Extrinsic tasks

In order to evaluate how useful the word embeddings are for a downstream task, we use the embedding vector as a dense feature representation of each word in the input, and deliberately remove any other feature available for this word (e.g., prefixes, suffixes, part-of-speech). For each task, we train one model on the aggregate training data available for several languages, and evaluate on the aggregate evaluation data in the same set of languages. We apply this for multilingual document classification and multilingual dependency parsing.

For document classification, we follow klementiev:12 in using the RCV corpus of newswire text, and train a classifier which differentiates between four topics. While most previous work used this data only in a bilingual setup, we simultaneously train the classifier on documents in seven languages (Danish, German, English, Spanish, French, Italian and Swedish), and evaluate on the development/test sections for those languages. For this task, we report the average classification accuracy on the test set.

For dependency parsing, we train the stack-LSTM parser of dyer:15 on a subset of the languages in the universal dependencies v1.1, and test on the same languages, reporting unlabeled attachment scores. We remove all part-of-speech and morphology features from the data, and prevent the model from optimizing the word embeddings used to represent each word in the corpus, thereby forcing the parser to rely completely on the provided (pretrained) embeddings as the token representation. Although omitting other features (e.g., parts of speech) hurts the performance of the parser, it emphasizes the contribution of the word embeddings being studied.

4 Evaluation Portal

In order to facilitate future research on multilingual word embeddings, we developed a web portal to enable researchers who develop new estimation methods to evaluate them using a suite of evaluation tasks. The portal serves the following purposes:

  • Download the monolingual and bilingual data we used to estimate multilingual embeddings in this paper,

  • Download standard development/test data sets for each of the evaluation metrics to help researchers working in this area report trustworthy and replicable results (except for the original RCV documents, which are restricted by the Reuters license and cannot be republished; all other data is available for download),

  • Upload arbitrary multilingual embeddings, scan which languages are covered by the embeddings, allow the user to pick among the compatible evaluation tasks, and receive evaluation scores for the selected tasks, and

  • Register a new evaluation data set or a new evaluation metric via the github repository which mirrors the backend of the web portal.

5 Experiments

Our experiments are designed to show two primary sets of results: (i) how well the proposed intrinsic evaluation metrics correlate with downstream tasks (§5.1) and (ii) which estimation methods work best according to each metric (§5.2). The data used for training and evaluation are available for download on the evaluation portal.

5.1 Correlations between intrinsic and extrinsic evaluation metrics

In this experiment, we consider four intrinsic evaluation metrics (cross-lingual word similarity, word translation, multiqvec and multiqvec-cca) and two extrinsic evaluation metrics (multilingual document classification and multilingual parsing).


For the cross-lingual word similarity task, we use disjoint subsets of the en-it MWS353 dataset [leviant:15] for development (308 word pairs) and testing (307 word pairs). For the word translation task, we use Wiktionary to extract a development set (647 translations) and a test set (647 translations) of translationally-equivalent word pairs in en-it, en-da and da-it. For both multiqvec and multiqvec-cca, we used disjoint subsets of the multilingual (en, da, it) supersense tag annotations described in §3 for development (12,513 types) and testing (12,512 types).

For the document classification task, we use the multilingual RCV corpus (en, it, da). For the dependency parsing task, we use the universal dependencies v1.1 [universal:v1_1] in three languages (en, da, it).


To estimate correlations between the proposed intrinsic evaluation metrics and downstream task performance, we train a total of 17 different multilingual embeddings for three languages (English, Italian and Danish). To compute the correlations, we evaluate each of the 17 embeddings (12 multiCluster embeddings, 1 multiCCA embedding, 1 multiSkip embedding, 2 translation-invariance embeddings) according to each of the six evaluation metrics (4 intrinsic, 2 extrinsic). (The 102 = 17 × 6 values used to compute Pearson's correlation coefficient are provided in the supplementary material.)

intrinsic metric \ extrinsic task    document classification    dependency parsing
word similarity    0.386    0.007
word translation    0.066    -0.292
multiqvec    0.635    0.444
multiqvec-cca    0.896    0.273
Table 1: Correlations between intrinsic evaluation metrics (rows) and downstream task performance (columns).


Table 1 shows Pearson's correlation coefficients of eight (intrinsic metric, extrinsic metric) pairs. Although each of the two proposed methods, multiqvec and multiqvec-cca, correlates best with a different extrinsic task, we establish (i) that intrinsic methods previously used in the literature (cross-lingual word similarity and word translation) correlate poorly with downstream tasks, and (ii) that the intrinsic methods proposed in this paper (multiqvec and multiqvec-cca) correlate better with both downstream tasks, compared to cross-lingual word similarity and word translation. (Although supersense annotations exist for other languages, the annotations are inconsistent across languages and may not be publicly available, which is a disadvantage of the multiqvec and multiqvec-cca metrics. Therefore, we recommend that future multilingual supersense annotation efforts use the same set of supersense tags used in other languages. If the word embeddings are primarily needed for encoding syntactic information, one could use tag dictionaries based on the universal POS tag set [petrov:12] instead of supersense tags.)

Task multiCluster multiCCA
dependency parsing 48.4 [72.1] 48.8 [69.3]
doc. classification 90.3 [52.3] 91.6 [52.6]
mono. wordsim 14.9 [71.0] 43.0 [71.0]
cross. wordsim 12.8 [78.2] 66.8 [78.2]
word translation 30.0 [38.9] 83.6 [31.8]
mono. qvec 7.6 [99.6] 10.7 [99.0]
multiqvec 8.3 [86.4] 8.7 [87.0]
mono. qvec-cca 53.8 [99.6] 63.4 [99.0]
multiqvec-cca 37.4 [86.4] 42.0 [87.0]
Table 2: Results for multilingual embeddings that cover 59 languages. Each row corresponds to one of the embedding evaluation metrics we use (higher is better). Each column corresponds to one of the embedding estimation methods we consider; i.e., numbers in the same row are comparable. Numbers in square brackets are coverage percentages.

5.2 Evaluating multilingual estimation methods

We now turn to evaluating the four estimation methods described in §2. We use the proposed methods (i.e., multiCluster and multiCCA) to train multilingual embeddings in 59 languages for which bilingual translation dictionaries are available (the 59-language set is {bg, cs, da, de, el, en, es, fi, fr, hu, it, sv, zh, af, ca, iw, cy, ar, ga, zu, et, gl, id, ru, nl, pt, la, tr, ne, lv, lt, tg, ro, is, pl, yi, be, hy, hr, jw, ka, ht, fa, mi, bs, ja, mg, tl, ms, uz, kk, sr, mn, ko, mk, so, uk, sl, sw}). In order to compare our methods to baselines which use parallel data (i.e., multiSkip and translation-invariance), we also train multilingual embeddings in a smaller set of 12 languages for which high-quality parallel data are available (the 12-language set is {bg, cs, da, de, el, en, es, fi, fr, hu, it, sv}).

Training data:

We use Europarl en-xx parallel data for the set of 12 languages. We obtain en-xx bilingual dictionaries from two different sources. For the set of 12 languages, we extract the bilingual dictionaries from the Europarl parallel corpora. For the remaining 47 languages, dictionaries were formed by translating the 20k most common words in the English monolingual corpus with Google Translate, ignoring translation pairs with identical surface forms and multi-word translations.

Evaluation data:

Monolingual word similarity uses the MEN dataset [bruni:14] as a development set and Stanford's Rare Words dataset [luong:13] as a test set. For the cross-lingual word similarity task, we aggregate the RG-65 datasets in six language pairs (fr-es, fr-de, en-fr, en-es, en-de, de-es). For the word translation task, we use Wiktionary to extract translationally-equivalent word pairs to evaluate multilingual embeddings for the set of 12 languages. Since Wiktionary-based translations do not cover all 59 languages, we use Google Translate to obtain en-xx bilingual dictionaries to evaluate the embeddings of the 59 languages. For qvec and qvec-cca, we split the English supersense annotations used in tsvetkov:15 into a development set and a test set. For multiqvec and multiqvec-cca, we use supersense annotations in English, Italian and Danish. For the document classification task, we use the multilingual RCV corpus in seven languages (da, de, en, es, fr, it, sv). For the dependency parsing task, we use the universal dependencies v1.1 in twelve languages (bg, cs, da, de, el, en, es, fi, fr, hu, it, sv).


All word embeddings in the following results are 512-dimensional vectors. Methods which indirectly use skipgram (i.e., multiCCA, multiSkip, and multiCluster) are trained using 10 epochs of stochastic gradient descent, and use a context window of size 5. The translation-invariance method uses a context window of size 3 (training translation-invariance embeddings with larger context window sizes using the Matlab implementation provided by gardner:15 is computationally challenging). We only estimate embeddings for words/clusters which occur 5 times or more in the monolingual corpora. In a postprocessing step, all vectors are normalized to unit length. MultiCluster uses a maximum cluster size of 1,000 and 10,000 for the sets of 12 and 59 languages, respectively. In the English tasks (monolingual word similarity, qvec, qvec-cca), skipgram embeddings [mikolov:13] and multiCCA embeddings give identical results (since we project words in other languages into the English vector space, which is estimated using the skipgram model). The software used to train all embeddings, as well as the trained embeddings, are available for download on the evaluation portal. (URLs to software libraries on GitHub are redacted to comply with the double-blind reviewing of CoNLL.)

We note that intrinsic evaluation of word embeddings (e.g., word similarity) typically ignores test instances which are not covered by the embeddings being studied. When the vocabulary used in two sets of word embeddings is different, which is often the case, the intrinsic evaluation score for each set may be computed based on a different set of test instances, which may bias the results in unexpected ways. For instance, if one set of embeddings only covers frequent words while the other set also covers infrequent words, the scores of the first set may be inflated because frequent words appear in many different contexts and are therefore easier to estimate than infrequent words. To partially address this problem, we report the coverage of each set of embeddings in square brackets. When the difference in coverage is large, we repeat the evaluation using only the intersection of vocabularies covered by all embeddings being evaluated. Extrinsic evaluations are immune to this problem because the score is computed based on all test instances regardless of the coverage.

Results [59 languages].

We train the proposed dictionary-based estimation methods (multiCluster and multiCCA) for 59 languages, and evaluate the trained embeddings according to nine different metrics in Table 2. The results show that, when trained on a large number of languages, multiCCA consistently outperforms multiCluster according to all evaluation metrics. Note that most differences in coverage between multiCluster and multiCCA are relatively small.

It is worth noting that the mainstream approach of estimating one vector representation per word type (rather than word token) ignores the fact that the same word may have different semantics in different contexts. The multiCluster method exacerbates this problem by estimating one vector representation per cluster of translationally equivalent words. The added semantic ambiguity severely hurts the performance of multiCluster with 59 languages, but it is still competitive with 12 languages (see below).

Task multiCluster multiCCA multiSkip invariance
dependency parsing 61.0 [70.9] 58.7 [69.3] 57.7 [68.9] 59.8 [68.6]
document classification 92.1 [48.1] 92.1 [62.8] 90.4 [45.7] 91.1 [31.3]
monolingual word similarity 38.0 [57.5] 43.0 [71.0] 33.9 [55.4] 51.0 [23.0]
multilingual word similarity 58.1 [74.1] 66.6 [78.2] 59.5 [67.5] 58.7 [63.0]
word translation 43.7 [45.2] 35.7 [53.2] 46.7 [39.5] 63.9 [30.3]
monolingual qvec 10.3 [98.6] 10.7 [99.0] 8.4 [98.0] 8.1 [91.7]
multiqvec 9.3 [82.0] 8.7 [87.0] 8.7 [87.0] 5.3 [74.7]
monolingual qvec-cca 62.4 [98.6] 63.4 [99.0] 58.9 [98.0] 65.8 [91.7]
multiqvec-cca 43.3 [82.0] 41.5 [87.0] 36.3 [75.6] 46.2 [74.7]
Table 3: Results for multilingual embeddings that cover Bulgarian, Czech, Danish, Greek, English, Spanish, German, Finnish, French, Hungarian, Italian and Swedish. Each row corresponds to one of the embedding evaluation metrics we use (higher is better). Each column corresponds to one of the embedding estimation methods we consider; i.e., numbers in the same row are comparable. Numbers in square brackets are coverage percentages.

Results [12 languages].

We compare the proposed dictionary-based estimation methods to parallel text-based methods in Table 3. The ranking of the four estimation methods is not consistent across all evaluation metrics. This is unsurprising since each metric evaluates different traits of word embeddings, as detailed in 3. However, some patterns are worth noting in Table 3.

In five of the evaluations (including both extrinsic tasks), the best performing method is a dictionary-based one proposed in this paper. In the remaining four intrinsic evaluations, the best performing method is the translation-invariance method. MultiSkip ranks last in five evaluations, and never ranks first. Since our implementation of multiSkip does not make use of monolingual data, it only learns from contexts observed in parallel corpora and misses the opportunity to learn from the much larger monolingual corpora. Trained for 12 languages, multiCluster is competitive in four evaluations (and ranks first in three).

We note that multiCCA consistently achieves better coverage than the translation-invariance method. For intrinsic measures, this confounds the performance comparison. A partial solution is to test only on word types for which all four methods have a vector; this subset is in no sense a representative sample of the vocabulary. In this comparison (provided in the supplementary material), we find a similar pattern of results, though multiCCA outperforms the translation-invariance method on the monolingual word similarity task. Also, the gap (between multiCCA and the translation-invariance method) reduces to 0.7 in monolingual qvec-cca and 2.5 in multiqvec-cca.

6 Related Work

There is a rich body of literature on bilingual embeddings, including work on machine translation [zou:13, hermann:14, cho2014learning, luong:15, luong2015addressing, inter alia], cross-lingual dependency parsing [guo:15, guo:16], and cross-lingual document classification [klementiev:12, gouws:14, kocisky:14]. (hermann:14 showed that the bicvm method can be extended to more than two languages, but the released software library only supports bilingual embeddings.) alrfou:13 trained word embeddings for more than 100 languages, but the embeddings of each language are trained independently (i.e., embeddings of words in different languages do not share the same vector space). Word clusters are a related form of distributional representation; cross-lingual distributional representations have also been proposed in the clustering literature [och:99, tackstrom2012cross]. haghighi:08 used CCA to learn bilingual lexicons from monolingual corpora.

7 Conclusion

We proposed two dictionary-based estimation methods for multilingual word embeddings, multiCCA and multiCluster, and used them to train embeddings for 59 languages. We characterized important shortcomings of qvec, a method previously used to evaluate monolingual embeddings, and proposed an improved metric, multiqvec-cca. Both multiqvec and multiqvec-cca obtain better correlations with downstream tasks compared to intrinsic methods previously used in the literature. Finally, in order to help future research in this area, we created a web portal where users can upload their multilingual embeddings and easily evaluate them on nine evaluation metrics, with two modes of operation (development and test) to encourage sound experimentation practices.


Waleed Ammar is supported by the Google fellowship in natural language processing. Part of this material is based upon work supported by a subcontract with Raytheon BBN Technologies Corp. under DARPA Prime Contract No. HR0011-15-C-0013. This work was supported in part by the National Science Foundation through award IIS-1526745. We thank Manaal Faruqui, Wang Ling, Kazuya Kawakami, Matt Gardner, Benjamin Wilson and the anonymous reviewers of the NW-NLP workshop for helpful comments. We are also grateful to Héctor Martínez Alonso for his help with Danish resources.