State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot cross-lingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective, freezing the parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs about the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.
Multilingual pre-training methods such as multilingual BERT (mBERT, Devlin et al., 2019) have been successfully used for zero-shot cross-lingual transfer (Pires et al., 2019; Lample and Conneau, 2019). These methods work by jointly training a transformer model (Vaswani et al., 2017) to perform masked language modeling (MLM) in multiple languages, which is then fine-tuned on a downstream task using labeled data in a single language—typically English. As a result of the multilingual pre-training, the model is able to generalize to other languages, even if it has never seen labeled data in those languages.
Such cross-lingual generalization ability is surprising, as there is no explicit cross-lingual term in the underlying training objective. In relation to this, Pires et al. (2019) hypothesized that:

> …having word pieces used in all languages (numbers, URLs, etc), which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space.

and that:

> …mBERT's ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation.
Anonymous (2019c) echoed this sentiment, and Wu and Dredze (2019) further observed that mBERT performs better in languages that share many subwords. As such, the current consensus on the cross-lingual generalization ability of mBERT rests on a combination of three factors: (i) shared vocabulary items that act as anchor points; (ii) joint training across multiple languages that spreads this effect; which ultimately yields (iii) deep cross-lingual representations that generalize across languages and tasks.
In this paper, we empirically test this hypothesis by designing an alternative approach that violates all of these assumptions. As illustrated in Figure 1, our method starts with a monolingual transformer trained with MLM, which we transfer to a new language by learning a new embedding matrix through MLM in the new language while freezing parameters of all other layers. This approach only learns new lexical parameters and does not rely on shared vocabulary items nor joint learning. However, we show that it is competitive with joint multilingual pre-training across standard zero-shot cross-lingual transfer benchmarks (XNLI, MLDoc, and PAWS-X).
We also experiment with a new Cross-lingual Question Answering Dataset (XQuAD), which consists of 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 (Rajpurkar et al., 2016) translated into ten languages by professional translators. Question answering as a task is a classic probe for language understanding. It has also been found to be less susceptible to annotation artifacts commonly found in other benchmarks (Kaushik and Lipton, 2018; Gururangan et al., 2018). We believe that XQuAD can serve as a more comprehensive benchmark to evaluate cross-lingual models and make this dataset publicly available at https://github.com/deepmind/XQuAD. Our results on XQuAD demonstrate that the monolingual transfer approach can be made competitive with jointly trained multilingual models by learning second language-specific transformations via adapter modules (Rebuffi et al., 2017).
Our contributions in this paper are as follows: (i) we propose a method to transfer monolingual representations to new languages in an unsupervised fashion (§2); this is particularly useful for low-resource languages, since many pre-trained models are currently available only in English; (ii) we show that neither a shared subword vocabulary nor joint multilingual training is necessary for zero-shot transfer, and find that the effective vocabulary size per language is an important factor for learning multilingual models (§3 and §4); (iii) we demonstrate that monolingual models learn semantic abstractions that generalize across languages (§5); and (iv) we present a new cross-lingual question answering dataset (§4).
In this section, we propose an approach to transfer a pre-trained monolingual model in one language (for which both task supervision and a monolingual corpus are available) to a second language (for which only a monolingual corpus is available). The method serves as a counterpoint to existing joint multilingual models, as it works by aligning new lexical parameters to a monolingually trained deep model.
As illustrated in Figure 1, our proposed method consists of four steps:
Pre-train a monolingual BERT (i.e. a transformer) in L1 with masked language modeling (MLM) and next sentence prediction (NSP) objectives on an unlabeled corpus.
Transfer the model to a new language L2 by learning new token embeddings, while freezing the transformer body, with the same training objectives (MLM and NSP) on an unlabeled corpus in L2.
Fine-tune the transformer for a downstream task using labeled data in L1, while keeping the token embeddings frozen.
Zero-shot transfer the resulting model to L2 by swapping the L1 token embeddings with the L2 embeddings learned in Step 2.
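The four steps above can be sketched schematically. The snippet below is a structural toy with numpy stand-ins for the parameter groups (no actual MLM training is performed); all names and sizes are ours, chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                      # hidden size (toy; BERT uses 768)

# Step 1: pre-train body + L1 token embeddings jointly (stand-in: random init).
body = {"layer_0": rng.normal(size=(DIM, DIM))}
emb_l1 = rng.normal(size=(32000, DIM))      # 32k-subword L1 vocabulary

# Step 2: learn L2 token embeddings with the transformer body frozen.
body_before = {k: v.copy() for k, v in body.items()}
emb_l2 = rng.normal(size=(32000, DIM))      # trained via MLM/NSP in practice
assert all(np.array_equal(body[k], body_before[k]) for k in body)  # frozen

# Step 3: fine-tune body + task head on L1 labeled data, embeddings frozen.
head = rng.normal(size=(DIM, 3))            # e.g. a 3-way NLI classifier
emb_l1_before = emb_l1.copy()
body["layer_0"] = body["layer_0"] + 0.01 * rng.normal(size=(DIM, DIM))
assert np.array_equal(emb_l1, emb_l1_before)                       # frozen

# Step 4: zero-shot transfer to L2 by swapping in the L2 embeddings.
model_l2 = {"embeddings": emb_l2, **body, "head": head}
```

The key invariant is what stays frozen at each step: the body in Step 2, the embeddings in Step 3, so that the L2 embeddings learned against the frozen body remain compatible with the fine-tuned model.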
We note that, unlike mBERT, we use a separate subword vocabulary for each language, trained on its respective monolingual corpus, so the model has no notion of shared subwords. However, the special [CLS], [SEP], [MASK], [PAD], and [UNK] symbols are shared across languages, and are fine-tuned in Step 3.
We observe further improvements on several downstream tasks using the following extensions to the above method.
The basic approach does not take into account the different word orders commonly found in different languages, as it reuses the L1 position embeddings for L2. We relax this restriction by learning a separate set of position embeddings for L2 in Step 2, along with the token embeddings. (We accordingly freeze the position embeddings in Step 3, and they are plugged in together with the token embeddings in Step 4.) We treat the [CLS] symbol as a special case. In the original implementation, BERT treats [CLS] as a regular word with its own position and segment embeddings, even though it always appears in the first position. We observe that this position embedding does not provide any extra capacity to the model, as it is always added to the same [CLS] embedding. Following this observation, we do not use any position or segment embeddings for the [CLS] symbol.
The transformer body in our proposed method is trained with only the L1 embeddings as its input layer, but is used with the L2 embeddings at test time. To make the model more robust to this mismatch, we add zero-mean Gaussian noise to the word, position, and segment embeddings during the fine-tuning step (Step 3).
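This noised fine-tuning step can be sketched as follows, assuming the three embedding components are summed as in BERT. The toy tensors and the `noised_embeddings` helper are our own; the standard deviation of 0.075 follows the value reported in Appendix A:

```python
import numpy as np

def noised_embeddings(token_emb, pos_emb, seg_emb, std=0.075, rng=None):
    # Add independent zero-mean Gaussian noise to each embedding component
    # before summing them, as done during fine-tuning (Step 3). The std of
    # 0.075 is the value reported in Appendix A.
    rng = rng or np.random.default_rng()
    noised = lambda x: x + rng.normal(0.0, std, size=x.shape)
    return noised(token_emb) + noised(pos_emb) + noised(seg_emb)

rng = np.random.default_rng(0)
tok = rng.normal(size=(16, 768))   # toy tensors: 16 positions, 768 dims
pos = rng.normal(size=(16, 768))
seg = rng.normal(size=(16, 768))
out = noised_embeddings(tok, pos, seg, rng=rng)
```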
We also investigate allowing the model to learn better deep representations of L2, while retaining the alignment with L1, using residual adapters (Rebuffi et al., 2017). Adapters are small task-specific bottleneck layers that are added between layers of a pre-trained model. During fine-tuning, the original model parameters are frozen, and only the parameters of the adapter modules are learned. In Step 2, when we transfer the transformer to L2, we add a feed-forward adapter module after the projection following multi-headed attention and after the two feed-forward layers in each transformer layer, similar to Houlsby et al. (2019). Note that the original transformer body is still frozen, and only the parameters of the adapter modules are trainable (in addition to the L2 embedding matrix).
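A residual adapter of the kind described above can be sketched as a bottleneck feed-forward layer with a skip connection. Sizes and initialization here are illustrative, not the exact Houlsby et al. (2019) configuration (which also includes layer normalization and near-zero initialization of the up-projection):

```python
import numpy as np

class Adapter:
    """Residual bottleneck adapter: down-project, ReLU, up-project, plus a
    skip connection. Only these weights (and the L2 embeddings) would be
    trained; the transformer body stays frozen. Sizes are illustrative."""
    def __init__(self, dim, bottleneck, rng):
        self.w_down = rng.normal(0.0, 0.01, size=(dim, bottleneck))
        self.w_up = rng.normal(0.0, 0.01, size=(bottleneck, dim))

    def __call__(self, h):
        inner = np.maximum(h @ self.w_down, 0.0)    # ReLU bottleneck
        return h + inner @ self.w_up                # residual connection

rng = np.random.default_rng(0)
adapter = Adapter(dim=768, bottleneck=64, rng=rng)
h = rng.normal(size=(4, 768))       # a batch of hidden states
out = adapter(h)                    # near-identity at initialization
```

The small-weight initialization makes the adapter start close to the identity function, so inserting it does not disturb the frozen pre-trained body at the beginning of training.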
Our goal is to evaluate the performance of different multilingual models in the zero-shot cross-lingual setting to better understand the source of their generalization ability. We describe the models that we compare (§3.1), the experimental setting (§3.2), and the results on three classification datasets: XNLI (§3.3), MLDoc (§3.4) and PAWS-X (§3.5). We discuss experiments on our new XQuAD dataset in §4. In all experiments, we fine-tune a pre-trained model using labeled training examples in English, and evaluate on test examples in other languages via zero-shot transfer.
We compare four main models in our experiments:
A multilingual BERT model trained jointly on 15 languages (all languages that are included in XNLI; Conneau et al., 2018b). This model is analogous to mBERT and closely related to other variants like XLM.
A multilingual BERT model trained jointly on two languages (English and another language). This serves to control the effect of having multiple languages in joint training. At the same time, it provides a joint system that is directly comparable to the monolingual transfer approach in §2, which also operates on two languages.
The method we described in §2 operates at the lexical level, and can be seen as a form of learning cross-lingual word embeddings that are aligned to a monolingual transformer body. In contrast, standard cross-lingual word embedding mappings first align monolingual lexical spaces and then learn a multilingual deep model on top of this shared space. We also include a method based on this alternative approach, where we train skip-gram embeddings for each language and map them to a shared space using VecMap (Artetxe et al., 2018); we use the orthogonal mode in VecMap and map all languages into English. We then train an English BERT model using MLM and NSP on top of the frozen mapped embeddings. The model is then fine-tuned using English labeled data while keeping the embeddings frozen. We zero-shot transfer to a new language by plugging in its respective mapped embeddings.
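The core of the orthogonal mapping mode can be sketched as the solution to the orthogonal Procrustes problem; this is a simplified stand-in for VecMap, which additionally performs normalization, dictionary induction, and self-learning:

```python
import numpy as np

def orthogonal_map(x, y):
    """Solve the orthogonal Procrustes problem min_W ||XW - Y||_F subject to
    W orthogonal, the core step of an orthogonal embedding mapping. x and y
    hold embeddings for a seed dictionary, with rows aligned."""
    u, _, vt = np.linalg.svd(x.T @ y)
    return u @ vt

# Synthetic check: recover a known rotation from "dictionary" pairs.
rng = np.random.default_rng(0)
true_w, _ = np.linalg.qr(rng.normal(size=(300, 300)))   # a random rotation
src = rng.normal(size=(1000, 300))                      # source embeddings
tgt = src @ true_w                                      # rotated targets
w = orthogonal_map(src, tgt)
```

Because the learned transformation is constrained to be orthogonal, it preserves distances in the monolingual space, which is exactly the restriction the discussion in §5 suggests may be too limiting for deeper tasks.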
Our method described in §2, using English as L1. We try multiple variants with different extensions.
Table 1 (excerpt): XNLI test accuracy.

| Model | Vocab | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| JointMulti | 32k voc | 79.0 | 71.5 | 72.2 | 68.5 | 66.7 | 66.9 | 66.5 | 58.4 | 64.4 | 66.0 | 62.3 | 66.4 | 59.1 | 50.4 | 56.9 | 65.0 |
| JointPair | Joint voc | 82.2 | 74.8 | 76.4 | 73.1 | 72.0 | 71.8 | 70.2 | 67.9 | 68.5 | 71.4 | 67.7 | 70.8 | 64.5 | 64.2 | 60.6 | 70.4 |
| MonoTrans | Token emb | 83.1 | 73.3 | 73.9 | 71.0 | 70.3 | 71.5 | 66.7 | 64.5 | 66.6 | 68.2 | 63.9 | 66.9 | 61.3 | 58.1 | 57.3 | 67.8 |
| + pos emb | | 83.8 | 74.3 | 75.1 | 71.7 | 72.6 | 72.8 | 68.8 | 66.0 | 68.6 | 69.8 | 65.7 | 69.7 | 61.1 | 58.8 | 58.3 | 69.1 |
We perform subword tokenization using the unigram model in SentencePiece (Kudo and Richardson, 2018). In order to understand the effect of sharing subwords across languages and the size of the vocabulary, we train each model with various settings. We train 4 different JointMulti models with a vocabulary of 32k, 64k, 100k, and 200k subwords. For JointPair, we train one model with a joint vocabulary of 32k subwords, learned separately for each language pair, and another one with a disjoint vocabulary of 32k subwords per language, learned on its respective monolingual corpus. The latter is directly comparable to MonoTrans in terms of vocabulary, in that it is restricted to two languages and uses the exact same disjoint vocabulary with 32k subwords per language. For CLWE, we use the same subword vocabulary and investigate two choices: (i) the number of embedding dimensions—300d (the standard in the cross-lingual embedding literature) and 768d (equivalent to the rest of the models); and (ii) the self-learning initialization—weakly supervised (based on identically spelled words, Søgaard et al., 2018) and unsupervised (based on the intralingual similarity distribution).
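To illustrate how a unigram-model tokenizer segments text, here is a minimal Viterbi decoder over a toy vocabulary with made-up log-probabilities; SentencePiece additionally learns the vocabulary and piece probabilities via EM, which this sketch omits:

```python
import math

def unigram_segment(text, logprob):
    """Viterbi decoding under a unigram LM: choose the piece sequence that
    maximizes the sum of piece log-probabilities. `logprob` maps each
    vocabulary piece to log p(piece)."""
    n = len(text)
    best = [(-math.inf, None)] * (n + 1)   # best (score, split point) per prefix
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(end):
            piece = text[start:end]
            if piece in logprob and best[start][0] > -math.inf:
                score = best[start][0] + logprob[piece]
                if score > best[end][0]:
                    best[end] = (score, start)
    pieces, pos = [], n                    # backtrack from the end
    while pos > 0:
        start = best[pos][1]
        pieces.append(text[start:pos])
        pos = start
    return pieces[::-1]

# Toy vocabulary with made-up log-probabilities.
vocab = {"un": -2.0, "super": -3.0, "vis": -3.5, "ed": -1.5,
         "supervised": -5.0, "u": -6.0, "n": -6.0}
pieces = unigram_segment("unsupervised", vocab)   # → ["un", "supervised"]
```

Note how the decoder prefers the single piece "supervised" (score -5.0) over "super" + "vis" + "ed" (score -8.0): longer pieces win whenever their probability justifies it, which is why a larger per-language vocabulary changes the granularity of segmentation.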
We use Wikipedia as our training corpus, similar to mBERT and XLM (Lample and Conneau, 2019), which we extract using the WikiExtractor tool (https://github.com/attardi/wikiextractor). We do not perform any lowercasing or normalization. When working with languages of different corpus sizes, we use the same upsampling strategy as Lample and Conneau (2019) for both the subword vocabulary learning and the pre-training.
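The upsampling strategy of Lample and Conneau (2019) samples languages from a smoothed multinomial, q_i ∝ p_i^α with α = 0.5; a small sketch of how this narrows the gap between high- and low-resource languages (the corpus sizes below are made up):

```python
import numpy as np

def smoothed_sampling_probs(corpus_sizes, alpha=0.5):
    # q_i ∝ p_i^alpha, where p_i is the share of language i in the combined
    # corpus; alpha < 1 upsamples low-resource languages.
    sizes = np.asarray(corpus_sizes, dtype=float)
    p = sizes / sizes.sum()
    q = p ** alpha
    return q / q.sum()

# Toy corpus sizes: a 100x size difference shrinks to 10x after smoothing.
probs = smoothed_sampling_probs([6_000_000, 60_000], alpha=0.5)
```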
Our implementation is based on the BERT code from Devlin et al. (2019). For adapters, we build on the code by Houlsby et al. (2019). We use the model architecture of BERT, similar to mBERT. We use the LAMB optimizer (You et al., 2019) and train on 64 TPUv3 chips for 250,000 steps using the same hyperparameters as You et al. (2019). We describe other training details in Appendix A. Our hyperparameter configuration is based on preliminary experiments on the development set of XNLI. We did not perform any exhaustive hyperparameter search, and use the exact same settings for all model variants, languages, and tasks.
We perform a single training and evaluation run for each model, and report results in the corresponding test set for each downstream task. For MonoTrans, we observe stability issues when learning language-specific position embeddings for Greek, Thai and Swahili. The second step would occasionally fail to converge to a good solution. For these three languages, we run Step 2 three times and pick the best model on the XNLI development set.
In natural language inference (NLI), given two sentences (a premise and a hypothesis), the goal is to decide whether there is an entailment, contradiction, or neutral relationship between them (Bowman et al., 2015). We train all models on the MultiNLI dataset (Williams et al., 2018) in English and evaluate on XNLI (Conneau et al., 2018b), a cross-lingual NLI dataset consisting of 2,500 development and 5,000 test instances translated from English into 14 languages.
We report our results on XNLI in Table 1, together with previous results from mBERT and XLM. (mBERT covers 102 languages and has a shared vocabulary of 110k subwords. XLM covers 15 languages and uses a larger model size with a shared vocabulary of 95k subwords, which contributes to its better performance.) We summarize our main findings below:
Our JointMulti results are comparable with similar models reported in the literature. Our best JointMulti model is substantially better than mBERT, and only one point worse (on average) than the unsupervised XLM model, which is larger in size.
Among the tested JointMulti variants, we observe that using a larger vocabulary size has a notable positive impact.
JointPair models with a joint vocabulary perform comparably with JointMulti. This shows that modeling more languages does not affect the quality of the learned representations (evaluated on XNLI).
The equivalent JointPair models with a disjoint vocabulary for each language perform better, which demonstrates that a shared subword vocabulary is not necessary for joint multilingual pre-training to work.
CLWE performs poorly. Even if it is competitive in English, it does not transfer as well to other languages. Larger dimensionalities and weak supervision improve CLWE, but its performance is still below other models.
The basic version of MonoTrans is only 2.5 points worse on average than the best model. Language-specific position embeddings and noised fine-tuning further reduce the gap to only 1 point. Adapters mostly improve performance, except for low-resource languages such as Urdu, Swahili, Thai, and Greek.
In subsequent experiments, we include results for all variants of MonoTrans and JointPair, the best CLWE variant (768d ident), and JointMulti with 32k and 200k voc. We include full results for all model variants in Appendix C.
Table 2 (excerpt): results on MLDoc and PAWS-X.

| JointMulti | 32k voc | 92.6 | 81.7 | 75.8 | 85.4 | 71.5 | 66.6 | 78.9 | 91.9 | 83.8 | 83.3 | 82.6 | 75.8 | 83.5 |
| JointPair | Joint voc | 93.1 | 81.3 | 74.7 | 87.7 | 71.5 | 80.7 | 81.5 | 93.3 | 86.1 | 87.2 | 86.0 | 79.9 | 86.5 |
| MonoTrans | Token emb | 93.5 | 84.0 | 76.9 | 88.7 | 60.6 | 83.6 | 81.2 | 93.6 | 87.0 | 87.1 | 84.2 | 78.2 | 86.0 |
| + pos emb | | 93.6 | 79.7 | 75.7 | 86.6 | 61.6 | 83.0 | 80.0 | 94.3 | 87.3 | 87.6 | 86.3 | 79.0 | 86.9 |
In MLDoc (Schwenk and Li, 2018), the task is to classify documents into one of four genres: corporate/industrial, economics, government/social, and markets. The dataset is an improved version of the Reuters benchmark (Klementiev et al., 2012), and consists of 1,000 training and 4,000 test documents in 7 languages.
We show the results of our MLDoc experiments in Table 2. In this task, we observe that simpler models tend to perform better, and the best overall results are from CLWE. We believe that this can be attributed to: (i) the superficial nature of the task itself, as a model can rely on a few keywords to identify the genre of an input document without requiring any high-level understanding and (ii) the small size of the training set. Nonetheless, all of the four model families obtain generally similar results, corroborating our previous findings that joint multilingual pre-training and a shared vocabulary are not needed to achieve good performance.
PAWS is a dataset that contains pairs of sentences with a high lexical overlap (Zhang et al., 2019). The task is to predict whether each pair is a paraphrase or not. While the original dataset is only in English, PAWS-X (Yang et al., 2019) provides human translations into six languages.
We evaluate our models on this dataset and show our results in Table 2. Similar to experiments on other datasets, MonoTrans is competitive with the best joint variant, with a difference of only 0.6 points when we learn language-specific position embeddings.
Our classification experiments demonstrate that MonoTrans is competitive with JointMulti and JointPair, despite being multilingual at the embedding layer only (i.e. the transformer body is trained exclusively on English). One possible hypothesis for this behaviour is that existing cross-lingual benchmarks are flawed and solvable at the lexical level. For example, previous work has shown that models trained on MultiNLI—from which XNLI was derived—learn to exploit superficial cues in the data Gururangan et al. (2018).
To better understand the cross-lingual generalization ability of these models, we create a new Cross-lingual Question Answering Dataset (XQuAD). Question answering is a classic probe for natural language understanding Hermann et al. (2015) and has been shown to be less susceptible to annotation artifacts than other popular tasks Kaushik and Lipton (2018). In contrast to existing classification benchmarks, question answering requires identifying relevant answer spans in longer context paragraphs, thus requiring some degree of structural transfer across languages.
XQuAD consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (we choose SQuAD v1.1 to avoid translating unanswerable questions), together with their translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Both the context paragraphs and the questions were translated by professional human translators from Gengo (https://gengo.com). In order to facilitate easy annotation of answer spans, we choose the most frequent answer for each question and mark its beginning and end in the context paragraph using special symbols, instructing translators to keep these symbols in the relevant positions in their translations. Appendix B discusses the dataset in more detail.
Table 3 (excerpt): F1 scores on XQuAD.

| Model | Vocab | en | es | de | el | ru | tr | ar | vi | th | zh | hi | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| JointMulti | 32k voc | 79.3 | 59.5 | 60.3 | 49.6 | 59.7 | 42.9 | 52.3 | 53.6 | 49.3 | 50.2 | 42.3 | 54.5 |
| JointPair | Joint voc | 82.8 | 68.3 | 73.6 | 58.8 | 69.8 | 53.8 | 65.3 | 69.5 | 56.3 | 58.8 | 57.4 | 64.9 |
| MonoTrans | Token emb | 83.9 | 67.9 | 62.1 | 63.0 | 64.2 | 51.2 | 61.0 | 64.1 | 52.6 | 51.4 | 50.9 | 61.1 |
| + pos emb | | 84.7 | 73.1 | 65.9 | 66.5 | 66.2 | 16.2 | 59.5 | 65.8 | 51.5 | 56.4 | 19.3 | 56.8 |
We show scores on XQuAD in Table 3 (we include exact match scores in Appendix C). Similar to our findings on XNLI, the vocabulary size has a large impact on JointMulti, and JointPair models with disjoint vocabularies perform best. The gap between MonoTrans and the joint models is larger, but MonoTrans still performs surprisingly well given the nature of the task. We observe that learning language-specific position embeddings is helpful in most cases, but completely fails for Turkish and Hindi. Interestingly, the exact same pre-trained models (after Steps 1 and 2) do obtain competitive results on XNLI (§3.3). In contrast to results on previous tasks, adding adapters to allow the transferred monolingual model to learn higher-level abstractions in the new language significantly improves performance, resulting in a MonoTrans model that is comparable to the best joint system.
We demonstrate that sharing subwords across languages is not necessary for mBERT to work, contrary to a previous hypothesis by Pires et al. (2019). We also do not observe clear improvements by scaling the joint training to a large number of languages.
Rather than having a joint vs. disjoint vocabulary or two vs. multiple languages, we find that an important factor is the effective vocabulary size per language. When using a joint vocabulary, only a subset of the tokens is effectively shared, while the rest tends to occur in only one language. As a result, multiple languages compete for allocations in the shared vocabulary. We observe that multilingual models with larger vocabulary sizes obtain consistently better results. It is also interesting that our best results are generally obtained by the JointPair systems with a disjoint vocabulary, which guarantees that each language is allocated 32k subwords. As such, we believe that future work should treat the effective vocabulary size as an important factor.
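One simple way to gauge the effective vocabulary allocation in a joint model is to count which vocabulary items each language's corpus actually uses, and how many of those are shared. The helper below is our own illustration over toy token lists, not an analysis from the paper:

```python
from collections import Counter

def effective_vocab_sizes(tokenized_corpora):
    """Count how many vocabulary items each language actually uses, and
    which items are shared across languages. `tokenized_corpora` maps a
    language code to a list of subword tokens from its corpus."""
    used = {lang: set(toks) for lang, toks in tokenized_corpora.items()}
    counts = Counter(tok for pieces in used.values() for tok in pieces)
    shared = {tok for tok, c in counts.items() if c > 1}
    return {lang: len(pieces) for lang, pieces in used.items()}, shared

# Toy corpora (hypothetical subword sequences): numbers and URLs are shared,
# most other pieces occur in only one language.
sizes, shared = effective_vocab_sizes({
    "en": ["the", "cat", "2020", "http"],
    "de": ["die", "katze", "2020", "http"],
})
```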
MonoTrans is competitive even in the most challenging scenarios. This indicates that joint multilingual pre-training is not essential for cross-lingual generalization, suggesting that monolingual models learn linguistic abstractions that generalize across languages.
To get a better understanding of this phenomenon, we probe the representations of MonoTrans. As existing probing datasets are only available in English, we train monolingual representations in non-English languages and transfer them to English. We probe representations from the resulting English models with the Word in Context (WiC; Pilehvar and Camacho-Collados, 2019), Stanford Contextual Word Similarity (SCWS; Huang et al., 2012), and the syntactic evaluation (Marvin and Linzen, 2018) datasets.
We provide details of our experimental setup in Appendix D and show a summary of our results in Table 4. The results indicate that monolingual semantic representations learned from non-English languages transfer to English to a degree. On WiC, models transferred from non-English languages are comparable with models trained on English. On SCWS, while there are more variations, models trained on other languages still perform surprisingly well. In contrast, we observe larger gaps in the syntactic evaluation dataset. This suggests that transferring syntactic abstractions is more challenging than semantic abstractions. We leave a more thorough investigation of whether joint multilingual pre-training reduces to learning a lexical-level alignment for future work.
CLWE models—although similar in spirit to MonoTrans—are only competitive on the easiest and smallest task (MLDoc), and perform poorly on the more challenging ones (XNLI and XQuAD). While previous work has questioned evaluation methods in this research area (Glavaš et al., 2019; Artetxe et al., 2019), our results provide evidence that existing methods are not competitive in challenging downstream tasks and that mapping between two fixed embedding spaces may be overly restrictive. For that reason, we think that designing better integration techniques of CLWE to downstream models is an important future direction.
Humans learn continuously and accumulate knowledge throughout their lifetime. Existing multilingual models, in contrast, assume that training data for all languages is available in advance. The setting of transferring a monolingual model to other languages suits the scenario where one needs to incorporate new languages into an existing model while no longer having access to the original training data. Such a scenario is of significant practical interest, since models are often released without the data they were trained on, and our work provides insights for designing multilingual lifelong learning models.
A common approach to learning multilingual representations is based on cross-lingual word embedding mappings. These methods learn a set of monolingual word embeddings for each language and map them to a shared space through a linear transformation. Recent approaches perform this mapping with an unsupervised initialization based on heuristics (Artetxe et al., 2018) or adversarial training (Zhang et al., 2017; Conneau et al., 2018a), which is further improved through self-learning (Artetxe et al., 2017). The same approach has also been adapted for contextual representations (Schuster et al., 2019).
In contrast to the previous approach, which learns a shared multilingual space at the lexical level, state-of-the-art methods learn deep representations with a transformer. Most of these methods are based on mBERT. Extensions to mBERT include scaling it up and incorporating parallel data (Lample and Conneau, 2019), adding auxiliary pre-training tasks (Huang et al., 2019), and encouraging representations of translations to be similar (Anonymous, 2019c).
Concurrent to this work, Anonymous (2019b) propose a more complex approach to transfer a monolingual BERT to other languages that achieves results similar to ours. However, they find that post-hoc embedding learning from a random initialization does not work well. In contrast, we show that monolingual representations generalize well to other languages and that we can transfer to a new language by learning new subword embeddings. Concurrent to our work, Anonymous (2019a) also show that a shared vocabulary is not important for learning multilingual representations.
We compared state-of-the-art multilingual representation learning models and a monolingual model that is transferred to new languages at the lexical level. We demonstrated that these models perform comparably on standard zero-shot cross-lingual transfer benchmarks, indicating that neither a shared vocabulary nor joint pre-training are necessary in multilingual models. We also showed that a monolingual model trained on a particular language learns some semantic abstractions that are generalizable to other languages in a series of probing experiments. Our results and analysis contradict previous theories and provide new insights into the basis of the generalization abilities of multilingual models. To provide a more comprehensive benchmark to evaluate cross-lingual models, we also released the Cross-lingual Question Answering Dataset (XQuAD).
We thank Chris Dyer for helpful comments on an earlier draft of this paper and Tyler Liechty for assistance with datasets.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790–2799, Long Beach, California, USA. PMLR.
In contrast to You et al. (2019), we train with a sequence length of 512 from the beginning, instead of dividing training into two stages. For our proposed approach, we pre-train a single English model for 250k steps, and perform another 250k steps to transfer it to every other language.
For fine-tuning, we use Adam with a learning rate of 2e-5 and a batch size of 32, and train for 2 epochs. The rest of the hyperparameters follow Devlin et al. (2019). For adapters, we follow the hyperparameters employed by Houlsby et al. (2019). For our proposed model using noised fine-tuning, we set the mean of the Gaussian noise to 0 and the standard deviation to 0.075.
XQuAD consists of a subset of 240 context paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their translations into 10 other languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Table 5 comprises some statistics of the dataset, while Table 6 shows one example from it.
To guarantee the diversity of the dataset, we selected 5 context paragraphs at random from each of the 48 documents in the SQuAD v1.1 development set, and translated both the context paragraphs themselves as well as all their corresponding questions. The translations were done by professional human translators through the Gengo service (https://gengo.com). The translation workload was divided into 10 batches per language, which were submitted separately to Gengo. As a consequence, different parts of the dataset might have been translated by different translators, but we guaranteed that all paragraphs and questions from the same document were submitted in the same batch to keep their translations consistent. Translators were specifically instructed to transliterate all named entities to the target language following the conventions used in Wikipedia, from which the English context paragraphs in SQuAD originally come.
In order to facilitate easy annotations of answer spans, we chose the most frequent answer for each question and marked its beginning and end in the context paragraph through placeholder symbols (e.g. “this is *0* an example span #0# delimited by placeholders”). Translators were instructed to keep the placeholders in the relevant position in their translations, and had access to an online validator to automatically verify that the format of their output was correct.
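The placeholder scheme can be sketched as a pair of helper functions, one to mark a span before translation and one to recover it afterwards. The function names and the recovery logic are our own illustration of the format described above:

```python
import re

def mark_answer(context, start, end, idx=0):
    # Wrap the answer span context[start:end] with the placeholder symbols
    # used for XQuAD translation, e.g. "*0* ... #0#".
    return (context[:start] + f"*{idx}* " + context[start:end]
            + f" #{idx}#" + context[end:])

def extract_answer(marked, idx=0):
    # Recover the answer span and a clean context from a (translated)
    # marked paragraph.
    m = re.search(rf"\*{idx}\* (.*?) #{idx}#", marked, flags=re.S)
    answer = m.group(1)
    clean = marked.replace(f"*{idx}* ", "").replace(f" #{idx}#", "")
    return answer, clean

context = "this is an example span delimited by placeholders"
marked = mark_answer(context, 8, 23)          # span: "an example span"
answer, clean = extract_answer(marked)
```

Because translators keep the placeholders around the translated span, running the extraction on the translated paragraph yields both the translated answer and its character offsets in the clean translated context.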
We show the complete results for cross-lingual word embedding mappings and joint multilingual training on MLDoc and PAWS-X in Table 7. Table 8 reports exact match results on XQuAD, while Table 9 reports results for all cross-lingual word embedding mappings and joint multilingual training variants.
Table 8 (excerpt): exact match scores on XQuAD.

| Model | Vocab | en | es | de | el | ru | tr | ar | vi | th | zh | hi | avg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| JointMulti | 32k voc | 68.3 | 41.3 | 44.3 | 31.8 | 45.0 | 28.5 | 36.2 | 36.9 | 39.2 | 40.1 | 27.5 | 39.9 |
| JointPair | Joint voc | 71.7 | 47.8 | 57.6 | 38.2 | 53.4 | 35.0 | 47.4 | 49.7 | 44.3 | 47.1 | 38.8 | 48.3 |
| MonoTrans | Subword emb | 72.3 | 47.4 | 42.4 | 43.3 | 46.4 | 30.1 | 42.6 | 45.1 | 39.0 | 39.0 | 32.4 | 43.6 |
| + pos emb | | 72.9 | 54.3 | 48.4 | 47.3 | 47.6 | 6.1 | 41.1 | 47.6 | 38.6 | 45.0 | 9.0 | 41.6 |
As probing tasks are only available in English, we train monolingual models in each XNLI language and then align them to English. To control for the amount of data, we use 3M sentences for both pre-training and alignment in every language (we leave out Thai, Hindi, Swahili, and Urdu, as their corpus sizes are smaller than 3M sentences).
WiC is a binary classification task, which requires the model to determine whether the occurrences of a word in two contexts refer to the same or different meanings. SCWS requires estimating the semantic similarity of word pairs that occur in context. For WiC, we train a linear classifier on top of the fixed sentence pair representation. For SCWS, we obtain the contextual representations of the target word in each sentence by averaging its constituent word pieces, and calculate their cosine similarity.
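The SCWS scoring described above reduces to averaging the word-piece vectors of each target word and taking a cosine similarity. A sketch with random stand-ins for encoder outputs (real representations would come from the transferred model):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def target_word_repr(piece_vectors, target_piece_ids):
    # Contextual representation of a target word: average the vectors of
    # its constituent word pieces. `piece_vectors` is the (num_pieces, dim)
    # encoder output for one sentence.
    return piece_vectors[target_piece_ids].mean(axis=0)

rng = np.random.default_rng(0)
sent_a = rng.normal(size=(12, 768))   # stand-in encoder outputs, sentence A
sent_b = rng.normal(size=(9, 768))    # stand-in encoder outputs, sentence B
sim = cosine(target_word_repr(sent_a, [3, 4]),   # target split into 2 pieces
             target_word_repr(sent_b, [5]))      # target is a single piece
```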
We evaluate the same models on the syntactic probing dataset of Marvin and Linzen (2018), following the setup of Goldberg (2019). Given minimally different pairs of English sentences, the task is to identify which one is grammatical. Following Goldberg (2019), we feed each sentence into the model, masking the word in which it differs from its pair, and pick the sentence to which the masked language model assigns the higher probability mass. Similar to Goldberg (2019), we discard all sentence pairs from the Marvin and Linzen (2018) dataset that differ in more than one subword token. Table 10 reports the resulting coverage split into different categories, and we show the full results in Table 11.
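The probing procedure reduces to comparing the masked LM's score for the two candidate words; a sketch with a toy scorer standing in for the model (the `masked_logprob` interface is our own abstraction):

```python
def pick_grammatical(sentence_pair, diff_index, masked_logprob):
    # Score each sentence by the masked LM's log-probability of its own
    # word at the position where the pair differs, and pick the higher one.
    scores = [masked_logprob(tokens, diff_index, tokens[diff_index])
              for tokens in sentence_pair]
    return sentence_pair[scores.index(max(scores))]

def toy_scorer(tokens, i, word):
    # Hypothetical stand-in for log p(word | sentence with position i
    # masked): prefers the singular verb after a singular subject.
    return 0.0 if (tokens[1] == "cat" and word == "is") else -1.0

pair = (["the", "cat", "is", "here"], ["the", "cat", "are", "here"])
best = pick_grammatical(pair, 2, toy_scorer)
```

In the real evaluation, `masked_logprob` would query the transferred masked language model, and the probe counts how often the grammatical member of each pair wins.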
| Condition | Coverage |
| --- | --- |
| *Subject-verb agreement* | |
| Simple | 80 / 140 (57.1%) |
| In a sentential complement | 960 / 1680 (57.1%) |
| Short VP coordination | 480 / 840 (57.1%) |
| Long VP coordination | 320 / 400 (80.0%) |
| Across a prepositional phrase | 15200 / 22400 (67.9%) |
| Across a subject relative clause | 6400 / 11200 (57.1%) |
| Across an object relative clause | 17600 / 22400 (78.6%) |
| Across an object relative (no that) | 17600 / 22400 (78.6%) |
| In an object relative clause | 5600 / 22400 (25.0%) |
| In an object relative (no that) | 5600 / 22400 (25.0%) |
| *Reflexive anaphora* | |
| Simple | 280 / 280 (100.0%) |
| In a sentential complement | 3360 / 3360 (100.0%) |
| Across a relative clause | 22400 / 22400 (100.0%) |
| In a sentential complement | 99.0 | 65.7 | 94.0 | 92.1 | 62.7 | 98.3 | 80.7 | 74.1 | 89.7 | 71.5 | 78.9 | 79.6 | 80.7 |
| Short VP coordination | 100.0 | 64.8 | 66.9 | 69.8 | 64.4 | 77.9 | 60.2 | 88.8 | 76.7 | 73.3 | 62.7 | 64.4 | 70.0 |
| Long VP coordination | 96.2 | 58.8 | 53.4 | 60.0 | 67.5 | 62.5 | 59.4 | 92.8 | 62.8 | 75.3 | 62.5 | 64.4 | 65.4 |
| Across a prepositional phrase | 89.7 | 56.9 | 54.6 | 52.8 | 53.4 | 53.4 | 54.6 | 79.6 | 54.3 | 59.9 | 57.9 | 56.5 | 57.6 |
| Across a subject relative clause | 91.6 | 49.9 | 51.9 | 48.3 | 52.0 | 53.2 | 56.2 | 78.1 | 48.6 | 58.9 | 55.4 | 52.3 | 55.0 |
| Across an object relative clause | 79.2 | 52.9 | 56.2 | 53.3 | 52.4 | 56.6 | 57.0 | 63.1 | 52.3 | 59.0 | 54.9 | 54.5 | 55.7 |
| Across an object relative (no that) | 77.1 | 54.1 | 55.9 | 55.9 | 53.1 | 56.2 | 59.7 | 63.3 | 53.1 | 54.9 | 55.9 | 56.8 | 56.3 |
| In an object relative clause | 74.6 | 50.6 | 59.9 | 66.4 | 59.4 | 61.1 | 49.8 | 60.4 | 42.6 | 45.3 | 56.9 | 56.3 | 55.3 |
| In an object relative (no that) | 66.6 | 51.7 | 57.1 | 64.9 | 54.9 | 59.4 | 49.9 | 57.0 | 43.7 | 46.6 | 54.9 | 55.4 | 54.1 |
| In a sentential complement | 82.0 | 56.3 | 63.9 | 73.2 | 52.7 | 65.7 | 59.1 | 70.8 | 71.7 | 84.5 | 59.8 | 53.9 | 64.7 |
| Across a relative clause | 65.6 | 55.0 | 54.5 | 58.6 | 52.3 | 55.8 | 52.5 | 66.1 | 61.4 | 73.3 | 56.9 | 50.9 | 57.9 |