Bridging the gap of lexical features remains one of the most promising yet challenging directions for cross-lingual dependency parsing. Prior works W14-1613; guo-EtAl:2015:ACL-IJCNLP2 have shown that cross-lingual word embeddings can significantly improve transfer performance compared to delexicalized models mcdonald2011multi; mcdonald2013universal. These cross-lingual word embeddings are static in the sense that they do not change with the context (in this paper, we refer to these embeddings as static, as opposed to contextualized ones).
Recently, contextualized word embeddings derived from large-scale pre-trained language models NIPS2017_7209; peters2017semi; peters2018deep; devlin2018bert have demonstrated dramatic superiority over traditional static word embeddings, establishing new state-of-the-art results in various monolingual NLP tasks suzana2018deep; schuster2018cross. This success has also been observed in dependency parsing che2018towards. The great potential of these contextualized embeddings inspires us to extend their power to cross-lingual scenarios.
Several recent works learn contextualized cross-lingual embeddings by training cross-lingual language models from scratch with parallel data as supervision, and have been demonstrated to be effective on several downstream tasks schuster2018cross; mulcaire2019polyglot; lample2019cross. These methods are typically resource-demanding and time-consuming (for instance, XLM was trained on 64 Volta GPUs lample2019cross; while the training time is not reported in that paper, the statistics of BERT offer a reference: it was trained on 4 Cloud TPUs for 4 days devlin2018bert). In this paper, we propose Cross-Lingual BERT Transformation (CLBT), a simple and efficient off-line approach that learns a linear transformation from contextual word alignments. With CLBT, contextualized embeddings from pre-trained BERT models in different languages are projected into a shared semantic space. The learned transformation is then applied on top of the BERT encodings of each sentence, which are in turn fed as input to a parser.
Our approach utilizes the semantic equivalence in word alignments, and is thus expected to be word sense-preserving. Figure 1 illustrates our approach, where contextualized embeddings of the Spanish word “canal” are transformed into the corresponding semantic space in English.
Experiments on the Universal Dependencies (UD) treebanks (v2.2) nivre2018ud show that our approach substantially outperforms previous models that use static cross-lingual embeddings, with an absolute gain of 2.91% in averaged LAS. We further compare with XLM lample2019cross, a recently proposed large-scale cross-lingual language model. Results demonstrate that our approach requires significantly less training data, fewer computing resources and less training time than XLM, yet achieves highly competitive results.
2 Related Work
Static cross-lingual embedding learning methods can be roughly categorized into on-line and off-line methods. Typically, on-line approaches integrate monolingual and cross-lingual objectives to learn cross-lingual word embeddings jointly C12-1089; P14-2037; guo2016representation, while off-line approaches take pre-trained monolingual word embeddings of different languages as input and retrofit them into a shared semantic space xing2015normalized; lample2018word; chen2018unsupervised.
Several approaches have been proposed recently to connect the rich expressiveness of contextualized word embeddings with cross-lingual transfer. mulcaire2019polyglot based their model on ELMo peters2018deep and proposed a polyglot contextual representation model that captures character-level information from multilingual data. lample2019cross adapted the objectives of BERT devlin2018bert to incorporate cross-lingual supervision from parallel data and learn cross-lingual language models (XLMs), which have obtained state-of-the-art results on several cross-lingual tasks. Similar to our approach, schuster2019cross also aligned pre-trained contextualized word embeddings through linear transformation in an off-line fashion. They used the averaged contextualized embeddings as an anchor for each word type, and learned a transformation in the anchor space. Our approach, however, learns this transformation directly in the contextual space, and hence is explicitly designed to be word sense-preserving.
3 Cross-Lingual BERT Transformation
This section describes our proposed approach, namely CLBT, to transform pre-trained monolingual contextualized embeddings to a shared semantic space.
3.1 Contextual Word Alignment
Traditional methods of learning static cross-lingual word embeddings have relied on various sources of supervision, such as bilingual dictionaries lazaridou2015hubness; smith2017offline, parallel corpora guo-EtAl:2015:ACL-IJCNLP2 or on-line Google Translate mikolov2013exploiting; xing2015normalized. To learn contextualized cross-lingual word embeddings, however, we require supervision at the word-token level (i.e., in context) rather than at the type level (i.e., dictionaries). We therefore assume a parallel corpus as our supervision, analogous to on-line methods such as XLM lample2019cross.
In our approach, unsupervised bidirectional word alignment is first applied to the parallel corpus to obtain a set of aligned word pairs with their contexts, or contextual word pairs for short. For one-to-many and many-to-one alignments, we keep only the left-most aligned word (preliminary experiments indicate that this works better than keeping all the alignments), such that all the resulting word pairs are one-to-one. In practice, since WordPiece embeddings wu2016google are used in BERT, all the parallel sentences are tokenized with BERT’s wordpiece vocabulary before being aligned.
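This filtering step can be sketched as follows; `one_to_one_pairs` is a hypothetical helper (not from the paper) that assumes fast_align-style `i-j` links, where `i` and `j` are source- and target-side token indices:

```python
def one_to_one_pairs(alignment_line):
    """Parse a fast_align-style line of 'src-tgt' index links and keep only
    the left-most link per source and per target index, so that the
    resulting contextual word pairs are strictly one-to-one."""
    pairs, seen_src, seen_tgt = [], set(), set()
    for link in alignment_line.split():
        s, t = map(int, link.split("-"))
        if s not in seen_src and t not in seen_tgt:
            pairs.append((s, t))
            seen_src.add(s)
            seen_tgt.add(t)
    return pairs
```

Because alignment links are emitted left to right, keeping the first unseen link per index amounts to keeping the left-most aligned word on each side.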
3.2 Off-Line Transformation
Given a set of contextual word pairs, their BERT representations $\{(\mathbf{x}^{t}_i, \mathbf{x}^{s}_i)\}_{i=1}^{n}$ can be easily obtained from pre-trained BERT models, where $\mathbf{x}^{t}_i$ is the contextualized embedding of token $i$ in the target language, and $\mathbf{x}^{s}_i$ is the representation of its alignment in the source language. In this work, we use the English BERT (enBERT) for the source language (English) and the multilingual BERT (mBERT), which is trained on 102 languages without cross-lingual supervision, for all the target languages.
In our experiments, a parser is trained on source-language data and applied directly to all the target languages. Therefore, we project the embeddings of the target languages into the space of the source language, rather than the opposite direction. Specifically, we aim to find a linear transformation $W$ such that $W\mathbf{x}^{t}_i$ approximates $\mathbf{x}^{s}_i$ (we also investigated non-linear transformations in our experiments, but did not observe any improvement). This can be achieved by solving the following optimization problem:

$$W^{*} = \operatorname*{arg\,min}_{W} \sum_{i=1}^{n} \left\| W\mathbf{x}^{t}_i - \mathbf{x}^{s}_i \right\|_2^2,$$

where $W \in \mathbb{R}^{d \times d}$ is a parameter matrix.
Previous work on static cross-lingual embeddings has shown that an orthogonal $W$ (i.e., $W^{\top}W = I$) is helpful for the word translation task xing2015normalized. In this case, an analytical solution can be found through singular value decomposition (SVD):

$$W^{*} = UV^{\top}, \quad \text{where} \quad U\Sigma V^{\top} = \mathrm{SVD}\left(X^{s\top} X^{t}\right).$$

Here $X^{t} \in \mathbb{R}^{n \times d}$ and $X^{s} \in \mathbb{R}^{n \times d}$ are the contextualized embedding matrices whose $i$-th rows are $\mathbf{x}^{t}_i$ and $\mathbf{x}^{s}_i$ respectively, $n$ is the number of aligned contextual word pairs, and $d$ is the dimension of the monolingual contextualized embeddings. Each pair of rows corresponds to one aligned contextual word pair.
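Assuming the standard orthogonal Procrustes solution, this closed form can be sketched with NumPy (the function name is ours; rows of `X_tgt` and `X_src` are aligned contextual embeddings):

```python
import numpy as np

def procrustes_transform(X_tgt, X_src):
    """Closed-form orthogonal W such that X_tgt @ W.T best approximates X_src.

    X_tgt, X_src: (n, d) matrices whose i-th rows form an aligned
    contextual word pair. Returns W = U @ V.T, where
    U, Sigma, V.T = SVD(X_src.T @ X_tgt) -- the orthogonal
    Procrustes solution to min ||X_tgt @ W.T - X_src||_F.
    """
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt
```

Note that `X_src.T @ X_tgt` is only $d \times d$ ($d = 768$ for base BERT), so the SVD itself is cheap on a CPU; the cost lies in accumulating the $n \times d$ embedding matrices.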
Although this can be computed on CPUs within several minutes, the memory requirement grows with the amount of training data. Therefore, we also present an approximate solution, in which $W$ is optimized with gradient descent (GD) and is not constrained to be orthogonal (we found that the orthogonality constraint does not help for GD). This GD-based approach can be trained on a single GPU and typically converges in several hours.
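A minimal sketch of the unconstrained GD alternative, using plain gradient descent on the squared error (the paper's actual optimizer is Adam, and the hyper-parameters below are illustrative):

```python
import numpy as np

def fit_transform_gd(X_tgt, X_src, lr=0.001, epochs=2000):
    """Learn an unconstrained (d, d) matrix W by gradient descent,
    minimizing the mean squared error ||X_tgt @ W.T - X_src||_F^2 / n.

    A sketch: no orthogonality constraint, initialized at the identity.
    """
    n, d = X_tgt.shape
    W = np.eye(d)
    for _ in range(epochs):
        diff = X_tgt @ W.T - X_src        # (n, d) residuals
        grad = 2.0 * diff.T @ X_tgt / n   # gradient of the loss w.r.t. W
        W -= lr * grad
    return W
```

Unlike the SVD solution, this version never materializes $X^{s\top} X^{t}$ all at once and is straightforward to mini-batch on a GPU.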
4 Experiments

To validate the effectiveness of our approach in cross-lingual dependency parsing, we first obtain the CLBT embeddings with the proposed approach, and then use them as input to a modern graph-based neural parser (described in the next section), in place of the pre-trained static embeddings. Note that BERT produces embeddings at the wordpiece level, so we only use the left-most wordpiece embedding of each word (we tried alternative strategies, such as averaging and using the middle or right-most wordpiece, but observed no significant difference).
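The left-most-wordpiece selection can be sketched as follows, assuming the WordPiece convention that continuation pieces are prefixed with `##` (the helper name is ours, for illustration):

```python
def word_vectors_from_pieces(pieces, piece_vecs):
    """Keep the vector of the left-most wordpiece of each word.

    pieces: wordpiece strings for one sentence; piece_vecs: the
    corresponding contextualized vectors, in the same order.
    """
    return [vec for piece, vec in zip(pieces, piece_vecs)
            if not piece.startswith("##")]
```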
4.1 Data and Settings
In our experiments, the contextual word pairs are obtained from the Europarl corpora koehn2005epc using the fast_align toolkit dyer2010cdec. Only 10,000 sentence pairs are used for each target language. For the parsing datasets, we use the Universal Dependencies (UD) treebanks (v2.2) nivre2018ud (hdl.handle.net/11234/1-2837), following the settings of the previous state-of-the-art system ahmad2018near. From the 31 languages they analyzed, we select the 18 whose Europarl data is publicly available (for languages with multiple treebanks, we use the same combinations as they did). Statistics of the selected languages and treebanks can be found in the Appendix. We employ the Biaffine Graph-based Parser of dozat2017deep and adopt their hyper-parameters for all of our models.
In all the experiments, English is used as the source language, and the other 17 languages as targets. The model is trained on the English treebank and applied directly to the target languages with the transformed contextualized embeddings. We train our models using the Adam optimizer kingma2015adam, and most of them converge within a few thousand epochs in several hours. More implementation details are reported in the Appendix.
4.2 Baseline Systems
We compare our method with the following three baseline models:
mBERT (contextualized). Embeddings generated by the mBERT model are directly used in the training and testing procedures.
FT-SVD (ahmad2018near, off-line, static). SVD-based transformation smith2017offline is applied to 300-dimensional FastText embeddings bojanowski2017enriching to obtain static cross-lingual embeddings; this represents the previous state-of-the-art. We report results from their paper for the RNNGraph model, which uses the same architecture as ours.
XLM (lample2019cross, on-line, contextualized). A strong method which learns contextualized cross-lingual embeddings from scratch with cross-lingual data.
For the XLM model, we employ the released XNLI-15 model (github.com/facebookresearch/XLM) to generate embeddings and apply them to cross-lingual dependency parsing in the same way as with our own model. We compare on the 4 languages studied in both works.
4.3 Comparison with Off-Line Methods
[Table 1: Results on the UD test sets. Columns: FT-SVD, mBERT, CLBT (SVD), CLBT (GD).]
Results on the test sets are shown in Table 1 (UAS results are listed in the Appendix due to space limits). Note that since we have no access to the parsed files of the FT-SVD model, we only report statistical significance tests between our methods and the mBERT model, which is highly comparable to the FT-SVD model on average. Languages are grouped by language family. Overall, our approach with either SVD or GD outperforms both FT-SVD and mBERT by a substantial margin (+2.91% in averaged LAS), with GD slightly better than SVD in most of the languages. When combined with FT-SVD, performance can be further improved, by 0.33% in LAS for the GD method and 0.51% for SVD (see the Appendix for more details). Interestingly, the mBERT model, which is trained without any cross-lingual supervision but uses a shared multilingual wordpiece vocabulary, works surprisingly well for some languages, especially those linguistically close to English. Similar observations have been reported in other works pires2019multilingual; wu2019beto.
[Table 2: Comparison with XLM on the 4 overlapping languages. Columns: Lan., XLM, CLBT (SVD), CLBT (GD).]
4.4 Comparison with On-Line Methods
Comparison of our approach with the cross-lingual language model pre-training (XLM) method lample2019cross on the 4 overlapping languages is shown in Table 2. CLBT outperforms XLM in 3 out of the 4 languages, but falls behind in German (de). The amount of training data used by each method is also shown at the bottom: the number of parallel sentences used by XLM ranges from 0.2 million (10 million tokens) for Bulgarian to 13.1 million (682 million tokens) for French. In comparison, only 10,000 parallel sentences (0.4 million tokens) are used for each language in CLBT, demonstrating the data efficiency of our approach. Moreover, given its efficiency in both data and training, CLBT can be readily scaled to new language pairs within hours.
4.5.1 Transformation of Cross-lingual BERT Embedding
In order to investigate the properties of contextualized representations before and after the linear transformation, we employ the SENSEVAL2 data edmonds2001senseval2 (www.hipposmond.com/senseval2/), where words from different languages are tagged with their word senses in different contexts.
We take the contextualized representations of the English word nature and its Spanish translation naturaleza in different contexts from the pre-trained English and multilingual BERT models respectively, and visualize their distributions in Figure 2, where clear clustering of word senses can be observed. Specifically, nature-1 and naturaleza-1 denote the physical world, whereas nature-2 and naturaleza-2 denote inherent features. We then apply our GD-based method to the embeddings of naturaleza and depict the resulting cross-lingual embeddings in Figure 2. The distance between the English and Spanish embeddings is effectively reduced after the transformation, and embeddings of Spanish words move closer to English words with similar meanings, which indicates the effectiveness of our approach.
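A quantitative counterpart to this visualization is to compare the average cosine similarity of aligned pairs before and after the transformation; a small sketch (the helper is ours, not from the paper):

```python
import numpy as np

def mean_cosine_similarity(A, B):
    """Average cosine similarity between corresponding rows of the
    (n, d) embedding matrices A and B."""
    A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
    B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.mean(np.sum(A_n * B_n, axis=1)))
```

For a learned transformation `W` and held-out aligned pairs `X_es`, `X_en` (hypothetical names), one would expect `mean_cosine_similarity(X_es @ W.T, X_en)` to exceed `mean_cosine_similarity(X_es, X_en)`.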
4.5.2 Effect of Training Data Size
We select several languages from each language family and investigate the effect of the amount of training data on zero-shot cross-lingual dependency parsing performance. Specifically, we take the SVD-based approach, since it is faster than the GD-based one, and train transformation models with varying numbers of parallel sentences from the Europarl dataset for each of the 13 selected languages.
As shown in Figure 3, for most of the languages the best performance is achieved with only 5,000 parallel sentences. It is also worth noting that for most Germanic (e.g., German, Danish, Swedish and Dutch) and Romance (e.g., French, Italian, Spanish and Romanian) languages, which are typologically closer to English, a rather small training set of merely 100 sentences suffices to achieve comparable results.
We propose the Cross-Lingual BERT Transformation (CLBT) approach for learning contextualized cross-lingual embeddings, which substantially outperforms the previous state-of-the-art in zero-shot cross-lingual dependency parsing. By exploiting publicly available pre-trained BERT models, our approach provides a fast and data-efficient solution. Compared to XLM, our method requires far less parallel data and training time, yet achieves comparable performance.
For future work, we are interested in unsupervised cross-lingual alignment, inspired by prior success on static embeddings lample2018word; alvarez2018gromov, which demands a deeper understanding of the geometry of the multilingual contextualized embedding space.
We thank the anonymous reviewers for their valuable suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.
Appendix A Appendices for “Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing”
A.1 Statistics of UD (v2.2) Treebanks
The statistics of the Universal Dependency treebanks we used are summarized in Table 3.
| Language | Language Family | Treebank | Test Sentences |
| --- | --- | --- | --- |
| Dutch (nl) | IE.Germanic | Alpino, LassySmall | 1,472 |
| Spanish (es) | IE.Romance | GSD, AnCora | 2,147 |
| Portuguese (pt) | IE.Romance | Bosque, GSD | 1,681 |
| Polish (pl) | IE.Slavic | LFG, SZ | 2,827 |
| Slovenian (sl) | IE.Slavic | SSJ, SST | 1,898 |
| Czech (cs) | IE.Slavic | PDT, CAC, CLTT, FicTree | 12,203 |
A.2 Implementation Details
For the graph-based Biaffine parser, we exclude the learned word embeddings in our re-implementation to focus on the effect of pre-trained embeddings. In addition, universal POS tags are used throughout our experiments.
The PyTorch implementations of the base BERT models for English and for multiple languages (github.com/huggingface/pytorch-pretrained-BERT) are used to generate the 768-dimensional contextualized embeddings for English and the target languages respectively. In the GD-based method, we use the Adam optimizer with a learning rate of 0.001.
A.3 Full Results on UD Treebanks
The LAS of our models (including the combination of cross-lingual FastText embeddings and our CLBT ones, where the two are concatenated as input to the parser) and of the baselines are shown in Table 4, and UAS in Table 5.
[Table 4: LAS on the UD test sets. Columns: FT-SVD, mBERT, CLBT (SVD), CLBT (SVD)+FT, CLBT (GD), CLBT (GD)+FT.]
[Table 5: UAS on the UD test sets. Columns: FT-SVD, mBERT, CLBT (SVD), CLBT (SVD)+FT, CLBT (GD), CLBT (GD)+FT.]