Near or Far, Wide Range Zero-Shot Cross-Lingual Dependency Parsing

11/01/2018 · by Wasi Uddin Ahmad, et al. · USC Information Sciences Institute · Carnegie Mellon University

Cross-lingual transfer is the major means of leveraging knowledge from high-resource languages to help low-resource languages. In this paper, we investigate cross-lingual transfer across a broad spectrum of language distances. We posit that Recurrent Neural Network (RNN)-based encoders, since they explicitly incorporate surface word order, are brittle when transferring across distant languages, while self-attentive models are more flexible in modeling word order information and would thus be more robust in the cross-lingual transfer setting. We test our hypothesis by training dependency parsers on an English corpus only and evaluating them on 31 other languages. With detailed analysis, we find interesting patterns showing that RNN-based architectures transfer well to languages that are close to English, while self-attentive models have better cross-lingual transferability across a wide range of languages.

1 Introduction

Cross-lingual learning, which explores knowledge transfer between different languages, has been studied for various NLP tasks Guo et al. (2015); Zhou et al. (2016); Zoph et al. (2016); Kim et al. (2017). It is a challenging problem with tremendous practical value. On one hand, many NLP problems have achieved remarkable accuracy in resource-rich languages thanks to the availability of large-scale annotated data, while performance on low-resource languages still lags behind in the absence of abundant annotations. Cross-lingual techniques are an excellent means of reducing the need for annotated data by transferring knowledge from resource-rich languages to low-resource ones. On the other hand, cross-lingual learning is challenging because different languages diverge significantly at the levels of morphology, syntax, and semantics, and it is difficult to learn language-invariant features that transfer robustly to distant languages.

Prior work on cross-lingual transfer mainly focused on the word level, inducing multilingual invariant word embeddings Xiao and Guo (2014); Guo et al. (2016); Sil et al. (2017). However, words are not independent in sentences; their interactions and combinations form larger linguistic units, known as context. Understanding context is vital for most problems in NLP, as context influences a word's meaning. A successful architecture for NLP usually contains mechanisms to contextualize words and compose higher-level features that relate to other parts of the sentence. We refer to these mechanisms as contextual encoders. In this paper, we explore transfer at the contextual-encoder level, where we consider how to induce combinatorial features that are language-invariant.

With the development of representation learning and neural networks, Recurrent Neural Networks (RNNs) have become a prevalent encoder for many NLP tasks and have demonstrated compelling performance McCann et al. (2017); Peters et al. (2018). However, we hypothesize that, in the cross-lingual setting, the sequential nature of RNNs introduces the risk of encoding language-specific word-order information and thus overfitting to a specific word order. To test our hypothesis, we investigate the flexibility of contextual encoders for cross-lingual transfer.

The self-attention-based "Transformer" architecture has recently been proposed and shown to be effective in various NLP tasks Vaswani et al. (2017); Liu et al. (2018); Kitaev and Klein (2018). While keeping strong modeling capability, it is more flexible than RNNs in capturing contextual information, since it does not explicitly model ordering information, although positional information is still indispensable for the Transformer to succeed Vaswani et al. (2017). To this end, we explore flexible ways to utilize relative positional information in the Transformer encoder.

In this work, we quantify language distances in terms of word order typology and systematically study the transferability of two leading neural architectures as the contextual encoder. We evaluate the cross-lingual transferability of the contextual encoders on dependency parsing, primarily because of the availability of unified annotations across a broad spectrum of languages Nivre et al. (2018). Besides, word order typology has been found to influence dependency parsing Östling (2015). Moreover, parsing is a low-level NLP task Hashimoto et al. (2016) that can benefit many downstream applications McClosky et al. (2011); Gamallo et al. (2012); Jie et al. (2017).

We conduct the evaluations on 31 languages across a broad spectrum of language families, as shown in Table 1. Our empirical results show that attention-based encoding and decoding perform better than their RNN-based counterparts, especially when the source and target languages are distant.

2 Quantifying Language Distances

Language Families Languages
Afro-Asiatic Arabic (ar), Hebrew (he)
Austronesian Indonesian (id)
IE.Baltic Latvian (lv)
IE.Germanic Danish (da), Dutch (nl), English (en), German (de), Norwegian (no), Swedish (sv)
IE.Indic Hindi (hi)
IE.Latin Latin (la)
IE.Romance Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es)
IE.Slavic Bulgarian (bg), Croatian (hr), Czech (cs), Polish (pl), Russian (ru), Slovak (sk), Slovenian (sl), Ukrainian (uk)
Japanese Japanese (ja)
Korean Korean (ko)
Sino-Tibetan Chinese (zh)
Uralic Estonian (et), Finnish (fi)
Table 1: The selected languages grouped by language families. “IE” is the abbreviation of Indo-European.
Figure 1: Hierarchical clustering (with the Nearest Point Algorithm) dendrogram of the languages by their word-ordering vectors.
Figure 2: t-SNE visualization Maaten and Hinton (2008) of the languages by their word-ordering vectors.

Word order can be a significant distinctive feature to differentiate languages Dryer (2007). Since word order features can especially influence parsing Östling (2015), we wonder whether we can distinguish languages or design a “language distance” measure based on word order. We investigate this empirically by collecting the dependency direction statistics of various dependency types from the datasets.

For the datasets, we use the Universal Dependencies (UD) Treebanks (v2.2) Nivre et al. (2018). We select 31 languages for evaluation and analysis, with the general selection criterion that the overall number of tokens in a language's treebanks is over 100K. We group these languages by their language families in Table 1. Detailed statistics of the selected languages and treebanks can be found in Appendix A.

We look at finer-grained dependency types than the 37 universal dependency labels in UD v2 (http://universaldependencies.org/u/dep/index.html), by augmenting the dependency labels with the universal part-of-speech (POS) tags of the head and modifier nodes. We use triplets of "(ModifierPOS, HeadPOS, DependencyLabel)" as the augmented types. With this, we can investigate language differences in a fine-grained way. For example, "(PRON, VERB, obj)" denotes the dependency type between verbs and their pronoun objects, for which specific languages (like French) may use a different ordering compared to the placement of plain noun objects.

We only select the most common types, since statistics on rare types can be unstable; 52 types are selected in total (please refer to Appendix E for details). For each dependency type, we collect the relative frequencies of the two possible dependency directions (left: modifier before head; right: modifier after head) in each language. (We also tried finer-grained dependency distances, but found the simple binary direction to be good enough.) By concatenating the directional relative frequencies of all concerned types, we obtain a word-ordering vector for each language. In Figures 1 and 2, we show the hierarchical clustering and t-SNE visualization Maaten and Hinton (2008) of the 31 languages based on their word-ordering vectors. Using word-ordering information alone, we can almost recover the grouping of the original language families. Notably, the outliers, German (de) and Dutch (nl), are the languages with verb-second (V2) word order, which differs from the other, more English-like Germanic languages.

This indicates that word ordering is a major aspect of how languages differ and we can extract useful feature vectors from ordering information. We use these word-ordering vectors to define word-ordering distance measurements in our analysis (Section 4.3). Moreover, we take word ordering as a major factor in our model designs.
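As an illustration of this construction, the sketch below (ours, not the authors' released code) reads CoNLL-U files, computes the left-direction relative frequency per augmented type, and clusters languages with single linkage ("Nearest Point") over Manhattan distances. The treebank paths and the frequency cutoff used to pick common types are placeholders.

```python
from collections import defaultdict
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage

def read_conllu(path):
    """Yield sentences as lists of (index, upos, head, deprel)."""
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                if sent:
                    yield sent
                sent = []
            elif not line.startswith("#"):
                cols = line.split("\t")
                if cols[0].isdigit():          # skip multi-word tokens / empty nodes
                    sent.append((int(cols[0]), cols[3], int(cols[6]),
                                 cols[7].split(":")[0]))
    if sent:
        yield sent

def direction_stats(path):
    """Count left-direction and total occurrences per (ModifierPOS, HeadPOS, Label)."""
    left, total = defaultdict(int), defaultdict(int)
    for sent in read_conllu(path):
        upos = {i: p for i, p, _, _ in sent}
        for i, pos, head, rel in sent:
            if head == 0:
                continue
            t = (pos, upos[head], rel)
            total[t] += 1
            if i < head:                       # modifier before head = left direction
                left[t] += 1
    return left, total

# placeholder paths to UD v2.2 training files
treebanks = {"en": "UD_English-EWT/en_ewt-ud-train.conllu",
             "de": "UD_German-GSD/de_gsd-ud-train.conllu",
             "ja": "UD_Japanese-GSD/ja_gsd-ud-train.conllu"}
stats = {lang: direction_stats(path) for lang, path in treebanks.items()}

# stand-in for the paper's 52 selected types: keep types frequent in every language
common = sorted(set.intersection(*[{t for t, c in tot.items() if c >= 100}
                                   for _, tot in stats.values()]))
vectors = np.array([[stats[l][0][t] / stats[l][1][t] for t in common]
                    for l in treebanks])

dist = pdist(vectors, metric="cityblock")      # Manhattan word-ordering distance
print(squareform(dist))                        # pairwise language distances
print(linkage(dist, method="single"))          # "Nearest Point" hierarchical clustering
```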

3 Model

Our basic setting is to transfer syntactic knowledge from one source language to target languages without any explicit target syntactic annotations. Therefore, when designing a model, we want it to capture less source-language-specific information, especially concerning word order.

With this in mind, we briefly describe the three major components for our cross-lingual dependency parser: (1) input lexicon representations, (2) contextual encoders, and (3) structured decoders.

3.1 Input Representations

The basis of cross-lingual transfer parsing is representing the inputs of different languages in a shared space, so that models trained only on the source language can transfer. Firstly, Universal POS tags in UD already provide valuable shared information about the inputs. Moreover, we can also obtain shared lexicalized information with multilingual embeddings: recent work on cross-lingual embeddings Smith et al. (2017); Conneau et al. (2018) shows that word embeddings of different languages can indeed be mapped into a shared space. Therefore, we take these shared lexical representations from multilingual embeddings and concatenate them with Universal POS embeddings, forming our final input vectors.
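A minimal sketch of this input layer, assuming PyTorch; the POS embedding size, the toy vocabulary, and the handling of the UPOS inventory are illustrative rather than the paper's exact configuration. The key points it shows are that the multilingual word embeddings are pre-aligned and frozen, while the POS embeddings are trained, and the two are concatenated.

```python
import torch
import torch.nn as nn

UPOS = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
        "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X"]

class CrossLingualInput(nn.Module):
    """Concatenate frozen, pre-aligned word embeddings with UPOS embeddings."""
    def __init__(self, aligned_vectors: torch.Tensor, pos_dim: int = 50):
        super().__init__()
        # aligned_vectors: (vocab, 300) multilingual embeddings already mapped
        # into the shared (English) space; frozen so the alignment is preserved
        self.word_emb = nn.Embedding.from_pretrained(aligned_vectors, freeze=True)
        self.pos_emb = nn.Embedding(len(UPOS), pos_dim)

    def forward(self, word_ids, pos_ids):
        return torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)

# toy usage: batch of 2 sentences of length 5, random stand-in vectors
vecs = torch.randn(1000, 300)
layer = CrossLingualInput(vecs)
x = layer(torch.randint(0, 1000, (2, 5)), torch.randint(0, len(UPOS), (2, 5)))
print(x.shape)                                # torch.Size([2, 5, 350])
```

At test time, only the word-embedding table is swapped for the target language's aligned vectors; everything else stays fixed.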

3.2 Contextual Encoders

In a sentence, words are not isolated; they interact with each other in complex ways. In our preliminary experiments, we tried models without any contextual encoder, and the results for all target languages turned out to be much worse. This indicates the importance of encoders for capturing shared contextual information.

Considering the sequential nature of language, an RNN is a natural choice for encoding. However, modeling words one by one in sequence inevitably encodes word orders, some of which may be specific to the source language. To alleviate this problem, we propose to adopt the purely self-attention-based Transformer encoder Vaswani et al. (2017) for cross-lingual parsing. It is less sensitive to word order but not necessarily less capable of capturing contextual information, which makes it suitable for our setting. Furthermore, we adopt a modified version of relative position representations Shaw et al. (2018) to further reduce the positional and ordering information the encoder captures. In the following, we briefly describe the two encoder prototypes that we study.

RNN Encoder

Following previous works Kiperwasser and Goldberg (2016); Dozat and Manning (2017); Ma et al. (2018), we employ multiple layers of bidirectional LSTMs Hochreiter and Schmidhuber (1997) on top of the input vectors to obtain contextual representations.
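A sketch of this encoder under the hyper-parameters listed in Appendix B (3 layers, hidden size 300); the input dimension follows the input layer above, and the dropout placement is a simplification on our part.

```python
import torch
import torch.nn as nn

class RNNEncoder(nn.Module):
    """Stacked bidirectional LSTM over the concatenated input vectors."""
    def __init__(self, input_dim=350, hidden_dim=300, num_layers=3, dropout=0.33):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True, dropout=dropout)

    def forward(self, x):            # x: (batch, seq_len, input_dim)
        out, _ = self.lstm(x)        # out: (batch, seq_len, 2 * hidden_dim)
        return out

print(RNNEncoder()(torch.randn(2, 5, 350)).shape)   # torch.Size([2, 5, 600])
```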

Self-Attention Encoder

The original Transformer encoder takes absolute positional embeddings as input, which may still capture much positional and ordering information. To mitigate this, we utilize relative position representations instead. We make one further simple modification: the original relative position representations in Shaw et al. (2018) distinguish left and right contexts by adding signs to distances, whereas we use only the absolute values of the distances and discard the directional information. With this, the model knows which words are nearby but cannot tell on which side they are. We analyze these choices of position representations in Section 4.3.4, which shows that our strategy performs best.
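The sketch below shows the core idea in a single attention head (our simplification, not the released implementation): relative distances |i - j|, clipped at a maximum value, index learned key and value vectors that are added into the attention computation as in Shaw et al. (2018), but without a sign, so direction is invisible to the model. The maximum distance and the single-head, single-layer setup are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class UndirectedRelativeSelfAttention(nn.Module):
    """Single-head self-attention with unsigned relative position representations:
    the model sees how far apart two words are, but not in which direction."""
    def __init__(self, d_model: int, max_dist: int = 10):
        super().__init__()
        self.d = d_model
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # one key/value embedding per unsigned, clipped distance 0..max_dist
        self.rel_k = nn.Embedding(max_dist + 1, d_model)
        self.rel_v = nn.Embedding(max_dist + 1, d_model)
        self.max_dist = max_dist

    def forward(self, x):                                  # x: (batch, n, d_model)
        b, n, _ = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        idx = torch.arange(n, device=x.device)
        dist = (idx[None, :] - idx[:, None]).abs().clamp(max=self.max_dist)  # (n, n)
        rk, rv = self.rel_k(dist), self.rel_v(dist)        # (n, n, d_model)
        # content-content scores plus content-position scores
        scores = q @ k.transpose(-1, -2)                   # (b, n, n)
        scores = scores + torch.einsum("bid,ijd->bij", q, rk)
        attn = F.softmax(scores / math.sqrt(self.d), dim=-1)
        out = attn @ v + torch.einsum("bij,ijd->bid", attn, rv)
        return out

# toy check
layer = UndirectedRelativeSelfAttention(d_model=16)
print(layer(torch.randn(2, 7, 16)).shape)      # torch.Size([2, 7, 16])
```

Restoring the sign on `dist` (e.g., clamping to [-max_dist, max_dist] and shifting the embedding index) recovers the "Relative+dir" variant compared in Section 4.3.4.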

3.3 Structured Decoders

With the contextual representations from the encoder, the decoder predicts the output tree structure. Generally speaking, there have been two main classes of approaches for decoding McDonald and Nivre (2007): transition-based and graph-based. Intuitively, the first-order graph-based method can be less sensitive to word order, since it cares less about decoding directions. To verify this empirically, we investigate both with state-of-the-art monolingual models.

Graph-based Decoder

The graph-based method benefits from being able to search globally for the best structure under strong independence assumptions. Recently, with a deep biaffine attentional scorer, Dozat and Manning (2017) obtained state-of-the-art results with a simple first-order factorization Eisner (1996); McDonald et al. (2005). For our graph-based decoder, we directly adopt this deep-biaffine-scorer-based architecture.
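A sketch of the deep biaffine arc scorer is shown below; the arc MLP size follows Appendix B, while the ELU activation and the omission of the label scorer are simplifications on our part rather than a faithful reproduction of the parser.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Deep biaffine arc scorer in the spirit of Dozat and Manning (2017):
    separate head/dependent MLPs followed by a biaffine product, producing a
    score for every candidate (dependent, head) pair."""
    def __init__(self, enc_dim: int, arc_dim: int = 512):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ELU())
        self.dep_mlp = nn.Sequential(nn.Linear(enc_dim, arc_dim), nn.ELU())
        self.U = nn.Parameter(torch.empty(arc_dim, arc_dim))
        self.b = nn.Parameter(torch.zeros(arc_dim))          # head-side bias term
        nn.init.xavier_uniform_(self.U)

    def forward(self, enc):                  # enc: (batch, n, enc_dim)
        h = self.head_mlp(enc)               # head representations  (batch, n, arc_dim)
        d = self.dep_mlp(enc)                # dependent representations
        # scores[b, i, j] = score of token j being the head of token i
        scores = d @ self.U @ h.transpose(-1, -2) + (h @ self.b).unsqueeze(1)
        return scores                        # (batch, n, n)

scorer = BiaffineArcScorer(enc_dim=600)
print(scorer(torch.randn(2, 9, 600)).shape)  # torch.Size([2, 9, 9])
```

In the full parser of Dozat and Manning (2017), a second, smaller biaffine classifier scores the labels, and at test time a well-formed tree can be recovered from the arc scores with a maximum spanning tree algorithm.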

Stack-Pointer Decoder

Transition-based decoders build the parse tree incrementally with a series of transition actions. Recently, Ma et al. (2018) proposed a top-down transition-based parsing method that also obtained state-of-the-art results; we therefore select it as our transition-based decoder. Note that in the Stack-Pointer decoder, an RNN records the top-down decoding trajectory and can therefore also be sensitive to word order. We discuss this in the experiments.

4 Experiments and Analysis

Language TransformerGraph RNNGraph TransformerStack RNNStack Baseline Supervised
en 90.35/88.40 90.44/88.31 90.18/88.06 91.82/89.89 87.25/85.04 90.44/88.31
it 80.80/75.82 81.10/76.23 79.13/74.16 80.35/75.32 75.06/67.37 94.21/92.38
no 80.80/72.81 80.67/72.83 80.25/72.07 81.75/73.30 74.76/65.16 94.52/92.88
sv 80.98/73.17 81.23/73.49 80.56/72.77 82.57/74.25 71.84/63.52 89.79/86.60
fr 77.87/72.78 78.35/73.46 76.79/71.77 75.46/70.49 73.02/64.67 91.90/89.14
bg 79.40/68.21 78.05/66.68 78.16/66.95 78.83/67.57 73.08/61.23 93.74/89.61
pt 76.61/67.75 76.46/67.98 75.39/66.67 74.64/66.11 70.36/60.11 93.14/90.82
da 76.64/67.87 77.36/68.81 76.39/67.48 78.22/68.83 71.34/61.45 87.16/84.23
es 74.49/66.44 74.92/66.91 73.15/65.14 73.11/64.81 68.75/59.59 93.17/90.80
ca 73.83/65.13 74.24/65.57 72.39/63.72 72.03/63.02 68.23/58.15 93.98/91.64
pl 74.56/62.23 71.89/58.59 73.46/60.49 72.09/59.75 66.74/53.40 94.96/90.68
de 71.34/61.62 69.49/59.31 69.94/60.09 69.58/59.64 65.14/54.13 88.58/83.68
nl 68.55/60.26 67.88/60.11 67.88/59.46 69.55/61.55 63.31/53.79 90.59/87.52
sk 66.65/58.15 65.41/56.98 65.34/56.68 66.56/57.48 57.75/47.73 90.19/86.38
sl 68.21/56.54 66.27/54.57 66.55/54.58 67.76/55.68 60.86/48.06 86.79/82.76
cs 63.10/53.80 61.88/52.80 61.26/51.86 62.26/52.32 56.15/44.77 94.03/91.87
lv 70.78/49.30 71.43/49.59 69.04/47.80 70.56/48.53 62.33/41.42 83.67/78.13
ro 65.05/54.10 63.23/52.11 62.54/51.46 60.98/49.79 56.01/44.04 90.07/84.50
ru 60.63/51.63 59.99/50.81 59.36/50.25 60.87/51.96 55.03/45.09 94.11/92.56
fi 66.27/48.69 66.36/48.74 64.82/47.50 66.25/48.28 58.51/38.65 88.04/85.04
hr 61.91/52.86 60.09/50.67 60.58/51.07 60.80/51.12 52.92/42.19 89.66/83.81
uk 60.05/52.28 58.49/51.14 57.43/49.66 59.67/51.85 54.10/45.26 85.98/82.21
et 65.72/44.87 65.25/44.40 64.12/43.26 64.30/43.50 56.13/34.86 86.76/83.28
he 55.29/48.00 54.55/46.93 53.23/45.69 54.89/40.95 46.03/26.57 89.34/84.49
id 49.20/43.52 47.05/42.09 47.32/41.70 46.77/41.28 40.84/33.67 87.19/82.60
la 47.96/35.21 45.96/33.91 45.49/33.19 43.85/31.25 39.08/26.17 81.05/76.33
ar 38.12/28.04 32.97/25.48 32.56/23.70 32.85/24.99 32.69/22.68 86.17/81.83
hi 35.50/26.52 29.32/21.41 31.38/23.09 25.91/18.07 25.74/16.77 95.63/92.93
ko 34.48/16.40 33.66/15.40 32.75/15.04 33.11/14.25 31.39/12.70 85.05/80.76
zh* 42.48/25.10 41.53/24.32 40.56/23.32 40.92/23.45 40.03/20.97 73.62/67.67
ja* 28.18/20.91 18.41/11.99 20.72/13.19 15.16/9.32 15.39/08.41 89.06/78.74
Average 64.06/53.82 62.71/52.63 62.22/52.00 62.37/51.89 57.09/45.41 89.44/85.62
Table 2: Results (UAS%/LAS%) on the test sets. (Languages are generally sorted by average evaluation scores of all models, ‘*’ refers to results of delexicalized models.)

4.1 Setups

Settings

In our experiments, we took English as the source language and the other 30 languages as targets. In training, we only used the English training and development sets for parameter updating and model selection. In testing, we directly applied the parsing model to the target languages, with inputs from target-language embeddings that were projected into the same space as the source language. We started from the 300-dimensional pre-trained FastText embeddings (https://fasttext.cc/docs/en/crawl-vectors.html) Bojanowski et al. (2017) and projected them into the same space using the offline transformation method of Smith et al. (2017) (https://github.com/Babylonpartners/fastText_multilingual); in preliminary experiments, we also tried the projection method of Conneau et al. (2018) and found similar results. We froze the word embeddings throughout training and testing, since fine-tuning them might disturb the multilingual alignment.
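For concreteness, a sketch of this projection step follows. The paths, the vocabulary cut-off, and the assumption that the published per-language alignment matrices are plain-text files applied by right-multiplication follow our reading of the linked fastText_multilingual repository; they are illustrative rather than guaranteed details of the pipeline.

```python
import numpy as np

def load_vectors(path, max_words=200000):
    """Read a fastText .vec text file into (words, matrix)."""
    words, rows = [], []
    with open(path, encoding="utf-8", errors="ignore") as f:
        next(f)                               # header line: "<count> <dim>"
        for i, line in enumerate(f):
            if i >= max_words:
                break
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            rows.append(np.asarray(parts[1:], dtype=np.float32))
    return words, np.vstack(rows)

# placeholder paths; the repository ships one 300x300 alignment matrix per language
fr_words, fr_vecs = load_vectors("cc.fr.300.vec")
W = np.loadtxt("alignment_matrices/fr.txt")   # hypothetical path to the fr -> en map

fr_aligned = fr_vecs @ W                      # rotate French vectors into the shared space
# length-normalize, as is commonly done for these aligned vectors
fr_aligned /= np.linalg.norm(fr_aligned, axis=1, keepdims=True)
```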

For other hyper-parameters, we adopted values similar to those of the Biaffine Graph Parser Dozat and Manning (2017) and the Stack-Pointer Parser Ma et al. (2018); detailed settings can be found in Appendix B. Throughout our experiments, we only used the first-level UD labels, since fine-grained labels might be language-dependent. We adopted a sentence length threshold of 140. For evaluation, performance was measured by unlabeled attachment score (UAS) and labeled attachment score (LAS); punctuation and symbols (PUNCT, SYM) are excluded. We trained each of our cross-lingual models five times with different random seeds and report average scores.

Systems

As described before, we have a Transformer or RNN encoder and a graph-based or Stack-Pointer-based decoder. The combinations give us four different models, named in the format of "Encoder" plus "Decoder"; for example, "TransformerGraph" denotes the model with a Transformer encoder and a graph-based decoder. We also compare with a baseline shift-reduce transition-based parser, which gave previous state-of-the-art results for cross-lingual parsing Guo et al. (2015). Since they used older datasets, we re-trained the model on our datasets with their implementation (https://github.com/jiangfeng1124/acl15-clnndep); we also evaluated our models on the older dataset and compared with their results, as shown in Appendix C. Moreover, we list the supervised results (using our "RNNGraph" model) for each language as a reference upper bound for cross-lingual parsing.

4.2 Results

The results on the test sets are shown in Table 2. The languages are ordered by their average evaluation scores over all the models. In preliminary experiments, we found that our lexicalized models performed poorly on Chinese (zh) and Japanese (ja); we suspect this is partly because their embeddings are not well aligned to the English ones. Therefore, for these two languages we use delexicalized models, where only POS tags are used as inputs; their results are listed in the rows marked with "*". (We found delexicalized models to be better only for Chinese and Japanese; for the other languages, they performed worse by about 2 to 5 points. We also tried models without POS tags, but they were worse by about 10 points, which indicates the importance of universal POS tags in cross-lingual parsing.)

Overall, the "TransformerGraph" model performs best on over half of the languages and beats the runner-up "RNNGraph" by around 1.3 UAS and 1.2 LAS on average. Compared with "RNNStack" and "TransformerStack", the average difference is larger than 1.5 points. This shows that models capturing less word-order information can generally be better at cross-lingual parsing. It is a little surprising that "TransformerStack" performs the worst, indicating that the RNN-based decoder in the Stack-Pointer parser might still learn too much source-language-specific information and does not fit well with the self-attention encoder. Our improvements over the baseline show the importance of the contextual encoder. Compared with the supervised models, the cross-lingual results are still lower by a large gap, indicating room for improvement.

Taking a closer look, we find an interesting pattern in the results: models with RNN encoders perform better on the languages with higher evaluation scores (the upper rows of the table), while for languages that are "distant" from English, "TransformerGraph" performs much better. This pattern matches our motivation: the degree to which a model captures word-ordering information impacts cross-lingual performance.

4.3 Analysis

We further analyze how different modeling choices influence cross-lingual performance. Since we have not touched the training sets of languages other than English, we evaluate and analyze the results on the original training sets in this subsection, which is more robust (more data). The results on the original training sets are shown in Appendix D and are similar to those on the test sets. For English, we use the results on the development set, since its training set was used for learning. Also, because of possible problems with the word embeddings, we use the results of the delexicalized models for Chinese and Japanese.

4.3.1 The Overall Pattern

Recall our motivation: we hope that relaxing the model's ability to capture word-ordering information makes it better at cross-lingual transfer parsing, where target languages can have word orders that differ from the source language. We can now analyze this point with the word-ordering distance.

Figure 3: Evaluation score differences for encoding and decoding modules. Languages are sorted by their word-ordering distances to English from left to right.

Using the word-ordering vectors described in Section 2, we can measure the distances between languages. Since every entry in the word-ordering vector represents the relative frequency of a binary direction, we use the Manhattan distance and refer to this measure as the word-ordering distance. With it, we can further analyze the relation between word ordering and cross-lingual transferability. For each target language, we consider two distances to English: the word-ordering distance, and the performance distance, which is the gap in evaluation scores (unless otherwise noted, we simply average UAS and LAS for evaluation scores) between the target language and English. The performance distance represents the general transferability from English to that language. We calculate the Pearson correlation between these two distances over all the languages considered, and it turns out to be quite high: around 0.9 using the evaluations of any of our four cross-lingual transfer models. This again suggests that word order is indeed an essential factor in cross-lingual transferability.
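The computation is simple enough to state as code. In the sketch below, the word-ordering vectors and evaluation scores are dummy placeholders (the real vectors have 52 entries and the scores come from Tables 2 and 8); only the distance and correlation calculations reflect the analysis above.

```python
import numpy as np
from scipy.stats import pearsonr

# dummy word-ordering vectors (left-direction relative frequencies per type)
order_vec = {"en": np.array([0.99, 0.05, 0.92, 0.60]),
             "fr": np.array([0.98, 0.10, 0.30, 0.55]),
             "de": np.array([0.95, 0.20, 0.85, 0.40]),
             "hi": np.array([0.10, 0.90, 0.95, 0.20]),
             "ja": np.array([0.01, 0.95, 0.97, 0.05])}
# dummy evaluation scores (average of UAS and LAS) for one transfer model
score = {"en": 89.4, "fr": 75.3, "de": 65.5, "hi": 30.9, "ja": 24.5}

targets = [lang for lang in order_vec if lang != "en"]
# Manhattan distance to English in word-ordering space
order_dist = [np.abs(order_vec[lang] - order_vec["en"]).sum() for lang in targets]
# performance distance: drop in evaluation score relative to English
perf_dist = [score["en"] - score[lang] for lang in targets]

r, p = pearsonr(order_dist, perf_dist)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```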

(a) Adposition & Noun (ADP, NOUN, case)
(b) Adjective & Noun (ADJ, NOUN, amod)
(c) Auxiliary & Verb (AUX, VERB, aux)
(d) Object & Verb (NOUN, VERB, obj)
Figure 8: Comparisons on specific dependency types. The blue and orange curves and the left y-axis represent the differences in evaluation scores; the brown curve and the right y-axis represent the relative frequency of the left direction (modifier before head) for this type. The languages (x-axis) are sorted by this relative frequency. We use triplets of "(ModifierPOS, HeadPOS, DependencyLabel)" to represent the augmented dependency types.

Furthermore, we analyze the encoding and decoding modules separately, since we have two architecture choices for each. When examining one module, we take the maximum evaluation score over the architectures of the other module. For example, when comparing RNN and Transformer encoders, we take the best evaluation scores of "RNNGraph" and "RNNStack" for the RNN and the best of "TransformerGraph" and "TransformerStack" for the Transformer. Figure 3 shows the score differences of the encoding and decoding architectures against the languages, which are sorted by their word-ordering distances to English. For both the encoding and decoding modules, we observe a similar overall pattern: the architectures that are less sensitive to word ordering (the Transformer for encoding and the graph-based method for decoding) generally perform better than their alternatives on the languages that are farther from the source language English. On the other hand, for some languages that are closer to English, the models with RNNs (the RNN encoder and the Stack-Pointer decoder) perform better, possibly because they benefit from capturing similar word-ordering information.

4.3.2 On Dependency Types

Moreover, we compare the results on specific dependency types for concrete examples. (We also provide some detailed comparisons on Czech in Appendix F as an example of a specific language.) For a given type, we sort the languages by the relative frequency of the left direction (modifier before head) and plot the evaluation differences for the encoding and decoding modules. Figure 8 shows four typical example types: Adposition and Noun, Adjective and Noun, Auxiliary and Verb, and Object and Verb. In Figure 8(a), we examine the "case" dependencies between adpositions and nouns. The pattern is similar to the overall one: for the languages that mainly use prepositions, as English does, the models perform similarly, while for the languages that use postpositions (Japanese, Hindi, Finnish, Estonian), the models that capture less word-ordering information get better results. The patterns for adjective modifiers (Figure 8(b)) and auxiliaries (Figure 8(c)) are similar.

On the dependencies between object nouns and verbs, although the models that are more flexible about word order generally perform better, the pattern diverges from what we expect. There are several possible explanations. Firstly, tokens that are noun objects of verbs account for only about 3.1% of all tokens on average. Also, if we consider just this specific dependency type, the correlation between frequency distances and performance differences is 0.64, far less than the 0.9 obtained when considering all types. Therefore, although Verb-Object ordering is a typical example, it is not the whole story of word order. Secondly, Verb-Object dependencies can be difficult to decide: they are sometimes long-ranged and interact in complex ways with other words, so merely modeling less ordering information may have complicated effects. Moreover, although our relative-positional Transformer does not explicitly encode word positions, it may still capture certain positional information through relative distances; for example, words in the middle of a sentence have different distance patterns to the words at the beginning or the end. With this knowledge, the model may still prefer the pattern where a verb sits in the middle, as in English's Subject-Verb-Object ordering, and may find sentences in Subject-Object-Verb languages strange. It will be interesting to explore how to weaken or remove this bias.

4.3.3 On Dependency Distances

We further analyze dependency lengths and directions. Here, we combine dependency length and direction into a signed dependency distance d, using negative signs for left-direction dependencies (modifier before head) and positive signs for right-direction ones (head before modifier). We use the average of UAS and LAS as the evaluation score and show the model performances (recall on gold distances) in Figure 9. The pattern looks strange at first glance: at dependency distance d = 1, all transfer models perform quite badly. This can be explained by the relative frequencies of dependency distances shown in Table 3: there is a discrepancy between English and the cross-language average at d = 1. About 80% of the length-1 dependencies in English are in the left direction (modifier before head), while the other languages overall have a larger share of right-direction length-1 dependencies. Since the models are trained only on English, they might be less confident in predicting d = 1 dependencies.

Distance < -5 -5 -4 -3 -2 -1 1 2 3 4 5 > 5
English 2.74 1.49 3.11 7.02 15.45 31.55 7.51 9.84 6.61 4.21 2.68 7.78
Average 4.36 1.44 2.41 4.72 11.83 30.42 14.22 10.49 6.19 3.71 2.34 7.86
Table 3: Relative frequencies (%) of signed dependency distances; English differs notably from the cross-language average at d = 1.
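A sketch of how the figures in Table 3 can be recomputed from a treebank; whether punctuation is excluded and the exact outer buckets (< -5 and > 5) are our reading of the table rather than stated details.

```python
from collections import Counter

def distance_histogram(conllu_path):
    """Relative frequency (%) of signed dependency distances, bucketed as in
    Table 3; distance d = modifier index - head index (negative = modifier first)."""
    counts, total = Counter(), 0
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) != 10 or not cols[0].isdigit():
                continue                        # skip comments, blanks, MWTs, empty nodes
            head = int(cols[6])
            if head == 0 or cols[3] in ("PUNCT", "SYM"):
                continue                        # skip roots and (assumed) punctuation
            d = int(cols[0]) - head
            bucket = max(-6, min(6, d))         # -6 stands for "< -5", 6 for "> 5"
            counts[bucket] += 1
            total += 1
    return {b: round(100.0 * c / total, 2) for b, c in sorted(counts.items())}

# placeholder path to the UD English training file
print(distance_histogram("UD_English-EWT/en_ewt-ud-train.conllu"))
```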

We further compare our models on d = 1 dependencies, and as shown in Figure 10, the familiar pattern appears again: the models that capture less word-ordering information perform better on the languages that have more d = 1 dependencies. This finding indicates that relaxing the model's ability to capture word-order information helps on short-range dependencies whose direction differs from the source language. However, since the model has access to local context information, which is quite important for parsing as we will show, it might still be able to infer certain word-ordering or positional information; thus the improvements are limited. One of the most challenging parts of unsupervised cross-lingual parsing is to model more cross-lingual, shareable features while encoding less language-specific information, that is, to build a flexible yet powerful model. Our effort with self-attention models can be a first step in this direction.

Figure 9: Performance (recall) on gold dependency distances (length + direction). Negative numbers stand for the left direction (modifier before head).
Figure 10: Evaluation differences of the models on d = 1 dependencies. Annotations are the same as in Figure 8; languages are sorted by the percentage (brown curve, right y-axis) of d = 1 dependencies.

4.3.4 Positional Representations in Transformer

In our Transformer encoder, we use a modified version of relative position representations instead of absolute positional embeddings. Table 4 compares these choices; we use the graph-based method for decoding and report evaluation scores averaged over all languages. Firstly, positional information is clearly a key component: without it, the model ("No-Position") performs quite badly, which is natural since, without positional representations, the model has no access to the locality of contexts. The model with absolute positional embeddings ("Absolute") also does not perform as well; perhaps for parsing, relative position representations are more direct and easier to learn. We also try the model with directional distance information ("Relative+dir") as in Shaw et al. (2018), in which distances are augmented with directions (negative or positive signs distinguishing left and right contexts). This model performs well, and interestingly, its performance is close to that of the RNN encoder. Finally, by discarding the direction information, our relative-positional Transformer performs the best, indicating that it captures useful cross-language context information while depending less on language-specific positional and ordering information.

Model UAS% LAS%
Relative 64.88 54.51
Relative+dir 64.22 53.97
Absolute 62.12 52.14
No-Position 29.17 22.54
Table 4: Comparisons of positional representations in Transformer encoder. “Relative” means our distance-based relative position representation, “+dir” means distinguishing directions for relative positions, “Absolute” is the strategy of using absolute positional embeddings, “No-Position” means no positional information.

5 Related Work

Cross-language transfer learning employing deep neural networks has been widely studied in natural language processing Ma and Xia (2014); Guo et al. (2015); Kim et al. (2017); Kann et al. (2017); Cotterell and Duh (2017), speech recognition Xu et al. (2014); Huang et al. (2013), and information retrieval Vulić and Moens (2015); Sasaki et al. (2018); Litschko et al. (2018). Learning the language structure (e.g., morphology, syntax) and transferring knowledge from the source language to the target language is the main underlying challenge, and it has been thoroughly investigated for a wide variety of NLP applications, including sequence tagging Yang et al. (2016); Buys and Botha (2016), named entity recognition Xie et al. (2018), dependency parsing Tiedemann (2015); Agić et al. (2014), and entity coreference resolution and linking Kundu et al. (2018); Sil et al. (2017).

Existing work on zero-shot cross-lingual dependency parsing generally trains a dependency parser on the source language and then runs it directly on the target languages. Training of the monolingual parsers is often delexicalized, i.e., all lexical features are removed from the source treebank Zeman and Resnik (2008); McDonald et al. (2013b), and the underlying feature model relies on a shared part-of-speech (POS) representation utilizing the Universal POS Tagset Petrov et al. (2012). Another pool of prior work improves on the delexicalized approaches by adapting the model to better fit the target languages. Cross-lingual approaches that facilitate the use of lexical features include choosing source-language data points suitable for the target language Søgaard (2011); Täckström et al. (2013), transferring from multiple sources McDonald et al. (2011); Guo et al. (2016), using cross-lingual word clusters Täckström et al. (2012), and lexicon mapping Xiao and Guo (2014); Guo et al. (2015).

Although the objective of neural dependency parsers in this setting is to learn transferable representations of the lexical units, which types of neural architectures are suitable for cross-lingual transfer has not been studied. A recent work Lakew et al. (2018) investigates the pros and cons of the Transformer and recurrent neural networks for multilingual machine translation, but the impact of language differences on neural cross-language learning remains unexplored. In this work, we focus solely on dissecting the two preeminent neural architectures for dependency parsing, with the specific goal of unveiling how a language feature, word order typology, affects transferability, and we present their advantages and limitations.

6 Conclusion

In this work, we present a comprehensive study of how the choice of neural architecture affects cross-lingual transfer learning. We thoroughly examine the strengths and weaknesses of two notable designs of neural architectures (the sequential RNN vs. the self-attentional Transformer), using dependency parsing as our evaluation task. We show that the self-attention model performs better than the sequential one overall, especially when there is a significant difference in word order typology between the target and source languages; however, when the source and target languages are close in word ordering, sequential models can be better. We also investigate the impact of parsing decoders in the cross-lingual setting. The empirical findings suggest that, for cross-lingual transfer learning, which model is more suitable depends on the structural differences (such as word ordering) between the source and target languages; thus, we should take these into consideration in our model designs. In future work, we plan to incorporate prior linguistic knowledge into the models for better cross-lingual transfer.

References

  • Agić et al. (2014) Željko Agić, Jörg Tiedemann, Kaja Dobrovoljc, Simon Krek, Danijela Merkler, and Sara Može. 2014. Cross-lingual dependency parsing of related languages with rich morphosyntactic tagsets. In EMNLP 2014 Workshop on Language Technology for Closely Related Languages and Language Variants.
  • Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5:135–146.
  • Buys and Botha (2016) Jan Buys and Jan A Botha. 2016. Cross-lingual morphological tagging for low-resource languages. arXiv preprint arXiv:1606.04279 .
  • Conneau et al. (2018) Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. ICLR .
  • Cotterell and Duh (2017) Ryan Cotterell and Kevin Duh. 2017. Low-resource named entity recognition with cross-lingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers). volume 2, pages 91–96.
  • Dozat and Manning (2017) Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. ICLR .
  • Dryer (2007) Matthew S Dryer. 2007. Word order. Language typology and syntactic description 1:61–131.
  • Eisner (1996) Jason M Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th conference on Computational linguistics-Volume 1. Association for Computational Linguistics, pages 340–345.
  • Gamallo et al. (2012) Pablo Gamallo, Marcos Garcia, and Santiago Fernández-Lanza. 2012. Dependency-based open information extraction. In Proceedings of the Joint Workshop on Unsupervised and Semi-Supervised Learning in NLP. Association for Computational Linguistics, pages 10–18.
  • Guo et al. (2015) Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). volume 1, pages 1234–1244.
  • Guo et al. (2016) Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing.
  • Hashimoto et al. (2016) Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587 .
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
  • Huang et al. (2013) Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, and Yifan Gong. 2013. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, pages 7304–7308.
  • Jie et al. (2017) Zhanming Jie, Aldrian Obaja Muis, and Wei Lu. 2017. Efficient dependency-guided named entity recognition. In AAAI. pages 3457–3465.
  • Kann et al. (2017) Katharina Kann, Ryan Cotterell, and Hinrich Schütze. 2017. One-shot neural cross-lingual transfer for paradigm completion. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics page 1993–2003.
  • Kim et al. (2017) Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for pos tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2832–2838.
  • Kiperwasser and Goldberg (2016) Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. arXiv preprint arXiv:1603.04351 .
  • Kitaev and Klein (2018) Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. arXiv preprint arXiv:1805.01052 .
  • Kundu et al. (2018) Gourab Kundu, Avirup Sil, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual coreference resolution and its application to entity linking. arXiv preprint arXiv:1806.10201 .
  • Lakew et al. (2018) Surafel M Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. arXiv preprint arXiv:1806.06957 .
  • Litschko et al. (2018) Robert Litschko, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vulić. 2018. Unsupervised cross-lingual information retrieval using monolingual data only. arXiv preprint arXiv:1805.00879 .
  • Liu et al. (2018) Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. International Conference on Learning Representations.
  • Ma et al. (2018) Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
  • Ma and Xia (2014) Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization. In Proceedings of ACL 2014. Baltimore, Maryland, pages 1337–1348.
  • Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov):2579–2605.
  • McCann et al. (2017) Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems. pages 6294–6305.
  • McClosky et al. (2011) David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 1626–1635.
  • McDonald et al. (2005) Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL-2005. Ann Arbor, Michigan, USA, pages 91–98.
  • McDonald and Nivre (2007) Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
  • McDonald et al. (2013a) Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013a. Universal dependency annotation for multilingual parsing. In Proceedings of ACL-2013. Sofia, Bulgaria, pages 92–97.
  • McDonald et al. (2013b) Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, et al. 2013b. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). volume 2, pages 92–97.
  • McDonald et al. (2011) Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pages 62–72.
  • Nivre et al. (2018) Joakim Nivre, Mitchell Abrams, Željko Agić, and et al. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
  • Östling (2015) Robert Östling. 2015. Word order typology through multilingual word alignment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). volume 2, pages 205–211.
  • Peters et al. (2018) Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 .
  • Petrov et al. (2012) Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of LREC-2012. Istanbul, Turkey, pages 2089–2096.
  • Sasaki et al. (2018) Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-lingual learning-to-rank with shared representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). volume 2, pages 458–463.
  • Shaw et al. (2018) Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155 .
  • Sil et al. (2017) Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2017. Neural cross-lingual entity linking. arXiv preprint arXiv:1712.01813 .
  • Smith et al. (2017) Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. ICLR .
  • Søgaard (2011) Anders Søgaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2. Association for Computational Linguistics, pages 682–686.
  • Täckström et al. (2013) Oscar Täckström, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers .
  • Täckström et al. (2012) Oscar Täckström, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 conference of the North American chapter of the association for computational linguistics: Human language technologies. Association for Computational Linguistics, pages 477–487.
  • Tiedemann (2015) Jörg Tiedemann. 2015. Cross-lingual dependency parsing with universal dependencies and predicted pos labels. In Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015). pages 340–349.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. pages 5998–6008.
  • Vulić and Moens (2015) Ivan Vulić and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval. ACM, pages 363–372.
  • Xiao and Guo (2014) Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning. pages 119–129.
  • Xie et al. (2018) Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime Carbonell. 2018. Neural cross-lingual named entity recognition with minimal resources. arXiv preprint arXiv:1808.09861 .
  • Xu et al. (2014) Yong Xu, Jun Du, Li-Rong Dai, and Chin-Hui Lee. 2014. Cross-language transfer learning for deep neural network based speech enhancement. In Chinese Spoken Language Processing (ISCSLP), 2014 9th International Symposium on. IEEE, pages 336–340.
  • Yang et al. (2016) Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270 .
  • Zeman and Resnik (2008) Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In Proceedings of the IJCNLP-08 Workshop on NLP for Less Privileged Languages.
  • Zhou et al. (2016) Joey Tianyi Zhou, Sinno Jialin Pan, Ivor W Tsang, and Shen-Shyang Ho. 2016. Transfer learning for cross-language text categorization through active correspondences construction. In AAAI. pages 2400–2406.
  • Zoph et al. (2016) Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1568–1575.

Appendices

A Details of UD Treebanks

The statistics of the Universal Dependency treebanks we used are summarized in Table 5.

Language Lang. Family Treebank #Sent. #Token(w/o punct)
Arabic (ar) Afro-Asiatic PADT train 6075 223881(206041)
dev 909 30239(27339)
test 680 28264(26171)
Bulgarian (bg) IE.Slavic BTB train 8907 124336(106813)
dev 1115 16089(13822)
test 1116 15724(13456)
Catalan (ca) IE.Romance AnCora train 13123 417587(371981)
dev 1709 56482(50452)
test 1846 57902(51459)
Chinese (zh) Sino-Tibetan GSD train 3997 98608(84988)
dev 500 12663(10890)
test 500 12012(10321)
Croatian (hr) IE.Slavic SET train 6983 154055(135206)
dev 849 19543(17211)
test 1057 23446(20622)
Czech (cs) IE.Slavic PDT,CAC, CLTT,FicTree train 102993 1806230(1542805)
dev 11311 191679(163387)
test 12203 205597(174771)
Danish (da) IE.Germanic DDT train 4383 80378(69219)
dev 564 10332(8951)
test 565 10023(8573)
Dutch (nl) IE.Germanic Alpino, LassySmall train 18058 261180(228902)
dev 1394 22938(19645)
test 1472 22622(19734)
English (en) IE.Germanic EWT train 12543 204585(180303)
dev 2002 25148(21995)
test 2077 25096(21898)
Estonian (et) Uralic EDT train 20827 287859(240496)
dev 2633 37219(30937)
test 2737 41273(34837)
Finnish (fi) Uralic TDT train 12217 162621(138324)
dev 1364 18290(15631)
test 1555 21041(17908)
French (fr) IE.Romance GSD train 14554 356638(316780)
dev 1478 35768(31896)
test 416 10020(8795)
German (de) IE.Germanic GSD train 13814 263804(229338)
dev 799 12486(10809)
test 977 16498(14132)
Hebrew (he) Afro-Asiatic HTB train 5241 137680(122122)
dev 484 11408(10050)
test 491 12281(10895)
Hindi (hi) IE.Indic HDTB train 13304 281057(262389)
dev 1659 35217(32850)
test 1684 35430(33010)
Indonesian (id) Austronesian GSD train 4477 97531(82617)
dev 559 12612(10634)
test 557 11780(10026)
Italian (it) IE.Romance ISDT train 13121 276019(244632)
dev 564 11908(10490)
test 482 10417(9237)
Japanese (ja) Japanese GSD train 7164 161900(144045)
dev 511 11556(10326)
test 557 12615(11258)
Korean (ko) Korean GSD, Kaist train 27410 353133(312481)
dev 3016 37236(32770)
test 3276 40043(35286)
Latin (la) IE.Latin PROIEL train 15906 171928(171928)
dev 1234 13939(13939)
test 1260 14091(14091)
Latvian (lv) IE.Baltic LVTB train 5424 80666(66270)
dev 1051 14585(11487)
test 1228 15073(11846)
Norwegian (no) IE.Germanic Bokmaal, Nynorsk train 29870 489217(432597)
dev 4300 67619(59784)
test 3450 54739(48588)
Polish (pl) IE.Slavic LFG, SZ train 19874 167251(136504)
dev 2772 23367(19144)
test 2827 23920(19590)
Portuguese (pt) IE.Romance Bosque, GSD train 17993 462494(400343)
dev 1770 42980(37244)
test 1681 41697(36100)
Romanian (ro) IE.Romance RRT train 8043 185113(161429)
dev 752 17074(14851)
test 729 16324(14241)
Russian (ru) IE.Slavic SynTagRus train 48814 870474(711647)
dev 6584 118487(95740)
test 6491 117329(95799)
Slovak (sk) IE.Slavic SNK train 8483 80575(65042)
dev 1060 12440(10641)
test 1061 13028(11208)
Slovenian (sl) IE.Slavic SSJ, SST train 8556 132003(116730)
dev 734 14063(12271)
test 1898 24092(22017)
Spanish (es) IE.Romance GSD, AnCora train 28492 827053(730062)
dev 3054 89487(78951)
test 2147 64617(56973)
Swedish (sv) IE.Germanic Talbanken train 4303 66645(59268)
dev 504 9797(8825)
test 1219 20377(18272)
Ukrainian (uk) IE.Slavic IU train 4513 75098(60976)
dev 577 10371(8381)
test 783 14939(12246)
Table 5: Statistics of the UD Treebanks we used. For language family, "IE" is the abbreviation for Indo-European. "w/o punct" gives the number of tokens excluding "PUNCT" and "SYM".

B Hyper-Parameters

Table 6 summarizes the hyper-parameters that we used in our experiments. Most of them are similar to those in Dozat and Manning (2017) and Ma et al. (2018).

Model Layer Hyper-Parameter Value
RNN Encoder encoder layer 3
encoder size 300
MLP arc MLP size 512
label MLP size 128
Training Dropout 0.33
optimizer Adam
learning rate 0.001
batch size 32
Transformer Encoder encoder layer 6
300
512
MLP arc MLP size 512
label MLP size 128
Training Dropout 0.2
optimizer Adam
learning rate 0.0001
batch size 80
Table 6: Hyper-parameters in our experiments.

C Results on Google Universal Dependency Treebanks v2.0

We also ran our models on the Google Universal Dependency Treebanks v2.0 McDonald et al. (2013a), the older dataset used by Guo et al. (2015). The results in Table 7 show that our models consistently perform better.

Language TransformerGraph RNNGraph TransformerStack RNNStack Guo et al. (2015)
German 65.03/55.03 64.60/54.57 63.63/54.40 65.51/55.82 60.35/51.54
French 74.45/63.28 76.75/65.20 73.63/62.76 75.13/64.44 72.93/63.12
Spanish 72.00/61.50 73.99/63.46 71.73/61.42 74.13/64.00 71.90/62.28
Table 7: Comparisons (UAS%/LAS%) on Google Universal Dependency Treebanks v2.0.

D Results on the original training sets

Language TransformerGraph RNNGraph TransformerStack RNNStack
en 99.83/99.78 99.78/99.71 99.71/99.65 99.13/98.88
it 80.37/75.48 80.89/75.99 79.15/74.17 79.05/73.91
no 80.72/72.45 80.59/72.41 80.06/71.60 81.46/72.75
fr 79.31/74.73 79.99/75.52 78.62/74.02 76.84/72.22
sv 80.07/71.91 80.42/72.39 79.45/71.28 80.87/72.25
bg 80.25/68.88 78.39/67.03 79.19/67.66 79.66/68.22
pt 77.06/69.33 77.33/69.91 75.84/68.22 75.39/67.75
da 75.75/67.12 75.95/67.41 75.18/66.55 76.98/67.50
es 73.91/66.48 74.39/67.03 72.84/65.38 72.46/64.78
ca 74.40/65.73 74.94/66.21 73.01/64.42 72.75/63.68
sk 76.07/62.75 74.67/61.15 75.93/61.97 75.37/60.94
pl 75.32/63.26 73.12/59.76 74.28/61.46 73.21/61.02
nl 68.98/60.00 68.37/59.52 68.22/59.02 69.16/60.11
sl 69.13/58.92 67.35/56.87 67.74/57.08 68.95/58.26
de 67.23/58.27 66.64/57.48 66.10/56.89 65.88/56.63
uk 65.70/57.48 64.77/56.40 64.10/55.83 65.82/57.13
lv 71.69/50.43 72.48/50.85 70.24/48.97 71.60/49.56
ro 65.31/54.22 63.17/52.16 63.03/51.95 61.78/50.52
cs 63.04/53.92 61.75/52.91 61.11/51.91 62.21/52.48
hr 61.57/52.40 59.74/50.37 59.94/50.43 60.44/50.68
ru 60.50/51.35 59.55/50.17 59.01/49.71 60.71/51.57
et 66.63/45.58 65.78/45.01 64.94/44.04 65.06/44.33
fi 64.64/46.21 64.63/46.22 63.07/44.82 64.74/46.09
he 58.32/49.80 57.75/49.07 56.36/47.62 58.79/43.83
id 47.92/41.93 45.07/39.91 46.23/40.16 45.62/39.67
la 49.04/35.48 47.12/34.36 46.78/33.56 45.26/31.97
ar 38.74/28.24 33.66/25.44 34.25/24.69 33.31/24.86
hi 36.01/27.24 29.59/21.75 32.02/23.79 26.37/18.56
ko 34.62/15.14 33.91/14.16 32.70/13.77 32.95/13.14
zh* 41.05/23.85 40.11/23.02 39.49/22.68 39.89/22.49
ja* 28.19/21.74 18.23/12.68 20.53/13.78 15.21/10.37
Average 64.88/54.51 63.55/53.31 63.19/52.81 63.12/52.45
Table 8: Results (average UAS%/LAS% over 5 runs) on the original training sets. (Languages are generally sorted by average evaluation scores of all models, ‘*’ refers to results of delexicalized models.)

E Details about augmented dependency types

Type Avg. Freq. (%) #Lang. Type Avg. Freq. (%) #Lang.
(ADP, NOUN, case) 7.47 31 (PROPN, VERB, nsubj) 0.81 30
(PUNCT, VERB, punct) 6.91 30 (PRON, VERB, obj) 0.77 30
(NOUN, NOUN, nmod) 4.97 31 (NOUN, ROOT, root) 0.66 31
(ADJ, NOUN, amod) 4.92 31 (VERB, VERB, xcomp) 0.61 28
(DET, NOUN, det) 4.69 30 (VERB, VERB, ccomp) 0.60 30
(VERB, ROOT, root) 4.31 31 (ADP, PRON, case) 0.57 29
(NOUN, VERB, obl) 3.96 30 (AUX, NOUN, cop) 0.57 28
(NOUN, VERB, obj) 3.10 31 (ADV, ADJ, advmod) 0.54 29
(NOUN, VERB, nsubj) 2.89 31 (AUX, ADJ, cop) 0.50 27
(PUNCT, NOUN, punct) 2.75 30 (PROPN, VERB, obl) 0.48 29
(ADV, VERB, advmod) 2.43 31 (PRON, VERB, obl) 0.44 30
(AUX, VERB, aux) 2.29 28 (ADV, NOUN, advmod) 0.41 28
(PRON, VERB, nsubj) 1.53 30 (ADJ, ROOT, root) 0.39 29
(ADP, PROPN, case) 1.46 29 (PRON, NOUN, nmod) 0.39 22
(NOUN, NOUN, conj) 1.32 30 (NOUN, ADJ, obl) 0.37 25
(VERB, NOUN, acl) 1.31 31 (PROPN, PROPN, conj) 0.35 29
(SCONJ, VERB, mark) 1.27 28 (NOUN, ADJ, nsubj) 0.35 30
(CCONJ, VERB, cc) 1.18 30 (CCONJ, ADJ, cc) 0.29 28
(PROPN, NOUN, nmod) 1.14 30 (PUNCT, NUM, punct) 0.26 24
(CCONJ, NOUN, cc) 1.13 30 (NOUN, NOUN, nsubj) 0.25 31
(NUM, NOUN, nummod) 1.11 31 (ADJ, ADJ, conj) 0.25 26
(PROPN, PROPN, flat) 1.09 26 (CCONJ, PROPN, cc) 0.22 26
(VERB, VERB, conj) 1.05 30 (PRON, VERB, iobj) 0.21 21
(PUNCT, PROPN, punct) 0.94 29 (ADV, ADV, advmod) 0.19 21
(VERB, VERB, advcl) 0.89 30 (NOUN, NOUN, appos) 0.18 23
(PUNCT, ADJ, punct) 0.89 30 (PROPN, VERB, obj) 0.17 24
Table 9: Selected augmented dependency types sorted by their average frequencies. “#Lang.” denotes in how many languages the specific type appears. Our selecting criterion is “”.

F Results on specific dependency types for Czech

In Table 10, we show results for Czech on some dependency types, with evaluation breakdowns by dependency direction. We select Czech mainly for two reasons: (1) it has the largest dataset, and (2) Czech is known for its relatively flexible word order. Generally, we can see that models that are more flexible about word ordering perform better. Interestingly, for object and subject types, the LAS scores of all models are quite low even when the correct heads are predicted. The reason might be that even the relative-positional Transformer captures some positional information, which in turn reveals word-ordering information to some extent.

(ADP, NOUN, case): (mod-first% in English is 99.92%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 99.99% 75.34/75.34 74.62/74.61 74.46/74.43 74.17/74.08
head-first 0.01%
all 100.00% 75.33/75.33 74.61/74.61 74.45/74.43 74.17/74.07
(NOUN, NOUN, nmod): (mod-first% in English is 4.72%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 0.97%
head-first 99.03% 21.38/17.85 18.55/16.20 20.49/16.61 22.51/19.16
all 100.00% 21.64/17.68 18.86/16.05 20.77/16.45 22.78/18.98
(ADJ, NOUN, amod): (mod-first% in English is 99.01%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 92.99% 88.93/88.92 89.42/89.41 85.39/85.21 87.26/86.37
head-first 7.01% 41.80/37.03 36.52/32.36 34.82/27.19 40.59/19.85
all 100.00% 85.63/85.29 85.72/85.41 81.85/81.14 83.98/81.71
(NOUN, VERB, obl): (mod-first% in English is 9.62%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 37.80% 48.84/40.33 46.39/38.49 48.75/41.08 50.16/41.64
head-first 62.20% 62.81/55.97 60.38/53.41 62.22/55.37 61.73/55.32
all 100.00% 57.53/50.06 55.09/47.77 57.13/49.97 57.36/50.15
(NOUN, VERB, obj): (mod-first% in English is 0.72%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 20.65% 55.56/0.64 53.75/0.46 54.08/0.37 60.34/0.18
head-first 79.35% 73.18/65.24 71.30/62.28 72.12/63.81 72.76/64.65
all 100.00% 69.54/51.90 67.68/49.52 68.39/50.71 70.20/51.34
(NOUN, VERB, nsubj): (mod-first% in English is 85.07%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 60.22% 61.42/58.33 58.12/54.51 60.88/58.24 60.67/58.98
head-first 39.78% 64.07/3.83 62.93/3.18 62.38/2.97 59.94/4.42
all 100.00% 62.47/36.65 60.03/34.09 61.48/36.25 60.38/37.28
(ADV, VERB, advmod): (mod-first% in English is 58.82%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 70.15% 88.23/87.49 86.43/85.48 86.65/85.30 86.64/83.72
head-first 29.85% 65.79/65.28 65.02/64.33 65.33/64.35 61.93/60.53
all 100.00% 81.53/80.86 80.04/79.17 80.29/79.05 79.26/76.80
(AUX, VERB, aux): (mod-first% in English is 99.64%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 83.71% 88.78/88.19 84.44/83.52 89.03/86.59 82.54/76.33
head-first 16.29% 68.18/65.28 54.59/50.87 63.96/54.02 56.67/20.24
all 100.00% 85.42/84.46 79.57/78.20 84.94/81.28 78.32/67.19
(VERB, VERB, advcl): (mod-first% in English is 31.02%.)
Direction Percentage TransformerGraph RNNGraph TransformerStack RNNStack
mod-first 41.75% 57.51/55.61 56.98/55.60 57.54/55.03 54.74/51.66
head-first 58.25% 71.52/56.68 67.39/56.08 67.27/54.17 65.93/54.13
all 100.00% 65.67/56.23 63.04/55.88 63.21/54.53 61.26/53.10
Table 10: Evaluation breakdowns (UAS%/LAS%) on dependency directions for Czech on some specific dependency types. “mod-first” means the dependency edges whose modifier is before head, “head-first” means the opposite, and “span” indicates including all the tokens between modifier and head words. “–” replaces results that are unstable because of rare appearance (below 1%).