Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems

11/21/2019 ∙ by Zihan Liu, et al. ∙ The Hong Kong University of Science and Technology

Recently, data-driven task-oriented dialogue systems have achieved promising performance in English. However, developing dialogue systems that support low-resource languages remains a long-standing challenge due to the absence of high-quality data. In order to circumvent the expensive and time-consuming data collection, we introduce Attention-Informed Mixed-Language Training (MLT), a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems. It leverages very few task-related parallel word pairs to generate code-switching sentences for learning the inter-lingual semantics across languages. Instead of manually selecting the word pairs, we propose to extract source words based on the scores computed by the attention layer of a trained English task-related model and then generate word pairs using existing bilingual dictionaries. Furthermore, extensive experiments with different cross-lingual embeddings demonstrate the effectiveness of our approach. Finally, with very few word pairs, our model achieves significant zero-shot adaptation performance improvements in both cross-lingual dialogue state tracking and natural language understanding (i.e., intent detection and slot filling) compared to the current state-of-the-art approaches, which utilize a much larger amount of bilingual data.








Over the past few years, the demand for task-oriented dialogue systems has increased rapidly across the world, following their promising performance in English [21, 18]. However, most dialogue systems are unable to support numerous low-resource languages due to the scarcity of high-quality data, which creates a massive performance gap between low-resource language systems (e.g., Thai) and high-resource systems (e.g., English). A straightforward strategy to address this problem is to collect more data and train each monolingual dialogue system separately, but collecting new data for every single language is costly and resource-intensive.

Zero-shot adaptation is an effective way to circumvent the data collection process when no training data is available, by transferring the learned knowledge from a high-resource source language to low-resource target languages. To date, only a few studies have examined zero-shot learning in task-oriented dialogue systems [2, 16], and two problems remain in this line of research: (1) the existing methods require a sufficiently large parallel corpus, which is not ideal for rare languages where bilingual resources are minimal, and (2) the imperfect alignments of cross-lingual embeddings such as MUSE [3], as well as of the large cross-lingual models XLM [10] and Multilingual BERT [5], limit zero-shot cross-lingual transferability.

Figure 1: Illustration of the mixed-language training (MLT) approach and zero-shot transfer. EN denotes an English text, IT an Italian text, and CS a code-switching text (i.e., a mixed-language sentence). In the training step, the code-switching sentence generator replaces task-related words with their corresponding translations in the target language to generate code-switching sentences. In the zero-shot transfer step, we leverage cross-lingual word embeddings and directly adapt the trained attention model to the target language.

To address these problems, we propose attention-informed mixed-language training (MLT), a new framework that leverages an extremely small number of bilingual word pairs to build zero-shot cross-lingual task-oriented dialogue systems. (Throughout, "code-switching" is used interchangeably with "mixed-language".) The word pairs are created by choosing words from the English training data using the attention scores of a trained English model. We then pair these English words with target words using existing bilingual dictionaries, and use the target words to replace keywords in the training data, building code-switching sentences. The intuition behind training with code-switching sentences is to help the model identify the selected important keywords, as well as their semantically similar counterparts in the target language. In addition, we incorporate MUSE, RCSLS [7], and the cross-lingual language models XLM and Multilingual BERT for generating cross-lingual embeddings.

During the training phase, our model learns to capture important keywords in code-switching sentences that mix source and target language words. We conjecture that learning with task-related keywords of the target language helps the model capture other task-related words with similar semantics, for example synonyms, or words in the same category, such as the days of the week "Domingo" (Sunday) and "Lunes" (Monday). During the zero-shot testing phase, the inter-lingual understanding learned by the model alleviates the main issue of the imperfect alignment of cross-lingual embeddings. The experimental results on unseen languages show that MLT outperforms existing baselines by significant margins in both dialogue state tracking and natural language understanding on all languages while using many fewer resources. This demonstrates that our approach is effective for low-resource languages where only limited parallel data is available. (Our code is publicly available.)

Contributions in our work are summarized as follows:

  • We investigate the extremely low bilingual resources setting for zero-shot cross-lingual task-oriented dialogue systems.

  • Our approach achieves state-of-the-art zero-shot cross-lingual performance in both dialogue state tracking and natural language understanding of task-oriented dialogue systems using many fewer bilingual resources.

  • We study the performance of current cross-lingual pre-trained language models (namely Multilingual BERT and XLM) on zero-shot cross-lingual dialogue systems, and conduct quantitative analyses while adapting them to cross-lingual dialogue systems.

Related Work

Task-oriented Dialogue Systems

Dialogue state tracking (DST) and natural language understanding (NLU) are the key components for understanding user inputs and building dialogue systems.

Dialogue State Tracking

Mrkšić et al. [12] proposed to utilize pre-trained word vectors by composing them into a distributed representation of user utterances and to resolve morphological ambiguity. Zhong et al. [21] improved the tracking of rare slot values through slot-specific local modules.

Natural Language Understanding

Liu and Lane (2016) leveraged an attention mechanism to learn where to pay attention in the input sequence for the joint intent detection and slot filling task. Goo et al. (2018) introduced slot-gated models to learn the relationship between intent and slot attention vectors and better capture the semantics of user utterances and queries.

Multilingual Task-oriented Dialogue Systems

A number of multilingual task-oriented dialogue system datasets have been published lately [13, 16], enabling the evaluation of approaches for cross-lingual dialogue systems. Mrkšić et al. [13] annotated two additional languages (German and Italian) for the dialogue state tracking dataset WOZ 2.0 [12] and trained a unified framework to cope with multiple languages. Meanwhile, Schuster et al. [16] introduced a multilingual NLU dataset and highlighted the need for more sophisticated cross-lingual methods.

Figure 2: Dialogue State Tracking Model (left) and Natural Language Understanding Model (right). For each model, we apply an attention layer to learn important task-related words.

Cross-lingual Transfer Learning

Cross-lingual transfer learning, which aims to discover the underlying connections between a source and a target language, has recently become a popular topic. Conneau et al. [3] proposed to conduct cross-lingual word embedding mapping with zero supervision signals and achieved promising results. Devlin et al. [5] and Lample and Conneau [10] leveraged large monolingual and bilingual corpora to align cross-lingual sentence-level representations and achieved state-of-the-art performance on many cross-lingual tasks. Recently, cross-lingual transfer algorithms have been applied to natural language processing tasks such as named entity recognition (NER) [14], entity linking [15], POS tagging [8, 20], and dialogue systems [2, 17, 11]. Nevertheless, to the best of our knowledge, only a few studies have focused on task-oriented dialogue systems, and none of them investigated the extremely low bilingual resources scenario.

Mixed-Language Training

As shown in Figure 1, in the mixed-language training step, our model is trained on code-switching sentences generated from source language sentences by replacing selected source words with their translations. In the zero-shot test step, our model transfers directly to the unseen target language.

Attention-based Selection

Intuitively, the attention layer of a trained model focuses on the keywords that are relevant to the task. As shown in Figure 1, we propose to utilize the scores computed by the attention layer of a model trained on source language (English) data to select keywords for the task. Concretely, we first collect source words by taking the word with the top-1 attention score from each source utterance, since the source word with the highest attention score is the most important for the given task. However, some noisy (unimportant) words might still exist in the collection. Hence, we count how many times each word is selected and filter out words that are seldom selected; we then choose the top-k most frequent words in the training set as our final word candidates and pair them using an existing bilingual dictionary. We denote the selected word pairs as a key-value dictionary D, whose keys and values are words in the source and target language, respectively.
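The selection procedure above can be sketched in a few lines of Python. This is an illustrative sketch rather than the authors' implementation; the function name, the `min_count` filtering threshold, and the input layout (parallel lists of tokens and attention weights) are our own assumptions.

```python
from collections import Counter

def select_word_pairs(utterances, attention_scores, bilingual_dict,
                      min_count=2, top_k=20):
    """Attention-informed word-pair selection (illustrative sketch).

    utterances:       tokenized source (English) sentences
    attention_scores: per-token attention weights from the trained
                      English model, one list per utterance
    bilingual_dict:   source word -> target word (e.g. from MUSE)
    """
    counts = Counter()
    for tokens, scores in zip(utterances, attention_scores):
        # keep only the top-1 attended word of each utterance
        best = max(range(len(tokens)), key=lambda i: scores[i])
        counts[tokens[best]] += 1

    # drop rarely selected (noisy) words, keep the k most frequent
    frequent = [w for w, c in counts.most_common() if c >= min_count][:top_k]

    # pair the surviving source words via the bilingual dictionary
    return {w: bilingual_dict[w] for w in frequent if w in bilingual_dict}
```

With a handful of utterances whose highest attention repeatedly falls on the same word, only that word survives the frequency filter and is paired through the dictionary.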

Training and Adaptation

Given a source language sentence X_src, we replace each of its words with the corresponding target word whenever the word is present in D, generating a code-switching sentence X_cs. As illustrated in Figure 1, we use cross-lingual word embeddings for both source and target language words.

X_cs = CS(X_src, D),    ŷ = AttnModel(E(X_cs))

where CS represents the code-switching sentence generator in Figure 1, AttnModel represents the attention model, and E denotes the cross-lingual word embeddings. We specifically use cross-lingual word embeddings from MUSE [3] and RCSLS [7], which are aligned representations of the source and target languages, to transfer the learned knowledge from the source language to the target language. By applying mixed-language training, our model can cope with the problem of imperfect alignment of cross-lingual word embeddings. In the zero-shot test step, the attention layer is still able to focus on the same or semantically similar target language keywords, as it does in the mixed-language training step, which improves the robustness of cross-lingual transfer.
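With such a dictionary in hand, the code-switching sentence generator itself is little more than a lookup. This minimal sketch assumes tokenized input and a one-word-to-one-word dictionary, both simplifications of our own:

```python
def code_switch(tokens, word_pairs):
    """Replace every source word found in the word-pair dictionary with
    its target-language translation; all other words are kept as-is."""
    return [word_pairs.get(tok, tok) for tok in tokens]
```

For example, with the pairs {"sunday": "domingo", "alarm": "alarma"}, the English sentence "delete my sunday alarm" becomes the code-switching sentence "delete my domingo alarma".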

Cross-lingual Dialogue Systems

In this section, we focus on applying our mixed-language training approach to cross-lingual task-oriented dialogue systems. We design model architectures for dialogue state tracking and natural language understanding (i.e., intent detection and slot filling) as follows.

Dialogue State Tracking

Our dialogue state tracking (DST) model, illustrated in Figure 2, is modified from Chen et al. [2]. We cast DST as a classification problem over three inputs at each dialogue turn t: (i) the user utterance, (ii) the slot candidate, and (iii) the system dialogue acts, which consist of a system request and a system confirmation. For example, when the system requests more information by asking "Do you have an area preference?", the requested slot is "area"; when the system confirms by saying "The Vietnamese food is in the cheap price range," the confirmed slot is "price range" and the confirmed value is "cheap". In short, our model can be decomposed into the following three components:

Utterance Encoder

We use a bi-directional LSTM (BiLSTM) to encode the user utterance {w_1, ..., w_n} and an attention mechanism [6] on top of the BiLSTM to generate an utterance representation r, where w_i is the word vector of the i-th token and n is the length of the utterance. We formalize the utterance encoder as:

h_i = BiLSTM(w_i)
a_i = W^T h_i
α_i = exp(a_i) / Σ_j exp(a_j)
r = Σ_i α_i h_i

where W is a trainable weight vector in the attention layer, and α_i is the attention score of each hidden state h_i.
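The attention pooling over the BiLSTM hidden states (scoring each state with the trainable weight vector, normalizing with a softmax, and taking the weighted sum) can be illustrated numerically. This NumPy sketch assumes precomputed hidden states and leaves out the BiLSTM itself:

```python
import numpy as np

def attention_pool(h, w):
    """h: (n, d) BiLSTM hidden states; w: (d,) trainable weight vector.
    Returns the pooled utterance representation and the attention scores."""
    a = h @ w                          # unnormalized score per token
    alpha = np.exp(a - a.max())        # softmax (shifted for stability)
    alpha = alpha / alpha.sum()
    r = alpha @ h                      # weighted sum of hidden states
    return r, alpha
```

A state that scores much higher than the others receives nearly all of the attention mass, which is exactly the behavior the word-pair selection step relies on.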

Context Gate

Given a candidate slot and the system acts as inputs, we compute the context gate g_t by summing three individual gates: (i) the candidate gate g_c, (ii) the request gate g_r, and (iii) the confirm gate g_conf:

g_t = g_c + g_r + g_conf

where each gate is computed from the word embeddings of its inputs, obtained via the embedding look-up table E, using trainable parameter matrices, a Hadamard product ⊙, and a sigmoid function σ.

Slot Value Prediction

Finally, we concatenate the utterance representation and the context gate, and pass the result through a linear layer and a softmax layer for prediction.

Natural Language Understanding

Our NLU model, illustrated in Figure 2 (right), treats intent detection and slot filling as a multi-task problem. We describe its components as follows:

Slot Filling

We use a BiLSTM-CRF [9], which combines a BiLSTM with a conditional random field (CRF) sequence labeling model, for slot prediction. We pass the hidden states of the BiLSTM through a softmax layer and then pass the resulting label probability vectors through the CRF layer to compute the final predictions.

Intent Prediction

We place an attention layer over the hidden states of the BiLSTM and predict the intent of the user utterance through a softmax projection layer. The attention layer is the same as the one in the utterance encoder of the dialogue state tracking model.

Figure 3: Illustration of how we leverage a transformer encoder to incorporate subword embeddings into word-level representations. The parameters in the transformer encoder are shared for all subword embeddings.
German (per metric, the three values correspond to BASE, MLT^O, and MLT^A)
Model                 | slot acc.         | joint goal acc.   | request acc.
MUSE                  | 60.69 68.58 71.38 | 21.57 30.61 36.51 | 74.22 80.11 82.99
XLM (MLM)             | 52.21 66.26 68.25 | 14.09 29.45 31.29 | 75.15 78.48 80.22
 + Transformer        | 53.81 65.81 68.55 | 13.97 30.87 32.98 | 76.83 78.95 81.34
XLM (MLM+TLM)         | 58.04 65.39 66.25 | 16.34 29.22 29.83 | 75.73 78.86 79.12
 + Transformer        | 56.52 66.81 68.88 | 16.59 31.76 33.12 | 78.56 81.59 82.96
Multi. BERT           | 57.61 67.49 69.48 | 14.95 30.69 32.23 | 75.31 83.66 86.27
 + Transformer        | 57.43 68.33 70.77 | 15.67 31.28 34.36 | 78.59 84.37 86.97
Ontology Matching     | 24                | -                 | 21
Translate Train       | 41                | -                 | 42
Bilingual Dictionary  | 51.74             | 28.07             | 72.54
Bilingual Corpus      | 55                | 30.84             | 68.32
Supervised Training   | 85.78             | 78.89             | 84.02

Italian (per metric, the three values correspond to BASE, MLT^O, and MLT^A)
Model                 | slot acc.         | joint goal acc.   | request acc.
MUSE                  | 60.59 73.55 76.88 | 20.66 36.88 39.35 | 79.09 82.24 84.23
Multi. BERT           | 53.34 65.49 69.48 | 12.88 26.45 31.41 | 76.12 84.58 85.18
 + Transformer        | 54.56 66.87 71.45 | 12.63 28.59 33.35 | 77.34 82.93 84.96
Ontology Matching     | 23                | -                 | 21
Translate Train       | 48                | -                 | 51
Bilingual Dictionary  | 73                | 39.01             | 77.09
Bilingual Corpus      | 72                | 41.23             | 81.23
Supervised Training   | 88.92             | 80.22             | 91.05
Table 1: Zero-shot results for the target languages (German, top; Italian, bottom) on Multilingual WOZ 2.0. MLT^A denotes our approach (attention-informed MLT), which utilizes the same number of word pairs (90) as the ontology-based selection MLT^O. The Bilingual Dictionary and Bilingual Corpus rows are the results of XL-NBT; note that the goal accuracy in Chen et al. [2] is calculated as the slot accuracy in our paper, so we reran the models using the provided code to calculate joint goal accuracy. The Ontology Matching and Translate Train rows are results from Chen et al. [2]. For the models without the transformer encoder, we instead sum the subword embeddings based on the word boundaries to get word-level representations. Due to the absence of the Italian language in the XLM models, we cannot report XLM results for Italian.

Cross-lingual Language Model

We investigate the effectiveness of the current powerful cross-lingual pre-trained language models XLM and Multilingual BERT, and deploy MLT with them for the zero-shot cross-lingual DST and NLU tasks. Lample and Conneau [10] proposed cross-lingual language model pre-training (XLM) with two objective functions: masked language modeling (MLM) and translation language modeling (TLM). MLM leverages a monolingual corpus, TLM utilizes a bilingual corpus, and MLM+TLM incorporates both. Pre-trained XLM models for 15 languages are publicly available. Multilingual BERT is trained on the monolingual corpora of 104 languages, and the model is also publicly available.

In order to handle multiple languages and reduce the vocabulary size, both methods leverage subword units to tokenize each sentence. However, the outputs of the DST and NLU tasks depend on word-level information. Hence, we propose to learn the mapping between the subword level and the word level by adding a transformer encoder [4] on top of the subword units and learning to encode them into word-level embeddings, as described in Figure 3. We then use the same model structures as illustrated in Figure 2 for the DST and NLU tasks.
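The simpler alternative mentioned in the caption of Table 1, summing subword embeddings within word boundaries instead of using the transformer encoder, can be sketched as follows. The `word_ids` layout (one word index per subword) is our own assumption about how the boundaries are represented:

```python
import numpy as np

def subwords_to_words(subword_emb, word_ids):
    """subword_emb: (n_subwords, d) embeddings from XLM / Multi. BERT.
    word_ids: for each subword, the index of the word it belongs to,
    e.g. [0, 0, 1] when the first two subwords form the first word.
    Returns (n_words, d) word-level representations (boundary sums)."""
    n_words = max(word_ids) + 1
    out = np.zeros((n_words, subword_emb.shape[1]))
    for emb, wid in zip(subword_emb, word_ids):
        out[wid] += emb                # sum subwords of the same word
    return out
```

The transformer-encoder variant replaces this fixed sum with a learned aggregation over the same subword groups.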



Dialogue State Tracking

Wizard of Oz (WOZ), a restaurant-domain dataset, is used for training and evaluating dialogue state tracking models in English. It was enlarged into WOZ 2.0 by adding more dialogues, and recently, Mrkšić et al. [13] expanded WOZ 2.0 into Multilingual WOZ 2.0 by including two more languages (German and Italian). Multilingual WOZ 2.0 contains 1200 dialogues for each language, where 600 dialogues are used for training, 200 for validation, and 400 for testing. The corpus contains three goal-tracking slot types (food, price range, and area) and a request slot type. The model has to track the value for each goal-tracking slot and each request slot.

Natural Language Understanding

Recently, Schuster et al. [16] proposed a multilingual task-oriented natural language understanding dialogue dataset, which contains English, Spanish, and Thai utterances across three domains (alarm, reminder, and weather). The corpus includes 12 intent types and 11 slot types; the model has to detect the intent of the user utterance and conduct slot filling for each word of the utterance.

(Per metric, the three values correspond to BASE, MLT^H, and MLT^A.)
Model                 | Spanish intent acc. | Spanish slot F1   | Thai intent acc.  | Thai slot F1
RCSLS                 | 37.67 77.59 87.05   | 22.23 59.12 57.75 | 35.12 68.63 81.44 | 8.72 29.44 30.42
XLM (MLM)             | 60.8 75.11 83.95    | 38.55 63.29 66.11 | 37.59 46.34 65.31 | 8.12 19.03 20.43
 + Transformer        | 62.33 82.83 85.63   | 41.67 66.53 67.95 | 40.31 57.27 68.55 | 11.45 26.02 27.45
XLM (TLM+MLM)         | 62.48 81.34 84.91   | 42.27 65.71 66.48 | 31.62 50.34 65.25 | 7.91 19.22 19.88
 + Transformer        | 65.32 83.79 87.48   | 44.39 66.03 68.55 | 37.53 68.62 72.59 | 12.84 26.56 27.98
Multi. BERT           | 73.73 77.51 86.54   | 51.73 74.51 74.43 | 28.15 52.25 70.57 | 10.62 24.41 28.47
 + Transformer        | 74.15 82.9 87.88    | 54.28 74.88 73.89 | 26.54 53.84 73.46 | 11.34 26.05 27.12
Zero-shot SLU         | 46.64               | 15.41             | 35.64             | 12.11
Multi. CoVe           | 53.34               | 22.50             | 66.35             | 32.52
Multi. CoVe w/ auto   | 53.89               | 19.25             | 70.70             | 35.62
Translate Train       | 85.39               | 72.87             | 95.85             | 55.43
Table 2: Results on the multilingual NLU dataset [16]; the number of word pairs for both MLT^H and MLT^A is 20. We implemented the Zero-shot SLU model [17] and tested it on the same dataset.

Experimental Setup

We explore two training settings: (1) without mixed-language training (BASE), and (2) with mixed-language training (MLT). The former trains models using only English data, after which we directly transfer to the target language by leveraging the same cross-lingual word embeddings as our model. The latter utilizes code-switching sentences as the training data. We evaluate our model with the cross-lingual embeddings MUSE [3], RCSLS [7], XLM [10], and Multilingual BERT (Multi. BERT) [5].

We describe our baselines for the dialogue state tracking task in the following:

Ontology-based Word Selection (MLT^O)

We use dialogue ontology word pairs for mixed-language training since ontology words are all task-related and essential for the DST task.


XL-NBT

Chen et al. [2] proposed XL-NBT, a teacher-student framework for cross-lingual neural belief tracking (i.e., dialogue state tracking) that leverages a bilingual corpus or a bilingual dictionary. The model learns to generate close representations for semantically similar sentences across languages.

Ontology Matching

Chen et al. [2] directly used exact string matching of the user utterance against the ontology words to discover the slot value for each slot type.

Translate Train

Chen et al. [2] used an external bilingual corpus to train a machine translation system, which translates the English dialogue training data into the target languages (German and Italian) to serve as "annotated" data for supervising the training of DST systems in the target languages.

Supervised Training

We assume the existence of annotated dialogue state tracking data for the target languages. This setting indicates the upper bound of the DST model.

We describe our baselines for the natural language understanding task in the following:

Human-based Word Selection (MLT^H)

Due to the absence of an ontology in the NLU task, we crowd-source the top-20 task-related source words in the English training set.

Zero-shot SLU

Upadhyay et al. [17] used cross-lingual word embeddings [1] to conduct zero-shot transfer learning for the NLU task.

Multi. CoVe

Schuster et al. [16] used Multilingual CoVe [19] to encode phrases with similar meanings into similar vector spaces across languages.

Multi. CoVe w/ auto.

Based on Multilingual CoVe, Schuster et al. [16] added an autoencoder objective to produce more general representations for semantically similar sentences across languages.

Translate Train

Schuster et al. [16] trained a machine translation system using a bilingual corpus, and then translated the English NLU data into the target languages (Spanish and Thai) for supervised training.

Evaluation Metrics

Dialogue State Tracking

We use joint goal accuracy and slot accuracy to evaluate model performance on the goal-tracking slots. Joint goal accuracy compares the predicted dialogue state to the ground truth at each dialogue turn, and the prediction is correct if and only if the predicted values of all slots exactly match the ground truth values. Slot accuracy, in contrast, compares each slot-value pair individually to its ground truth. We use request accuracy to evaluate model performance on the request slot; similar to joint goal accuracy, the prediction is correct if and only if all of the user's requests for information are correctly identified.
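For concreteness, the two goal-tracking metrics can be sketched as follows, with each dialogue state represented as a slot-to-value dict (a representation we assume for illustration):

```python
def joint_goal_accuracy(pred_states, gold_states):
    """A turn counts as correct only if ALL predicted slot values
    exactly match the ground truth state."""
    correct = sum(p == g for p, g in zip(pred_states, gold_states))
    return correct / len(gold_states)

def slot_accuracy(pred_states, gold_states, slots):
    """Each slot-value pair is compared to its ground truth
    individually, then averaged over all turns and slots."""
    total = correct = 0
    for p, g in zip(pred_states, gold_states):
        for s in slots:
            total += 1
            correct += p.get(s) == g.get(s)
    return correct / total
```

Getting one of two slots wrong in one of two turns thus halves the joint goal accuracy but costs only a quarter of the slot accuracy.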

Natural Language Understanding

We use accuracy to evaluate intent prediction and the BIO-based F1-score to evaluate slot filling.
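A minimal span-level (BIO-based) F1 can be sketched as follows; the handling of malformed tag sequences (e.g. a stray I- tag starting a span) is our own choice and may differ from the official evaluation script:

```python
def bio_spans(tags):
    """Extract (label, start, end) spans from a BIO tag sequence."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):        # sentinel to flush
        if tag.startswith("B-") or tag == "O":
            if label is not None:
                spans.append((label, start, i))   # close the open span
            label, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
        elif tag.startswith("I-") and label != tag[2:]:
            if label is not None:
                spans.append((label, start, i))
            label, start = tag[2:], i             # treat stray I- as a new span
    return set(spans)

def slot_f1(pred, gold):
    """Span-level F1 over predicted vs. gold BIO tag sequences."""
    tp = fp = fn = 0
    for p, g in zip(pred, gold):
        ps, gs = bio_spans(p), bio_spans(g)
        tp += len(ps & gs)
        fp += len(ps - gs)
        fn += len(gs - ps)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

A span is counted as correct only when its label and both boundaries match, so predicting half of a gold span earns no credit.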

Figure 4: The dynamics of the NLU task: intent and slot-filling results with different numbers of word pairs on the Spanish test data using RCSLS. The word pairs are chosen according to word frequency in the source language (English) training set. For (a) and (b), we evaluate on all test data; for (c) and (d), we evaluate only on the filtered test data that do not contain any of the word pairs.
Figure 5: Attention on words in both the training and testing phases. A darker color indicates a higher attention score and greater importance.

Results & Discussion

Quantitative Analysis

The DST and NLU results are shown in Tables 1 and 2. In most cases, our models using MLT significantly outperform the existing state-of-the-art zero-shot baselines, and we achieve a result comparable to Multi. CoVe w/ auto on Thai. Notably, our models achieve this impressive performance using only a few word pairs and many fewer bilingual resources than sophisticated models such as Multi. CoVe or the Bilingual Corpus setting.

We observe that ontology matching is an intuitive method for attempting zero-shot transfer in low-resource languages. However, this method is ineffective because it cannot detect synonyms or paraphrases. Applying ontology word pairs in MLT (MLT^O) copes with this problem and outperforms the BASE models by a vast margin. Interestingly, attention-based word selection (MLT^A) consistently outperforms MLT^O because the attention-based selection mechanism captures not only important ontology keywords but also keywords that are not listed in the ontology (i.e., synonyms or paraphrases of the ontology words). For example, the word "moderate" is interchangeable with "fair" when users describe the food price during a conversation, yet "fair" is not listed in the ontology. Since we do not have an ontology in the NLU task, we compare our results with human crowd-sourced word selection (MLT^H). Results show that MLT^A significantly outperforms MLT^H in intent detection, which further demonstrates the high quality of the words selected by the attention layer.

Due to the imperfect alignment of cross-lingual word embeddings, our BASE models with MUSE or RCSLS still suffer from low zero-shot adaptation performance. Although we replace these cross-lingual word embeddings with large pre-trained language models such as XLM and Multi. BERT, the performance is not consistently better, because the quality of the alignment degrades when we combine the subword-based embeddings into word-level representations. The performance of the XLM-based and Multi. BERT-based models improves remarkably once MLT is applied. Surprisingly, MLT-based models with RCSLS surpass XLM and Multi. BERT by a substantial margin on Thai. We find that Thai subword sequences are approximately twice as long as those of the other languages; hence, the quality of the subword-to-word alignment degrades severely.

Performance vs. Number of Word Pairs

Figures 4(a) and 4(b) compare the performance of intent and slot-filling prediction on the Spanish data with respect to the number of word pairs, and investigate the gap between human crowd-sourced word selection (MLT^H) and attention-based word selection (MLT^A). Interestingly, with only five word pairs, MLT^A achieves notable gains of 17.69% in intent prediction and 21.45% in slot filling compared to the BASE model. Compared with MLT^H, MLT^A beats human-based selection in intent prediction and is on par with it in slot filling.

Model Transferability

In Figures 4(c) and 4(d), we show the transferability of MLT on the target language data that does not contain any of the target keywords from the word pair list. Our model with MLT still achieves impressive gains in both intent and slot-filling performance on these data. The results emphasize that the MLT-based model does not merely memorize target word replacements; it captures the generic semantics of words and learns to generalize to other words with similar vector representations, for example the synonyms "configurar" and "establecer" (both mean "set" in English), or words from the same domain, such as "Domingo" (Sunday) and "Lunes" (Monday).

To further support our claims, we extract the attention scores from the attention layer and elaborate on the findings. Figure 5 shows that, in the training phase, our model attends to parallel task-related words in both the source and target languages, such as "Set" and "alarm" in English, and "Configurar" and "alarma" in Spanish. In the zero-shot test phase, the attention layer of the MLT-based models attends to identical or synonymous words because they have the same or similar vector representations, respectively, whereas without MLT the attention layer fails to do so. Interestingly, Figure 5 clearly shows that the word "Establecer" is treated as equally important as "Configurar", even though "Establecer" does not appear in the code-switching sentences.


We propose attention-informed mixed-language training (MLT), a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems using code-switching sentences. Our approach utilizes very few task-related parallel word pairs, selected via the attention layer, and generalizes better to words with similar semantics in the target language, as the visualization of the attention layer confirms. Experimental results show that MLT-based models outperform existing zero-shot adaptation approaches in dialogue state tracking and natural language understanding with many fewer resources.


  • [1] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov (2017) Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, pp. 135–146. Cited by: Zero-shot SLU.
  • [2] W. Chen, J. Chen, Y. Su, X. Wang, D. Yu, X. Yan, and W. Y. Wang (2018) XL-NBT: a cross-lingual neural belief tracking framework. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 414–424. Cited by: Introduction, Cross-lingual Transfer Learning.
  • [3] A. Conneau, G. Lample, M. Ranzato, L. Denoyer, and H. Jégou (2018) Word translation without parallel data. In International Conference on Learning Representations (ICLR), Cited by: Introduction, Training and Adaptation, Experimental Setup.
  • [4] M. Dehghani, S. Gouws, O. Vinyals, J. Uszkoreit, and L. Kaiser (2019) Universal transformers. In International Conference on Learning Representations, External Links: Link Cited by: Cross-lingual Language Model.
  • [5] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Cited by: Introduction, Experimental Setup.
  • [6] B. Felbo, A. Mislove, A. Søgaard, I. Rahwan, and S. Lehmann (2017) Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1615–1625. Cited by: Utterance Encoder.
  • [7] A. Joulin, P. Bojanowski, T. Mikolov, H. Jégou, and E. Grave (2018) Loss in translation: learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2979–2984. Cited by: Introduction, Training and Adaptation, Experimental Setup.
  • [8] J. Kim, Y. Kim, R. Sarikaya, and E. Fosler-Lussier (2017) Cross-lingual transfer learning for pos tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2832–2838. Cited by: Cross-lingual Transfer Learning.
  • [9] G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer (2016) Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Cited by: Slot Filling.
  • [10] G. Lample and A. Conneau (2019) Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. Cited by: Introduction, Experimental Setup.
  • [11] Z. Liu, J. Shin, Y. Xu, G. I. Winata, P. Xu, A. Madotto, and P. Fung (2019) Zero-shot cross-lingual dialogue systems with transferable latent variables. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1297–1303. Cited by: Cross-lingual Transfer Learning.
  • [12] N. Mrkšić, D. Ó. Séaghdha, T. Wen, B. Thomson, and S. Young (2017) Neural belief tracker: data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1777–1788. Cited by: Multilingual Task-oriented Dialogue Systems.
  • [13] N. Mrkšić, I. Vulić, D. Ó. Séaghdha, I. Leviant, R. Reichart, M. Gašić, A. Korhonen, and S. Young (2017) Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics 5 (1), pp. 309–324. Cited by: Multilingual Task-oriented Dialogue Systems.
  • [14] J. Ni, G. Dinu, and R. Florian (2017) Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1470–1480. Cited by: Cross-lingual Transfer Learning.
  • [15] X. Pan, B. Zhang, J. May, J. Nothman, K. Knight, and H. Ji (2017) Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1, pp. 1946–1958. Cited by: Cross-lingual Transfer Learning.
  • [16] S. Schuster, S. Gupta, R. Shah, and M. Lewis (2019) Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3795–3805. Cited by: Introduction, Multilingual Task-oriented Dialogue Systems, Table 2.
  • [17] S. Upadhyay, M. Faruqui, G. Tür, H. Dilek, and L. Heck (2018) (Almost) zero-shot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6034–6038. Cited by: Cross-lingual Transfer Learning, Table 2.
  • [18] C. Wu, A. Madotto, E. Hosseini-Asl, C. Xiong, R. Socher, and P. Fung (2019) Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743. Cited by: Introduction.
  • [19] K. Yu, H. Li, and B. Oguz (2018) Multilingual seq2seq training with similarity loss for cross-lingual document classification. In Proceedings of The Third Workshop on Representation Learning for NLP, pp. 175–179. Cited by: Multi. CoVe.
  • [20] Y. Zhang, D. Gaddy, R. Barzilay, and T. Jaakkola (2016) Ten pairs to tag–multilingual pos tagging via coarse mapping between embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1307–1317. Cited by: Cross-lingual Transfer Learning.
  • [21] V. Zhong, C. Xiong, and R. Socher (2018) Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1458–1467. Cited by: Introduction.