Over the past few years, the demand for task-oriented dialogue systems has increased rapidly across the world, following the promising performance of English systems [21, 18]. However, most dialogue systems cannot support numerous low-resource languages due to the scarcity of high-quality data, which creates a massive performance gap between low-resource language systems (e.g., Thai) and high-resource systems (e.g., English). A common, straightforward strategy to address this problem is to collect more data and train each monolingual dialogue system separately, but collecting new data for every single language is costly and resource-intensive.
Zero-shot adaptation is an effective approach to circumvent the data collection process when no training data is available, by transferring the learned knowledge from a high-resource source language to low-resource target languages. Currently, only a few studies have explored zero-shot learning in task-oriented dialogue systems [2, 16]. However, two problems remain in this line of research: (1) existing methods require a sufficiently large parallel corpus, which is not ideal for training models on rare languages where bilingual resources are minimal, and (2) the imperfect alignment of cross-lingual embeddings such as MUSE, as well as of the large cross-lingual models XLM and Multilingual BERT, limits cross-lingual zero-shot transferability.
To address these problems, we propose attention-informed mixed-language training (MLT), a new framework that leverages an extremely small number of bilingual word pairs to build zero-shot cross-lingual task-oriented dialogue systems. The word pairs are created by choosing words from the English training data using attention scores from a trained English model. We then pair these English words with target words using existing bilingual dictionaries, and use the target words to replace keywords in the training data to build code-switching sentences ("code-switching" is used interchangeably with "mixed-language"). The intuition behind training with code-switching sentences is to help the model identify selected important keywords, as well as their semantically similar counterparts in the target language. In addition, we incorporate MUSE, RCSLS, and the cross-lingual language models XLM and Multilingual BERT for generating cross-lingual embeddings.
During the training phase, our model learns to capture important keywords in code-switching sentences that mix source and target language words. We conjecture that learning with task-related keywords of the target language helps the model capture other task-related words that have similar semantics, for example, synonyms or words in the same category, such as the days of the week "Domingo" (Sunday) and "Lunes" (Monday). During the zero-shot testing phase, the inter-lingual understanding learned by the model alleviates the main issue of the imperfect alignment of cross-lingual embeddings. Experimental results on unseen languages show that MLT outperforms existing baselines by significant margins in both dialogue state tracking and natural language understanding on all languages while using many fewer resources. This proves that our approach is effective for low-resource languages when only limited parallel data is available. (The code is available at: https://github.com/zliucr/mixed-language-training)
Our contributions are summarized as follows:
We investigate the extremely low bilingual resources setting for zero-shot cross-lingual task-oriented dialogue systems.
Our approach achieves state-of-the-art zero-shot cross-lingual performance in both dialogue state tracking and natural language understanding of task-oriented dialogue systems using many fewer bilingual resources.
We study the performance of current cross-lingual pre-trained language models (namely Multilingual BERT and XLM) on zero-shot cross-lingual dialogue systems, and conduct quantitative analyses while adapting them to cross-lingual dialogue systems.
Task-oriented Dialogue Systems
Dialogue state tracking (DST) and natural language understanding (NLU) are the key components for understanding user inputs and building dialogue systems.
Dialogue State Tracking
Mrkšić et al. (2017) proposed to utilize pre-trained word vectors by composing them into distributed representations of user utterances and to resolve morphological ambiguity. Zhong et al. (2018) successfully improved rare slot value tracking through slot-specific local modules.
Natural Language Understanding
Liu and Lane (2016) leveraged an attention mechanism to learn where to attend in the input sequence for the joint intent detection and slot filling task. Goo et al. (2018) introduced slot-gated models to learn the relationship between intent and slot attention vectors and better capture the semantics of user utterances and queries.
Multilingual Task-oriented Dialogue Systems
A number of multilingual task-oriented dialogue datasets have been published lately [13, 16], enabling evaluation of approaches for cross-lingual dialogue systems. Mrkšić et al. (2017) annotated two languages (namely German and Italian) for the dialogue state tracking dataset WOZ 2.0 and trained a unified framework to cope with multiple languages. Meanwhile, Schuster et al. (2019) introduced a multilingual NLU dataset and highlighted the need for more sophisticated cross-lingual methods.
Cross-lingual Transfer Learning
Cross-lingual transfer learning, which aims to discover the underlying connections between source and target languages, has recently become a popular topic. Conneau et al. (2018) proposed to conduct cross-lingual word embedding mapping without any supervision signal and achieved promising results. Devlin et al. (2019) and Lample and Conneau (2019) leveraged large monolingual and bilingual corpora to align cross-lingual sentence-level representations and achieved state-of-the-art performance on many cross-lingual tasks. Recently, studies have applied cross-lingual transfer algorithms to natural language processing tasks such as named entity recognition (NER), entity linking, POS tagging [8, 20], and dialogue systems [2, 17, 11]. Nevertheless, to the best of our knowledge, only a few studies have focused on task-oriented dialogue systems, and none of them has investigated the extremely low bilingual resources scenario.
As shown in Figure 1, in the mixed-language training step, our model is trained on code-switching sentences generated from source language sentences by replacing selected source words with their translations. In the zero-shot test step, our model transfers directly to the unseen target language.
Intuitively, the attention layer in a trained model focuses on the keywords that are related to the task. As shown in Figure 1, we propose to utilize the scores computed by the attention layer of a model trained on source language (English) data to select keywords for completing the task. Concretely, we first collect source words by taking the top-1 attention score for each source utterance, since the source word with the highest attention score is the most important for the given task. However, some noisy (unimportant) words might still exist in the collection. Hence, we count the number of times each word is selected and filter out words that are seldom selected, and then we choose the top-k most frequent words in the training set as our final word candidates and pair them using an existing bilingual dictionary. We denote the selected word pairs as a key-value dictionary D = {w_src : w_tgt}, where w_src and w_tgt represent words in the source and target language, respectively.
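The selection procedure above can be sketched in a few lines of Python. This is a hedged sketch: `utterances`, `attention_scores`, and `bilingual_dict` are illustrative inputs, and the frequency threshold `min_count` is an assumed hyperparameter not specified in the text.

```python
from collections import Counter

def select_word_pairs(utterances, attention_scores, bilingual_dict, k=20, min_count=2):
    """Collect the top-attended source word of each utterance, drop rarely
    selected (likely noisy) words, and pair the top-k survivors with their
    translations from a bilingual dictionary."""
    counts = Counter()
    for tokens, scores in zip(utterances, attention_scores):
        # The token with the top-1 attention score is taken as the most
        # task-relevant word of this utterance.
        top_word = max(zip(tokens, scores), key=lambda p: p[1])[0]
        counts[top_word] += 1
    # Filter words that are seldom selected, then keep the k most frequent.
    frequent = [w for w, c in counts.most_common() if c >= min_count][:k]
    return {w: bilingual_dict[w] for w in frequent if w in bilingual_dict}
```

A word selected only once across the training set is treated as noise here; the exact threshold is a design choice.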
Training and Adaptation
Given a source language sentence X_src, we replace its words with their corresponding target words whenever they are present in the word-pair dictionary, generating a code-switching sentence X_cs. As illustrated in Figure 1, we use cross-lingual word embeddings for source and target language words.
X_cs = CS(X_src, D),
ŷ = AttnModel(E(X_cs)),

where CS represents the code-switching sentence generator in Figure 1, D denotes the selected word pairs, AttnModel represents the attention model, and E denotes the cross-lingual word embeddings. We specifically use cross-lingual word embeddings from MUSE and RCSLS, aligned representations of the source and target languages, to transfer the learned knowledge from the source language to the target language. By applying mixed-language training, our model copes with the problem of imperfect alignment of cross-lingual word embeddings. In the zero-shot test step, the attention layer is still able to focus on the same or semantically similar target language keywords, as it does in the mixed-language training step, which improves the robustness of cross-lingual transferability.
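Under these definitions, the code-switching generator reduces to a per-token dictionary lookup. A minimal sketch, where `word_pairs` stands for the selected source-to-target dictionary:

```python
def code_switch(sentence, word_pairs):
    """Replace source-language keywords with their target-language
    counterparts to build a mixed-language (code-switching) sentence."""
    return [word_pairs.get(tok, tok) for tok in sentence]
```

Because each replacement is one-to-one at the token level, any word-level annotations (e.g., slot labels) stay aligned with the code-switching sentence.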
Cross-lingual Dialogue Systems
In this section, we focus on applying our mixed-language training approach to cross-lingual task-oriented dialogue systems. We design model architectures for dialogue state tracking and natural language understanding (i.e., intent detection and slot filling) as follows.
Dialogue State Tracking
Our dialogue state tracking (DST) model, illustrated in Figure 2, is modified from Chen et al. (2018). We model DST as a classification problem based on three inputs: (i) the user utterance U_t, (ii) the candidate slot-value pair (s, v), and (iii) the system dialogue acts, namely the system request slot s_t^req and the system confirmation pair (s_t^conf, v_t^conf), where the subscript t denotes the dialogue turn. For example, when the system requests more information by asking "Do you have an area preference?", then s_t^req = "area", and when the system confirms by saying "The Vietnamese food is in the cheap price range," then s_t^conf = "price range" and v_t^conf = "cheap". In short, our model can be decomposed into the following three components:
We use a bi-directional LSTM (BiLSTM) to encode the user utterance and an attention mechanism on top of the BiLSTM to generate an utterance representation r_t, where x_i is the word vector of the i-th token and n is the length of the utterance. We formalize the utterance encoder as:

[h_1, ..., h_n] = BiLSTM([x_1, ..., x_n]),
α_i = softmax(W · h_i),
r_t = Σ_i α_i · h_i,

where W is a trainable weight vector in the attention layer, and α_i is the attention score of each token x_i.
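The attention pooling step of this encoder can be written in plain Python. A minimal sketch: `hidden_states` stands in for the BiLSTM outputs and `w` for the trainable attention vector; both names are illustrative.

```python
import math

def attention_pool(hidden_states, w):
    """Dot-product attention: score each hidden state against a trainable
    vector w, normalize the scores with softmax, and return the weighted
    sum as the utterance representation, along with the attention scores."""
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in hidden_states]
    m = max(scores)                              # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]               # attention score per token
    dim = len(hidden_states[0])
    r = [sum(a * h[d] for a, h in zip(alphas, hidden_states)) for d in range(dim)]
    return r, alphas
```

The returned `alphas` are the per-token scores that the keyword selection step reads off.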
Given the candidate slot-value pair (s, v) and the system acts as inputs, we compute the context gate g_t by summing three individual gates: (i) the candidate gate g_t^c, (ii) the request gate g_t^r, and (iii) the confirm gate g_t^conf. The context gate is defined as follows:

g_t = g_t^c + g_t^r + g_t^conf,

where each gate combines the embeddings of its inputs through σ-gated linear transformations and Hadamard products; E denotes the word embedding look-up table, ⊙ denotes a Hadamard product, the W matrices represent trainable parameters, and σ represents a sigmoid function.
Slot Value Prediction
Finally, we concatenate the utterance representation r_t and the context gate g_t, which are then passed into a linear layer and a softmax layer for prediction:

p_t = softmax(W_p [r_t; g_t] + b_p).
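A hedged sketch of this prediction head in plain Python: the weight matrix `W` and bias `b` are illustrative stand-ins for the trained linear-layer parameters.

```python
import math

def predict(r, g, W, b):
    """Concatenate utterance representation r and context gate g, apply a
    linear layer (W, b), and softmax for the final value distribution."""
    x = r + g  # list concatenation acts as vector concatenation [r; g]
    logits = [sum(wi * xi for wi, xi in zip(row, x)) + bi
              for row, bi in zip(W, b)]
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```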
Natural Language Understanding
Our NLU model, illustrated in Figure 2, is framed as a multi-task problem. We describe our model as follows:
We place an attention layer over the hidden states of the BiLSTM and predict the intent of the user utterance through a softmax projection layer. The attention layer is the same as the one used in the dialogue state tracking utterance encoder.
[Table 1: Dialogue state tracking results, reporting slot accuracy, joint goal accuracy, and request accuracy per model on German and Italian.]
Cross-lingual Language Model
We investigate the effectiveness of current powerful cross-lingual pre-trained language models, XLM and Multilingual BERT, and deploy MLT on top of them for the zero-shot cross-lingual DST and NLU tasks. Lample and Conneau (2019) proposed cross-lingual language model pre-training (XLM) with two objective functions: masked language modeling (MLM) and translation language modeling (TLM). MLM leverages a monolingual corpus, TLM utilizes a bilingual corpus, and MLM+TLM incorporates both. XLM models pre-trained on 15 languages are publicly available (https://github.com/facebookresearch/XLM). Multilingual BERT is trained on the monolingual corpora of 104 languages and is also publicly available (https://github.com/google-research/bert/blob/master/multilingual.md).
In order to handle multiple languages and reduce the vocabulary size, both models leverage subword units to tokenize each sentence. However, the outputs of the DST and NLU tasks depend on word-level information. Hence, we propose to learn the mapping between the subword level and the word level by adding a transformer encoder on top of the subword units and learning to encode them into word-level embeddings, as described in Figure 3. After that, we leverage the same model structures as illustrated in Figure 2 for the DST and NLU tasks.
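The subword-to-word mapping can be sketched as follows. This is a simplified stand-in: the paper learns the mapping with a transformer encoder, whereas here mean pooling over each word's subword vectors illustrates the grouping; `word_ids` (assigning each subword to its word index) is the kind of alignment that common subword tokenizers expose.

```python
def pool_subwords_to_words(subword_vecs, word_ids):
    """Group subword vectors by the word they belong to and mean-pool each
    group into a single word-level embedding."""
    groups = {}
    for vec, wid in zip(subword_vecs, word_ids):
        groups.setdefault(wid, []).append(vec)
    word_vecs = []
    for wid in sorted(groups):
        group = groups[wid]
        dim = len(group[0])
        # Average the subword vectors of this word, dimension by dimension.
        word_vecs.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return word_vecs
```

For Thai, where subword sequences are much longer per word (see the discussion in Results), this grouping is exactly the step whose quality degrades.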
Dialogue State Tracking
Wizard of Oz (WOZ), a restaurant-domain dataset, is used for training and evaluating dialogue state tracking models in English. It was enlarged into WOZ 2.0 by adding more dialogues, and recently, Mrkšić et al. (2017) expanded WOZ 2.0 into Multilingual WOZ 2.0 by including two more languages (German and Italian). Multilingual WOZ 2.0 contains 1200 dialogues for each language, where 600 dialogues are used for training, 200 for validation, and 400 for testing. The corpus contains three goal-tracking slot types (food, price range, and area) and a request slot type. The model has to track the value for each goal-tracking slot and request slot.
Natural Language Understanding
Recently, a multilingual task-oriented natural language understanding dialogue dataset was proposed by schuster2019cross schuster2019cross, which contains English, Spanish, and Thai across three domains (alarm, reminder, and weather). The corpus includes 12 intent types and 11 slot types, and the model has to detect the intent of the user utterance and conduct slot filling for each word of the utterance.
[Table 2: Natural language understanding results, reporting intent accuracy and slot F1 per model on Spanish and Thai.]
| Model | Intent acc. | Slot F1 | Intent acc. | Slot F1 |
| Multi. CoVe w/ auto | 53.89 | 19.25 | 70.70 | 35.62 |
We explore two training settings: (1) without mixed-language training (BASE), and (2) with mixed-language training (MLT). The former trains models using only English data, and then we directly transfer to the target language by leveraging the same cross-lingual word embeddings as our model. The latter utilizes code-switching sentences as the training data. We evaluate our model with the cross-lingual embeddings MUSE, RCSLS, XLM, and Multilingual BERT (Multi. BERT).
We describe our baselines for the dialogue state tracking task in the following:
Ontology-based Word Selection
We use dialogue ontology word pairs for mixed-language training since ontology words are all task-related and essential for the DST task.
Chen et al. (2018) proposed a teacher-student framework for cross-lingual neural belief tracking (i.e., dialogue state tracking) by leveraging a bilingual corpus or a bilingual dictionary. The model learns to generate close representations for semantically similar sentences across languages.
Chen et al. (2018) directly applied exact string matching between ontology words and the user utterance to discover the slot value for each slot type.
Chen et al. (2018) used an external bilingual corpus to train a machine translation system, which translates the English dialogue training data into the target languages (German and Italian) as "annotated" data to supervise the training of DST systems in the target languages.
We assume the existence of annotated dialogue state tracking data for the target languages. This indicates the upper bound of the DST model.
We describe our baselines for the natural language understanding task in the following:
Human-based Word Selection
Due to the absence of ontology in the NLU task, we crowd-source the top-20 task-related source words in the English training set.
Upadhyay et al. (2018) used cross-lingual word embeddings to conduct zero-shot transfer learning in the NLU task.
Schuster et al. (2019) used Multilingual CoVe to encode phrases with similar meanings into similar vector spaces across languages.
Multi. CoVe w/ auto.
Based on Multilingual CoVe, Schuster et al. (2019) added an autoencoder objective to produce more general representations for semantically similar sentences across languages.
Schuster et al. (2019) trained a machine translation system using a bilingual corpus, and then translated English NLU data into the target languages (Spanish and Thai) for supervised training.
Dialogue State Tracking
We use joint goal accuracy and slot accuracy to evaluate model performance on the goal-tracking slots. Joint goal accuracy compares the predicted dialogue states to the ground truth at each dialogue turn, and the prediction is correct if and only if the predicted values for all slots exactly match the ground truth values. Slot accuracy, in contrast, compares each slot-value pair to its ground truth individually. We use request accuracy to evaluate model performance on the request slot; similar to joint goal accuracy, the prediction is correct if and only if all the user requests for information are correctly identified.
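The distinction between the two goal-tracking metrics can be made concrete with a short sketch (slot names and the per-turn dict structure are illustrative):

```python
def dst_metrics(predictions, ground_truth):
    """Compute slot accuracy and joint goal accuracy over dialogue turns.
    Each element is a dict mapping slot -> value for one turn.
    Slot accuracy scores each slot independently; joint goal accuracy
    counts a turn as correct only if every slot matches the ground truth."""
    slot_correct = slot_total = joint_correct = 0
    for pred, gold in zip(predictions, ground_truth):
        slot_total += len(gold)
        slot_correct += sum(pred.get(s) == v for s, v in gold.items())
        joint_correct += all(pred.get(s) == v for s, v in gold.items())
    return slot_correct / slot_total, joint_correct / len(ground_truth)
```

A single wrong slot in a turn leaves slot accuracy mostly intact but zeroes out that turn's contribution to joint goal accuracy, which is why joint goal accuracy is the stricter headline number.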
Natural Language Understanding
We use accuracy and BIO-based F1 score to evaluate the performance of intent prediction and slot filling, respectively.
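A hedged sketch of span-level, BIO-based F1: a predicted slot span counts only on an exact boundary-and-label match with a gold span. (Conventions for handling stray I- tags vary between implementations; this sketch simply drops them.)

```python
def bio_spans(tags):
    """Extract (label, start, end) spans from a BIO tag sequence."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a final span
        if tag.startswith("B-") or tag == "O" or \
           (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                spans.append((label, start, i))
            start, label = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return spans

def slot_f1(pred_tags, gold_tags):
    """Span-level F1 between predicted and gold BIO tag sequences."""
    pred, gold = set(bio_spans(pred_tags)), set(bio_spans(gold_tags))
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0
```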
Results & Discussion
The DST and NLU results are shown in Tables 1 and 2. In most cases, our models using MLT significantly outperform the existing state-of-the-art zero-shot baselines, and we achieve a result comparable to Multi. CoVe w/ auto on Thai. Notably, our models achieve this performance using only a few word pairs and many fewer bilingual resources than sophisticated approaches such as Multi. CoVe or the bilingual-corpus-based translation baselines.
We observe that ontology matching is an intuitive method for attempting zero-shot transfer to low-resource languages. However, this method is ineffective because it cannot detect synonyms or paraphrases. Applying ontology word pairs in mixed-language training copes with this problem and outperforms the BASE models by a wide margin. Interestingly, attention-based word selection consistently outperforms ontology-based selection because the attention mechanism captures not only important ontology keywords but also keywords that are not listed in the ontology (i.e., synonyms or paraphrases of ontology words). For example, the word "moderate" is interchangeable with "fair" when users describe the food price during a conversation, yet "fair" is not listed in the ontology. Since the NLU task has no ontology, we compare our results with human crowd-sourcing-based word selection. Results show that attention-based selection significantly outperforms human word-pair selection in intent detection, which further demonstrates the high quality of the words selected by the attention layer.
Due to the imperfect alignment of cross-lingual word embeddings, our BASE models with MUSE or RCSLS still suffer from low performance in zero-shot adaptation. Although we replace these cross-lingual word embeddings with large pre-trained language models such as XLM and Multi. BERT, the performance is not consistently better. This is because the quality of alignment degrades when we combine subword-based embeddings into word-level representations. The performance of the XLM-based and Multi. BERT-based models improves remarkably by applying MLT. Surprisingly, MLT-based models with RCSLS surpass XLM and Multi. BERT by a substantial margin on Thai. We find that Thai subword sequences are approximately twice as long as those of other languages; hence, the quality of subword-to-word alignment degrades severely.
Performance vs. Number of Word Pairs
Figures 3(a) and 3(b) compare the performance of intent and slot-filling predictions on Spanish data with respect to the number of word pairs, and investigate the gap between human crowd-sourcing-based word selection and attention-based word selection. Interestingly, with only five word pairs, attention-based MLT achieves notable gains of 17.69% and 21.45% in intent prediction and slot filling, respectively, compared to the BASE model. In intent prediction, attention-based word selection beats human-based word selection, while in slot-filling prediction the two are on par.
In Figures 3(c) and 3(d), we show the transferability of MLT on the target language data that does not contain any target keywords from the selected word-pair list. Our model with MLT still achieves impressive gains in both intent and slot-filling performance on these data. The results emphasize that the MLT-based model does not merely memorize target word replacements; it captures the generic semantics of words and learns to generalize to other words with similar vector representations, for example, the synonyms "configurar" and "establecer" (both mean "set" in English) or words from the same domain, such as "Domingo" (Sunday) and "Lunes" (Monday).
To further support our claims, we extract the attention scores from the attention layer and elaborate on the findings. Figure 5 shows that, in the training phase, our model puts attention on parallel task-related words in both the source and target languages, such as "Set" and "alarm" in English, and "Configurar" and "alarma" in Spanish. In the zero-shot test phase, the attention layer in the MLT-based models puts attention on identical or synonymous words because they have the same or similar vector representations, respectively, whereas without MLT the attention layer fails to do so. Interestingly, Figure 5 clearly shows that the word "Establecer" is as important as "Configurar", although "Establecer" is not found in the code-switching sentences.
We propose attention-informed mixed-language training (MLT), a novel zero-shot adaptation method for cross-lingual task-oriented dialogue systems using code-switching sentences. Our approach utilizes very few task-related parallel word pairs, selected via the attention layer, and generalizes better to words with similar semantics in the target language, as the visualization of the attention layer confirms. Experimental results show that MLT-based models outperform existing zero-shot adaptation approaches in dialogue state tracking and natural language understanding with many fewer resources.
- Bojanowski, P., et al. (2017) Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, pp. 135–146.
- Chen, W., et al. (2018) XL-NBT: a cross-lingual neural belief tracking framework. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 414–424.
- Conneau, A., et al. (2018) Word translation without parallel data. In International Conference on Learning Representations (ICLR).
- Dehghani, M., et al. (2019) Universal transformers. In International Conference on Learning Representations.
- Devlin, J., et al. (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186.
- Felbo, B., et al. (2017) Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1615–1625.
- Joulin, A., et al. (2018) Loss in translation: learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2979–2984.
- Kim, J.-K., et al. (2017) Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2832–2838.
- Lample, G., et al. (2016) Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.
- Lample, G. and Conneau, A. (2019) Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
- Liu, Z., et al. (2019) Zero-shot cross-lingual dialogue systems with transferable latent variables. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 1297–1303.
- Mrkšić, N., et al. (2017) Neural belief tracker: data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1777–1788.
- Mrkšić, N., et al. (2017) Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics 5 (1), pp. 309–324.
- Ni, J., et al. (2017) Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1470–1480.
- Pan, X., et al. (2017) Cross-lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1946–1958.
- Schuster, S., et al. (2019) Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3795–3805.
- Upadhyay, S., et al. (2018) (Almost) zero-shot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6034–6038.
- Wu, C.-S., et al. (2019) Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint arXiv:1905.08743.
- Yu, K., et al. (2018) Multilingual seq2seq training with similarity loss for cross-lingual document classification. In Proceedings of The Third Workshop on Representation Learning for NLP, pp. 175–179.
- Zhang, Y., et al. (2016) Ten pairs to tag: multilingual POS tagging via coarse mapping between embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1307–1317.
- Zhong, V., et al. (2018) Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1458–1467.