Dialogue systems are developed to support human-to-human interactions in natural language [Jurafsky2009], and they are widely used in many applications, such as flight booking and hotel reservation. Task-oriented dialogue (ToD) systems commonly rely on modularized pipelines that use natural language understanding (NLU) to extract the meaning of the input, dialogue state tracking (DST) to track the dialogue state, and natural language generation (NLG) to generate a suitable output. The benefit of this approach is efficiency in training and in inference during deployment. Recently, [madotto2020learning] showed that end-to-end models can replace modularized systems while achieving decent performance. To implement end-to-end ToD systems, there are two main approaches: (1) feeding the knowledge base (KB) directly into the model as input [madotto2018mem2seq], and (2) developing a retrieval module that retrieves suitable knowledge from the KB according to the input [qin2019entity]. In another line of work, [madotto2020learning, wu2018end] utilized the KB by augmenting samples using delexicalized templates. With this method, the trained model can learn the KB directly from the training dataset, and by adding more data, the models learn to utilize the knowledge given in the context.¹ Previous work has focused on GPT-2 as the pre-trained language model. However, there is no study yet on other language models, such as BART [lewis2020bart] and T5 [raffel2020exploring]. Both are built with an encoder-decoder architecture, in contrast to GPT-2, which uses a decoder-only model, and encoder-decoder models have long been used to develop end-to-end dialogue systems [serban2016building, wen2017network]. The model accepts the dialogue history and query as input and generates a response based on the context.

¹The code and dataset are available at https://github.com/sen33/end-to-end-dialogue-system
In this paper, we propose a comparative study to investigate the strengths of language models for ToD systems. We also incorporate knowledge into the language models in two different ways: (1) applying Knowledge Embedded (KE) Dialogue [madotto2020learning] to leverage KB entities through delexicalized dialogue templates, and (2) adding the KB to the input as context. Our experiments show that some language models perform better than others for end-to-end ToD systems. We found that models initialized with pre-trained weights produce more fluent responses after incorporating embedded knowledge. Furthermore, we found that BART and T5 outperform GPT-2-based models in both BLEU and F1 scores and achieve state-of-the-art performance on the CamRest dataset [wen2017network].
In this section, we describe the task of an end-to-end task-oriented dialogue system and how we prepare the dataset.
II-A Notation and Task
We define a dialogue dataset $D$ where each dialogue $d$ consists of alternating user and system utterances (dialogue turns). For each dialogue sample, we define a query $q_t$ (the latest user utterance) and a dialogue history $H_t$. In the end-to-end dialogue system, we define our generative model as $f_\theta$. The model takes the concatenation of the dialogue history $H_t$ and the query $q_t$ as input and generates an output response $r_t$. The dialogue history $H_t$ is taken from the turns preceding the query $q_t$. We fine-tune $f_\theta$ on the dialogue samples with a conditional generation objective, which trains the model by conditioning on the context. We define the loss as the following:

$$\mathcal{L} = -\sum_{i} \log P(r_i \mid r_{<i}, H_t, q_t),$$

where $P(r_i \mid r_{<i}, H_t, q_t)$ is the conditional probability of generating token $r_i$ given the previous tokens $r_{<i}$ and the context $(H_t, q_t)$. At inference time, greedy search is used to generate the response.
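The greedy decoding used at inference time can be sketched as follows. This is a minimal illustration, not the paper's released code: `next_token_logits` stands in for any model scoring $P(r_i \mid r_{<i}, H_t, q_t)$, and the toy model below is purely hypothetical.

```python
def greedy_decode(next_token_logits, context, eos="<eos>", max_len=20):
    """Generate a response token by token, always taking the argmax
    of the model's next-token distribution (greedy search)."""
    response = []
    while len(response) < max_len:
        logits = next_token_logits(context, response)  # dict: token -> score
        token = max(logits, key=logits.get)            # greedy choice
        if token == eos:
            break
        response.append(token)
    return response

# Toy "model" that deterministically emits a scripted answer.
def toy_model(context, generated):
    script = ["golden_wok", "serves", "chinese", "food", ".", "<eos>"]
    step = min(len(generated), len(script) - 1)
    return {t: (1.0 if t == script[step] else 0.0) for t in script}

print(greedy_decode(toy_model, "USR what type of food does golden_wok serve ?"))
```

A real system would replace `toy_model` with the fine-tuned BART or T5 decoder; the loop structure is the same.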
II-B Generative Language Models
In this section, we describe the models used in this work.
As our baseline, we train a vanilla encoder-decoder Transformer with multi-head attention [vaswani2017attention] using the OpenNMT toolkit [klein-etal-2017-opennmt]. This toolkit has been widely used for training sequence-to-sequence models on NLP tasks [muis2020sequence].
II-B2 Fine-tuning using Pre-trained Models
Bidirectional and Auto-regressive Transformers (BART)
BART [lewis2020bart] is a language model trained with a denoising objective: the input is corrupted, for example with the token masking used in BERT [devlin2019bert], and the model learns to recover the original sequence.
Text-to-Text Transfer Transformer (T5)
T5 is an encoder-decoder language model [raffel2020exploring]. It is trained with a BERT-style [devlin2019bert] objective, applying masks to input tokens and predicting the masked content.
II-B3 Embedding Knowledge
To evaluate the effectiveness of adding KB information to end-to-end ToD systems, we incorporate the knowledge in two ways: (1) the KE Dialogue data augmentation method, and (2) adding the KB to the context as input, as shown in Table I.
II-C Dataset Preparation
The dataset is prepared following KE Dialogue [madotto2020learning]. Fig. 1 shows the overall flow of the system. A dialogue template is extracted from each dialogue by delexicalization (KE-DELEX) using the entities from the dialogue ontology. Then, the templates are filled with entities from the knowledge base to form knowledge-embedded dialogues by relexicalization (KE-RELEX). For every question and answer, one dialogue-history/target pair is generated. To create the representation of the dialogue history, each sentence is concatenated with a special separator token: the USR token is prepended to the user's sentences, and the SYS token is prepended to the system's sentences. These pairs of dialogue history and target become the input and target of the trained models. To add the KB directly to the input, the special token DTA is prepended to every entity available from the intermediate API. An example of a dialogue history with the KB as input is shown in Table I.
Input: USR i would like a moderately priced restaurant in the north part of town . SYS golden_wok is a moderately priced restaurant in the north side of town . USR what type of food does golden_wok serve ? DTA the_nirala 7_milton_road_chesterton north indian 52.215157,0.125015 01223_360966 moderate cb41uy DTA golden_wok 191_histon_road_chesterton north chinese 52.220757,0.111564 01223_350688 moderate cb43hl

Target: the golden_wok serves chinese food . would you like more information ?
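The flattening of dialogue history and KB rows into one model input can be sketched as follows. The helper name `build_model_input` is ours; only the USR/SYS/DTA token convention comes from the paper.

```python
def build_model_input(turns, kb_rows=None):
    """Flatten the dialogue history and query into a single string,
    prepending USR/SYS speaker tokens and DTA-prefixed KB rows."""
    parts = []
    for speaker, utterance in turns:
        tag = "USR" if speaker == "user" else "SYS"
        parts.append(f"{tag} {utterance}")
    for row in kb_rows or ():
        parts.append("DTA " + " ".join(row))
    return " ".join(parts)

example = build_model_input(
    [("user", "what type of food does golden_wok serve ?")],
    kb_rows=[["golden_wok", "north", "chinese", "moderate"]],
)
print(example)
```

The resulting string is what the encoder-decoder model consumes; the target side is simply the reference system response.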
We use the CamRest dataset [wen2017network], a dataset of human-to-human dialogues for restaurant recommendation in Cambridge. We use the already preprocessed version and the code provided by [madotto2020learning] to extract 161 dialogue templates and generate knowledge-embedded dialogues. The CamRest dataset provides 676 dialogues, split into 406, 135, and 135 dialogues for training, validation, and testing, respectively. The templates are generated from the training data and add 9,728 new dialogues to the training set.
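As a rough illustration of the KE-DELEX and KE-RELEX steps, the following sketch delexicalizes an utterance into a slot template and refills it from a KB row. The function names and the `[slot]` placeholder format are our own simplification, not the released code's.

```python
def ke_delex(utterance, ontology):
    """KE-DELEX: replace known entity values with slot placeholders,
    turning a dialogue utterance into a reusable template."""
    for slot, values in ontology.items():
        for value in values:
            utterance = utterance.replace(value, f"[{slot}]")
    return utterance

def ke_relex(template, kb_row):
    """KE-RELEX: fill slot placeholders with the values of one KB row,
    producing a new knowledge-embedded dialogue."""
    for slot, value in kb_row.items():
        template = template.replace(f"[{slot}]", value)
    return template

ontology = {"name": ["golden_wok"], "food": ["chinese"]}
template = ke_delex("golden_wok serves chinese food .", ontology)
print(template)
print(ke_relex(template, {"name": "the_nirala", "food": "indian"}))
```

Applying KE-RELEX with many different KB rows to the 161 extracted templates is what yields the 9,728 augmented training dialogues.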
III-B Model Configuration
For each pre-trained model (BART and T5), there are four hyper-parameter configurations, each a combination of batch size [8, 16] and learning rate [1e-5, 1e-4]. For BART, we use a model with 12 layers, 16 attention heads, a feed-forward dimension of 3,072, and 768-dimensional embeddings, totalling 139M parameters. For T5, we use a model with 24 layers, 12 attention heads, a feed-forward dimension of 3,072, and 768-dimensional embeddings, totalling 220M parameters. Every configuration is trained for 30 epochs with early stopping: the BLEU score on the validation dataset is evaluated every epoch and the best checkpoint is kept. For sequence-to-sequence with OpenNMT, there are two hyper-parameter configurations with two model sizes (small and large). All Seq2Seq configurations use a Transformer encoder and decoder. Table II shows the number of parameters of these Seq2Seq models. All experiments were conducted on Tesla V100 GPU machines. The Adam optimizer is used, and the learning rate is updated with a linear scheduler.
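The checkpoint selection described above amounts to the following sketch. `validate_bleu` stands in for an arbitrary validation BLEU evaluation; the names and toy score curve are ours.

```python
def select_best_checkpoint(validate_bleu, max_epochs=30):
    """Train for up to `max_epochs`; after each epoch, evaluate BLEU on
    the validation set and remember the best-scoring checkpoint."""
    best_epoch, best_bleu = 0, float("-inf")
    for epoch in range(1, max_epochs + 1):
        # ... one epoch of training would happen here ...
        bleu = validate_bleu(epoch)
        if bleu > best_bleu:
            best_epoch, best_bleu = epoch, bleu
    return best_epoch, best_bleu

# Hypothetical validation curve peaking at epoch 3.
scores = {1: 10.2, 2: 13.5, 3: 15.1, 4: 14.8, 5: 14.0}
print(select_best_checkpoint(lambda e: scores.get(e, 0.0), max_epochs=5))
```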
Parameter | Small model | Large model
Layer | 12 (6 enc, 6 dec) | 12 (6 enc, 6 dec)
IV Results and Analysis
In this section, we report the results, analyze our findings and ablation study, and conduct a human evaluation to measure the quality of our model’s responses.
Model | Batch size | Learning rate | BLEU | F1
The results of the pre-trained models are shown in Table III. For sequence-to-sequence, the smaller model achieves better BLEU and F1 scores. For the BART model, the best result was achieved with a batch size of 16 and a learning rate of 1e-5; however, the difference in BLEU and F1 from the best model in each metric is marginal. For the T5 model, the best configuration is a batch size of 16 with a learning rate of 1e-4. This model achieves the best score in both BLEU and F1 compared to the other T5 models.
We show the performance of our best BART and T5 models in Table IV. Both best models use a batch size of 16. While BART achieves better BLEU, T5 achieves a better F1 score. This is caused by how each model was pre-trained: BART is pre-trained by denoising sequences, so it achieves better BLEU, a metric that reflects how fluent the predictions are. Using BART and T5 for initialization outperforms the vanilla sequence-to-sequence model, which implies that pre-trained models have learned knowledge that is useful for building ToD systems.
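The entity F1 score contrasted with BLEU above can be computed as in the sketch below. The paper does not spell out its exact F1 definition, so this assumes the common formulation: precision and recall over the sets of KB entities mentioned in the predicted and gold responses.

```python
def entity_f1(pred_entities, gold_entities):
    """Entity F1: harmonic mean of precision and recall over the
    KB entities appearing in the predicted vs. reference response."""
    pred, gold = set(pred_entities), set(gold_entities)
    true_pos = len(pred & gold)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(pred)
    recall = true_pos / len(gold)
    return 2 * precision * recall / (precision + recall)

# One correct entity out of two predicted and two gold entities.
print(entity_f1({"golden_wok", "chinese"}, {"golden_wok", "indian"}))
```

Unlike BLEU, this metric is insensitive to fluency and directly penalizes hallucinated entities, which matches the ablation findings below.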
We show the comparison of our models with their best hyper-parameter settings in Table V. Seq2Seq models achieve worse performance than some baselines, especially GPT-2+KE. Both BART and T5 achieve higher BLEU and F1 scores than all baselines.
IV-B Ablation Study
To compare the effectiveness of applying KE and KB to the language model, an ablation study is conducted. The BART model is chosen as the base model for this experiment. Then, aside from BART+KE, we also train a model without any augmentation (BART) and a model with the KB concatenated to the input (BART+KB). For BART+KB, every entity from the intermediate API is concatenated to the dialogue history with the special token DTA. The results are shown in Table VI. BART achieves better BLEU than BART+KE by a slight margin but falls behind BART+KB and BART+KE in F1 score. This means that adding the KB directly to the input, or via KE, reduces hallucination, a condition where the generated sequence has good structure and meaning but the wrong entity.
IV-C Human Evaluation
A human evaluation is performed for BART+KB and BART+KE to further measure the humanness of our generation results. Likert-scale scores [allen2007likert] of 1, 3, or 5 are assigned to all test predictions by experts. Table VII shows the result of this evaluation: using KE Dialogues as training data makes the trained model more robust and more human-like.
Examples of inputs and outputs are shown in Table VIII. Each model can generate an answer that is understandable to a human. The models tend to directly suggest a restaurant's name instead of asking for more specific information.
Input: USR i am looking for a restaurant that is in the expensive price range and in the south part of town
Target: there are results matching your query . would you like mexican , italian , chinese , or indian ?
Seq2Seq: the_good_luck_chinese_food_takeaway serves expensive food in the south part of town .
BART: peking_restaurant serves expensive food in the south part of town.
T5: taj_tandoori serves expensive food in the south part of town.

Input: USR i am looking for a restaurant that is in the expensive price range and in the south part of town . SYS there are results matching your query . would you like mexican , italian , chinese , or indian ? USR let 's go with italian food .
Target: frankie_and_bennys is an expensive italian eatery in the south part of town . would you like any additional information about this restaurant ?
Seq2Seq: frankie_and_bennys is an expensive restaurant in the south part of town .
BART: frankie_and_bennys is an italian restaurant in the south part of town.
T5: frankie_and_bennys serves italian food in the south part of town. is there anything else i can help you with?
V Related Work
The first task-oriented dialogue system is ELIZA [weizenbaum1966eliza], a dialogue system that utilizes parsers and rule-based engines. Then, [young2013pomdp] explored developing dialogue systems using statistical methods based on POMDPs. Along with the development of machine learning, deep learning received a lot of attention from researchers working on modularized dialogue systems, and modules such as NLU [hakkani2016multi, chen2016end, goo2018slot, liu2020attention], DST [wu2019transferable, lin2020mintl], and NLG started to adopt deep learning approaches. The specificity of modularized dialogue systems led to the idea of bypassing the DST module, namely end-to-end dialogue systems. End-to-end dialogue systems can handle new domains by simply retraining the model, unlike modularized dialogue systems, which need the DST to be changed. To handle the KB in end-to-end dialogue systems, there are two main ideas: using the KB directly as input [madotto2018mem2seq] or using an intermediate API to retrieve the correct KB entries [qin2019entity]. [madotto2020learning] proposed another idea where KBs are embedded into dialogue templates to form KE Dialogue, achieving promising results.
VI Conclusion

This paper shows the effectiveness of fine-tuning pre-trained language models for end-to-end task-oriented dialogue systems and of incorporating knowledge bases as context. Using pre-trained language models for initialization is essential for improving the fluency of the generated results. Moreover, adding the KB to the context improves correctness by reducing hallucination. We found that BART and T5 models achieve state-of-the-art performance, with higher BLEU and F1 scores than GPT-2 models of very similar size.
This research is partially funded by the Center for Artificial Intelligence of Institut Teknologi Bandung.