A Comparative Study on Language Models for Task-Oriented Dialogue Systems

The recent development of language models has shown promising results, achieving state-of-the-art performance on various natural language tasks by fine-tuning pretrained models. In task-oriented dialogue (ToD) systems, language models can be used for end-to-end training without relying on dialogue state tracking to track the dialogue history, instead allowing the language models to generate responses according to the context given as input. This paper conducts a comparative study to show the effectiveness and strength of fine-tuning recent pretrained models, such as BART and T5, for end-to-end ToD systems. The experimental results show substantial performance improvements after language model fine-tuning. The models produce more fluent responses after adding knowledge to the context, which guides the model to avoid hallucination and generate accurate entities in the generated responses. Furthermore, we find that BART and T5 outperform GPT-based models in BLEU and F1 scores and achieve state-of-the-art performance on a ToD system.

I Introduction

Dialogue systems are developed to support human-to-human interactions in natural language [Jurafsky2009], and they are widely used in many applications, such as flight booking and hotel reservations. Task-oriented dialogue (ToD) systems commonly rely on a modularized pipeline that uses natural language understanding (NLU) to extract the meaning of the input, dialogue state tracking (DST) to track the dialogue state, and natural language generation (NLG) to generate a suitable output. The benefit of this design is efficiency in training and inference during deployment. Recently, [madotto2020learning] showed that end-to-end models can replace modularized systems while achieving decent performance. There are two main approaches to implementing end-to-end ToD systems: (1) feeding the knowledge base (KB) directly into the model as input [madotto2018mem2seq], and (2) developing a retrieval module that retrieves suitable knowledge from the KB according to the input [qin2019entity]. In another line of work, [madotto2020learning, wu2018end] utilized the KB by augmenting samples using delexicalized templates. With this method, the trained model can learn the KB directly from the training dataset, and by adding more data, the models learn to utilize the knowledge in the context given as input. The code and dataset are available at https://github.com/sen33/end-to-end-dialogue-system. Previous work has focused on GPT-2 as the pre-trained language model. However, there is no study yet on other language models, such as BART [lewis2020bart] and T5 [raffel2020exploring]. Both models are built with an encoder-decoder architecture, unlike GPT-2, which uses a decoder-only model. Encoder-decoder models have also been utilized to develop end-to-end dialogue systems [serban2016building, wen2017network]: the model accepts the dialogue history and query as input and generates responses based on this context.

In this paper, we present a comparative study to investigate the strength of language models for ToD systems. We also incorporate knowledge into the language models in two different ways: (1) applying Knowledge Embedded (KE) Dialogue [madotto2020learning] to leverage KB entities in delexicalized dialogue templates, and (2) adding the KB to the input as context. Our experiments show that some language models perform better than others for end-to-end ToD systems. We found that models initialized from pre-trained weights produce more fluent responses after incorporating embedded knowledge. Furthermore, we found that BART and T5 outperform GPT-2-based models in both BLEU and F1 scores and achieve state-of-the-art performance on the CamRest dataset [wen2017network].

II Methodology

In this section, we describe the task of an end-to-end task-oriented dialogue system and how we prepare the dataset.

II-A Notation and Task

We define a dialogue dataset $\mathcal{D} = \{D_1, \dots, D_N\}$, where each dialogue $D$ consists of alternating user and system utterances (dialogue turns) $U_1, S_1, \dots, U_T, S_T$. For each dialogue sample, we define a query $U_t$ and a dialogue history $H_t = (U_1, S_1, \dots, U_{t-1}, S_{t-1})$ taken from the turns preceding the query. In the end-to-end dialogue system, we define our generative model as $p_\theta$. The model takes the concatenation of the dialogue history and the query, $C_t = [H_t; U_t]$, as input and generates an output response $R_t = (r_1, \dots, r_n)$. We fine-tune $p_\theta$ on the dialogue samples with a conditional generation objective, which trains the model to generate the response conditioned on the context. We define the loss as follows:

$$\mathcal{L} = -\sum_{i=1}^{n} \log p_\theta(r_i \mid r_{<i}, C_t) \qquad (1)$$

where $p_\theta(r_i \mid r_{<i}, C_t)$ is the conditional probability of generating token $r_i$ given the previous tokens $r_{<i}$ and the context $C_t$. At inference time, greedy search is used to generate the response.
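A minimal sketch of this objective and of greedy decoding, using a Hugging Face BART checkpoint (the checkpoint name and the example turns are illustrative assumptions, not the authors' exact setup):

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

# Assumed checkpoint for illustration; the paper fine-tunes BART and T5.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Context C_t: the dialogue history H_t concatenated with the query U_t.
context = ("USR i would like a moderately priced restaurant in the north part of town . "
           "SYS golden_wok is a moderately priced restaurant in the north side of town . "
           "USR what type of food does golden_wok serve ?")
response = "the golden_wok serves chinese food . would you like more information ?"

inputs = tokenizer(context, return_tensors="pt", truncation=True)
labels = tokenizer(response, return_tensors="pt", truncation=True).input_ids

# The model returns the conditional generation (cross-entropy) loss of Eq. (1).
loss = model(**inputs, labels=labels).loss
loss.backward()  # a parameter update would follow during fine-tuning

# At inference time, greedy search is used to generate the response.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=64, num_beams=1, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```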

II-B Generative Language Models

In this section, we describe the models used in this work.

II-B1 Sequence-to-sequence

As our baseline model, we train vanilla encoder-decoder models using the Transformer with multi-head attention [vaswani2017attention], implemented with the OpenNMT toolkit [klein-etal-2017-opennmt]. This toolkit has been widely used for training sequence-to-sequence models on NLP tasks [muis2020sequence].

II-B2 Fine-tuning using Pre-trained Models

Bidirectional and Auto-regressive Transformers (BART)

BART [lewis2020bart] is a language model trained with the masked language modeling objective from BERT [devlin2019bert] and a denoising objective to recover the perturbed input.

Text-to-Text Transfer Transformer (T5)

T5 is an encoder-decoder language model [raffel2020exploring]. It is trained with a BERT-like [devlin2019bert] objective that applies masks to the input tokens.
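Both models are available as pre-trained checkpoints; a minimal loading sketch is given below, assuming the bart-base and t5-base checkpoints, which roughly match the 139M and 220M parameter counts reported in Section III-B:

```python
from transformers import BartForConditionalGeneration, T5ForConditionalGeneration

# Assumed checkpoints, chosen to roughly match the reported model sizes
# (about 139M parameters for BART and 220M for T5).
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
t5 = T5ForConditionalGeneration.from_pretrained("t5-base")

# Both are encoder-decoder models: the encoder reads the dialogue context,
# and the decoder generates the system response token by token.
for name, model in [("BART", bart), ("T5", t5)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```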

II-B3 Embed Knowledge

To evaluate the effectiveness of adding KB information to end-to-end ToD systems, we incorporate the knowledge in two ways: (1) the KE Dialogue data augmentation method, and (2) adding the KB to the context as input, as shown in Table I.

II-C Dataset Preparation

The dataset is prepared following KE Dialogue [madotto2020learning]. Fig. 1 shows the overall flow of the system. A dialogue template is extracted from each dialogue by delexicalization (KE-DELEX) using the entities from the dialogue ontology. Then, the templates are filled with entities from the knowledge bases to form knowledge embedded dialogues by relexicalization (KE-RELEX). For every question and answer, one dialogue history and one target are generated. To create the representation of the dialogue history, each sentence is concatenated with a special token separator: a USR token is prepended to each user sentence, and a SYS token is prepended to each system sentence. These pairs of dialogue history and target become the input and target of the trained models. To add the KB directly to the input, a special DTA token is prepended to every entity returned by the intermediate API. An example of a dialogue history with the KB as input is shown in Table I, and a sketch of this input construction follows Fig. 1.

Input USR i would like a moderately priced restaurant in the north part of town . SYS golden_wok is a moderately priced restaurant in the north side of town . USR what type of food does golden_wok serve ? DTA the_nirala 7_milton_road_chesterton north indian 52.215157,0.125015 01223_360966 moderate cb41uy DTA golden_wok 191_histon_road_chesterton north chinese 52.220757,0.111564 01223_350688 moderate cb43hl
Target the golden_wok serves chinese food . would you like more information ?
TABLE I: Example of dialogue history with KB.

Fig. 1: The training and evaluation steps of KE Dialogue [madotto2020learning].
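As a concrete illustration, the sketch below assembles a Table I-style model input from the dialogue history, the current query, and the API results; the helper name and the data layout are assumptions for illustration rather than the authors' code:

```python
def build_input(history, query, kb_rows=None):
    """Concatenate dialogue history, query, and (optionally) KB entity rows.

    history: list of (speaker, sentence) pairs, speaker in {"USR", "SYS"}.
    query:   the current user utterance.
    kb_rows: entity rows returned by the intermediate API; each row is
             rendered as a whitespace-joined string of its fields.
    """
    parts = [f"{speaker} {sentence}" for speaker, sentence in history]
    parts.append(f"USR {query}")
    if kb_rows:
        # A DTA token marks every entity row taken from the KB.
        parts.extend(f"DTA {' '.join(str(v) for v in row)}" for row in kb_rows)
    return " ".join(parts)

history = [
    ("USR", "i would like a moderately priced restaurant in the north part of town ."),
    ("SYS", "golden_wok is a moderately priced restaurant in the north side of town ."),
]
kb_rows = [
    ["the_nirala", "7_milton_road_chesterton", "north", "indian", "moderate", "cb41uy"],
    ["golden_wok", "191_histon_road_chesterton", "north", "chinese", "moderate", "cb43hl"],
]
print(build_input(history, "what type of food does golden_wok serve ?", kb_rows))
```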

III Experiment

III-A Dataset

We use the CamRest dataset [wen2017network], a human-to-human dialogue dataset for restaurant recommendation in Cambridge. We use the already preprocessed dataset and the code provided by [madotto2020learning] to extract 161 dialogue templates and generate Knowledge Embedded Dialogues. The CamRest dataset provides 676 dialogues, split into 406, 135, and 135 dialogues for training, validation, and test, respectively. The templates are generated from the training data and are used to augment the training set with 9,728 new dialogues.

III-B Model Configuration

For each pre-trained model (BART and T5), there are four hyper-parameter configurations. Each configuration is a combination of batch size [8, 16] and learning rate [1e-5, 1e-4]. For BART, we use a model with 12 layers, 16 attention heads, a feed-forward dimension of 3,072, and 768-dimensional embeddings (139M parameters). For T5, we use a model with 24 layers, 12 attention heads, a feed-forward dimension of 3,072, and 768-dimensional embeddings (220M parameters). Every configuration is trained for 30 epochs with early stopping: after each epoch, the BLEU score on the validation set is evaluated and the best checkpoint is kept. For the sequence-to-sequence baseline with OpenNMT, there are two hyper-parameter configurations with two model sizes (small and large). All Seq2Seq configurations use a Transformer encoder and decoder. Table II shows the parameters of these Seq2Seq models. All experiments were conducted on Tesla V100 GPU machines. The Adam optimizer is used, and the learning rate is updated with a linear scheduler.

Parameter Small model Large model
Step 100k 50k
Batch size 8 16
Learning rate 6.25e-5 6.25e-5
Layer 12 (6 enc, 6 dec) 12 (6 enc, 6 dec)
Attention head 8 8
Feedforward 1024 3072
Embedding 512 768
TABLE II: Parameters for the small and large Seq2Seq models.
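One point of the BART/T5 sweep might be expressed with the Hugging Face Seq2SeqTrainer roughly as follows; the checkpoint name, the dataset and metric placeholders, and the early-stopping patience are assumptions for illustration, not the authors' configuration:

```python
from transformers import (
    AutoTokenizer,
    BartForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    EarlyStoppingCallback,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # assumed checkpoint
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

train_dataset = ...   # placeholder: tokenized CamRest (context, response) training pairs
valid_dataset = ...   # placeholder: tokenized CamRest validation split
compute_bleu = ...    # placeholder: metric fn returning {"bleu": ...} over generations

# One sweep point: batch size in {8, 16}, learning rate in {1e-5, 1e-4},
# 30 epochs, validation BLEU checked every epoch, best checkpoint kept.
args = Seq2SeqTrainingArguments(
    output_dir="bart-camrest",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    predict_with_generate=True,       # generate during evaluation to score BLEU
    load_best_model_at_end=True,
    metric_for_best_model="bleu",     # assumes compute_metrics returns {"bleu": ...}
    greater_is_better=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_bleu,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # assumed patience
)
trainer.train()
```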

IV Results and Analysis

In this section, we report the results, analyze our findings, present an ablation study, and conduct a human evaluation to measure the quality of our models' responses.

IV-A Results

Model Batch size Learning rate BLEU F1
BART 8 1e-5 19.740 46.036
BART 16 1e-5 19.050 55.922
BART 8 1e-4 18.240 56.202
BART 16 1e-4 17.930 51.423
T5 8 1e-5 18.140 53.301
T5 16 1e-5 16.490 49.927
T5 8 1e-4 18.330 56.187
T5 16 1e-4 18.730 56.311
TABLE III: Results for the pre-trained models with different hyper-parameters.

The results of the pre-trained models are shown in Table III. For the sequence-to-sequence baseline, the smaller model achieves better BLEU and F1 scores. For BART, the best model uses a batch size of 16 with a learning rate of 1e-5; however, its differences in BLEU and F1 from the best model on each metric are marginal. For T5, the best model uses a batch size of 16 with a learning rate of 1e-4, and it achieves the best BLEU and F1 scores among the T5 models.

Model Parameter BLEU F1
Seq2Seq 33M 17.870 49.304
Seq2Seq 101M 16.220 45.438
BART 139M 19.050 55.922
T5 220M 18.730 56.311
TABLE IV: Results for the best model configurations.

We show the performance of our best BART and T5 models in Table IV. Both best models use a batch size of 16. While BART achieves a better BLEU score, T5 achieves a better F1 score. This can be attributed to how the models were pre-trained: BART is pre-trained by denoising sequences, which helps it achieve better BLEU, a metric that reflects how fluent the predictions are. Initializing from BART or T5 outperforms the vanilla sequence-to-sequence model, implying that the pre-trained models have learned knowledge that is useful for building ToD systems.
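For context, the two metrics can be approximated as below; the sacrebleu call is standard, while the entity F1 shown here is a common micro-averaged formulation assumed for illustration and may differ in detail from the paper's evaluation script:

```python
import sacrebleu

def entity_f1(predictions, references, kb_entities):
    """Micro-averaged F1 over KB entities mentioned in the responses."""
    tp = fp = fn = 0
    for pred, ref in zip(predictions, references):
        pred_ents = {e for e in kb_entities if e in pred}
        ref_ents = {e for e in kb_entities if e in ref}
        tp += len(pred_ents & ref_ents)
        fp += len(pred_ents - ref_ents)
        fn += len(ref_ents - pred_ents)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

predictions = ["the golden_wok serves chinese food ."]
references = ["the golden_wok serves chinese food . would you like more information ?"]
kb_entities = ["golden_wok", "the_nirala", "chinese", "indian"]

bleu = sacrebleu.corpus_bleu(predictions, [references]).score
print(f"BLEU = {bleu:.2f}, entity F1 = {entity_f1(predictions, references, kb_entities):.3f}")
```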

Model BLEU F1
KB-Transformer [haihong2019kb] 14.80 45.30
MLMN [reddy2019multi] 13.61 54.85
BoSsNet [raghu2019disentangling] 15.20 43.10
KB-Retriever [qin2019entity] 18.64 55.76
GPT-2 [madotto2020learning] 13.58 34.69
GPT-2+KB [madotto2020learning] 13.59 50.45
GPT-2+KE [madotto2020learning] 18.00 54.85
Seq2Seq+KE 17.870 49.304
BART+KE 19.050 55.922
T5+KE 18.730 56.311
TABLE V: Comparison of results with existing works.

We compare our models with the best hyper-parameter settings against existing works in Table V. The Seq2Seq models perform worse than some baselines, especially GPT-2+KE. Both BART and T5 achieve higher BLEU and F1 scores than all baselines.

Model BLEU F1
BART 19.100 41.580
BART+KB 20.240 56.704
BART+KE 19.050 55.922
TABLE VI: Ablation study without KB, with KB, and with KE.

IV-B Ablation Study

To compare the effectiveness of applying KE and KB to the language model, an ablation study is conducted. The BART model is chosen as the base model in this experiment. Aside from BART+KE, we also train a model without any augmentation (BART) and a model that uses the KB as input (BART+KB). For BART+KB, every entity from the intermediate API is concatenated to the dialogue history with the special DTA token. The results are shown in Table VI. BART achieves better BLEU than BART+KE by a slight margin but falls behind BART+KB and BART+KE in F1 score. This indicates that adding the KB directly to the input, or through KE, reduces hallucination, a condition where the generated sequence has good structure and meaning but contains the wrong entities.

IV-C Human Evaluation

Human evaluation is conducted for BART+KB and BART+KE to further measure the humanness of our generation results. Experts rate all test predictions on a Likert scale [allen2007likert] with scores of 1, 3, and 5. Table VII shows that using KE Dialogues as training data makes the trained model more robust and more human-like.

Model Likert Score
BART+KB 3.76
BART+KE 4.14
TABLE VII: Human evaluation for BART+KB and BART+KE.

Examples of inputs and outputs are shown in Table VIII. Each model generates an answer that can be understood by a human. The models tend to directly suggest a restaurant's name instead of asking for more specific information.

Input USR i am looking for a restaurant that is in the expensive price range and in the south part of town
Target there are results matching your query . would you like mexican , italian , chinese , or indian ?
Seq2Seq the_good_luck_chinese_food_takeaway serves expensive food in the south part of town .
BART peking_restaurant serves expensive food in the south part of town.
T5 taj_tandoori serves expensive food in the south part of town.
Input USR i am looking for a restaurant that is in the expensive price range and in the south part of town . SYS there are results matching your query . would you like mexican , italian , chinese , or indian ? USR let ’s go with italian food .
Target frankie_and_bennys is an expensive italian eatery in the south part of town . would you like any additional information about this restaurant ?
Seq2Seq frankie_and_bennys is an expensive restaurant in the south part of town .
BART frankie_and_bennys is an italian restaurant in the south part of town.
T5 frankie_and_bennys serves italian food in the south part of town. is there anything else i can help you with?


TABLE VIII: Examples of dialogue inputs and outputs for different models.

V Related Work

The first task-oriented dialogue system was ELIZA [weizenbaum1966eliza], a dialogue system that utilized parsers and rule-based engines. Later, [young2013pomdp] explored developing dialogue systems with statistical methods based on POMDPs. With the development of machine learning, deep learning received a lot of attention from researchers for building modularized dialogue systems, and deep learning approaches were adopted for NLU [hakkani2016multi, chen2016end, goo2018slot, liu2020attention], DST [wu2019transferable, lin2020mintl], and NLG. The specificity of modularized dialogue systems leads to the idea of bypassing the DST module, i.e., end-to-end dialogue systems. End-to-end dialogue systems can handle new domains by retraining the model, unlike modularized dialogue systems that require changes to the DST. To handle the KB in end-to-end dialogue systems, there are two main ideas: using the KB directly as input [madotto2018mem2seq] or using an intermediate API to retrieve the correct KB entries [qin2019entity]. [madotto2020learning] proposed another approach in which KBs are embedded into dialogue templates to form KE Dialogues, achieving promising results.

VI Conclusion

This paper shows the effectiveness of fine-tuning pre-trained language models for end-to-end task-oriented dialogue systems and of incorporating knowledge bases as context. Using pre-trained language models for initialization is essential to improve the fluency of the generated responses. Moreover, adding the KB to the context improves correctness by reducing hallucination. We found that BART and T5 achieve state-of-the-art performance with higher BLEU and F1 scores than GPT-2 models of very similar size.

Acknowledgment

This research is partially funded by the Center for Artificial Intelligence of Institut Teknologi Bandung.

References