Simulated Chats for Task-oriented Dialog: Learning to Generate Conversations from Instructions

10/20/2020 ∙ by Biswesh Mohapatra, et al.

Popular task-oriented dialog data sets such as MultiWOZ (Budzianowski et al. 2018) are created by providing crowd-sourced workers a goal instruction, expressed in natural language, that describes the task to be accomplished. Crowd-sourced workers play the role of a user and an agent to generate dialogs to accomplish tasks involving booking restaurant tables, making train reservations, calling a taxi etc. However, creating large crowd-sourced datasets can be time consuming and expensive. To reduce the cost associated with generating such dialog datasets, recent work has explored methods to automatically create larger datasets from small samples. In this paper, we present a data creation strategy that uses the pre-trained language model, GPT2 (Radford et al. 2018), to simulate the interaction between crowd-sourced workers by creating a user bot and an agent bot. We train the simulators using a small percentage of actual crowd-generated conversations and their corresponding goal instructions. We demonstrate that by using the simulated data, we achieve significant improvements in both the low-resource setting as well as in overall task performance. To the best of our knowledge, we are the first to present a model for generating entire conversations by simulating the crowd-sourced data collection process.


Introduction

Building dialog systems typically requires a large collection of conversation logs that a model can use as training data. Crowd-sourcing is a popular method for generating such datasets and, depending on the aspect of dialog modeling being studied, crowd-sourced workers may be asked to annotate existing chat logs for intents and dialog acts Yu and Yu (2019); Rastogi et al. (2020), create dialog summaries Gliwa et al. (2019), converse with each other based on a script Zhang et al. (2018), or converse to accomplish tasks or goals El Asri et al. (2017); Budzianowski et al. (2018); Byrne et al. (2019). For instance, to create datasets for task-oriented dialogs, crowd-sourced workers may be provided with a goal (instruction) that describes the task that needs to be accomplished; workers then play the roles of a user and an agent to generate conversations Budzianowski et al. (2018). The user worker begins the conversation by stating their requirement and the agent worker provides information to the user by querying a knowledge base (KB), if required. Together, the two workers interact with each other via natural language to generate conversations that can involve booking restaurant tables, making train reservations, calling a taxi, etc. However, creating large crowd-sourced datasets can be time consuming and expensive.

GOAL : You are looking for a train. The train should arrive by 11:45 and should leave on Saturday. The train should go to Cambridge and should depart from Ely . Once you find the train you want to make a booking for 8 people. Make sure you get the reference number. GOLD CHAT: User: I need to find a train for Saturday. Agent: I have many trains that depart saturday. Where will you be departing from and where is your destination? User: I will be departing from Ely and traveling to Cambridge. Agent: Okay, and what time do you want to leave after or arrive by? User: I need to arrive by 11:45. Agent: There are 3 trains here. Do you want me to book any tickets? User: Yes please. I need it to be booked for 8 people. Agent: Booking was successful, the total fee is 28.16 GBP payable at the station .Reference number is : ZVERHBT3. User: Thank you, that is all I need. Agent: You’re welcome, thank you for calling the Cambridge TownInfo centre. Have a great day. SIMULATED CHAT: User: I am looking for a train to Cambridge. Agent: There are many trains available. Where will you be departing from? User: I will be departing from Ely. Agent: How about TR7994 that leaves at 05:35? User: That would work, can you book it for 8 people? Agent: I have booked 8 tickets on that train, the total fee is 35.2 gbp payable at the station. Reference number is ZXQSGIN7. User: thank you. that is all i need. Agent: You are quite welcome. I am glad I could help. Enjoy your trip !

Figure 1: An example of a goal from the MultiWOZ dialog corpus Budzianowski et al. (2018) followed by two dialogs. The first dialog was generated by a pair of crowd-sourced workers by interacting with each other. The second dialog was generated by the user and agent simulators which take the goal into account.

To reduce the cost associated with generating such dialog datasets, recent work has explored methods to automatically create larger datasets from small samples. Such methods include generating paraphrased utterances of existing conversations using sequence-to-sequence generative approaches Hou et al. (2018); Anaby-Tavor et al. (2020), and generating annotations including intent-slots and dialog acts Yoo et al. (2019, 2020b). While it is reasonably straightforward to generate paraphrases for utterances, generating complete conversations directly from goals is significantly more challenging. This is because, unlike generating a paraphrase for an utterance, generating complete conversations requires systems to model the interaction between utterances over multiple conversation turns. Further, these interactions may also require the use of information present in external knowledge bases.

In this paper, we address this challenging problem of generating complete conversations using a goal that describes the task to be accomplished, by simulating the crowd-sourced data creation process. Thus, instead of creating conversations by having crowd-sourced workers chat with each other, we create conversation data by using two bots that simulate a user and an agent crowd-sourced worker.

Similar to the crowd-sourced data collection setup, the user bot has access to the goal while the agent bot has access to a knowledge base (KB). The agent bot maps the dialog history to a belief state (query) that can be executed over the KB to retrieve a set of results if required. Thus, the two bots are trained to interact with each other to generate conversations conditioned on the goals and the KB. An example of a generated conversation is shown in Figure 1.

We train these bots using 5-30% of real crowd-sourced worker conversations and demonstrate how our simulated chats can be used as an effective data augmentation strategy. At the core of our model we utilize GPT-2 Radford et al. (2018) - a transformer-based language model trained on a large number of documents crawled from the web. To the best of our knowledge, we are the first to present a model for generating entire conversations by simulating the crowd-sourced data collection process. Our experiments show that by using a small number of existing conversations, we are able to train meaningful user and agent bots that in turn generate new conversations.

Contributions: (1) We present a novel dialog-generation framework that mimics the roles played by crowd-sourced workers to generate complete conversations. (2) We demonstrate that training the simulators with just 5-10% data can serve as an effective method to generate new conversations. We find that using simulation-augmented data to train end-task dialog models in low data settings leads to a performance improvement of 18-25%. (3) We include a qualitative study to demonstrate how changes to the goal result in completely new conversations reflective of the new task.

Related Work

Due to the costs associated with generating large dialog datasets, recent work has explored a variety of methods to artificially generate additional training data. For example, generating paraphrases is a widely used strategy for augmenting training data for dialog models. Paraphrases have been used to improve intent classification Anaby-Tavor et al. (2020), generate alternative conversation turns Gao et al. (2020), improve response ranking Du and Black (2018), etc. Methods to generate paraphrases can vary - these include the use of syntactic parse trees to generate alternatives Du and Black (2018), generative models based on variational autoencoders Malandrakis et al. (2019), and sequence-to-sequence models Gao et al. (2020). Some methods developed for data augmentation exploit dialog-task-specific features; for instance, in tasks where dialog-act labels are available, work that uses these labels to alter conversational flow to generate responses has also been explored Zhang et al. (2020). Further, methods that generate new data to improve dialog act classification Yoo et al. (2020a), or even inject noise to improve robustness in dialog act prediction for ASR data Wang et al. (2020), have also been developed.

Recently, methods such as GPT-2 and its variants Radford et al. (2018); Brown et al. (2020) have achieved extensive generalization in natural language understanding and generation across a plethora of tasks, including question answering, text summarization and machine translation. In contrast to existing methods that modify existing conversations to generate additional data Zhang et al. (2020); Gao et al. (2020), we propose a new augmentation framework that harnesses the strength of such large-scale language models to simulate the crowd-sourced data collection process and generate entirely new conversations.

Figure 2: A pictorial representation that depicts the generation of a conversation conditioned on the goals and knowledge base (KB). Although not displayed in this diagram, each module (in green) also receives the dialog history as input.

Constrained Dialog Generation

Overview

We assume that the dialog $D = \{u_1, a_1, u_2, a_2, \ldots, u_T, a_T\}$ comprises a sequence of utterances between a user and an agent, where $u_t$ is a user utterance and $a_t$ is an agent utterance. At any given turn $t$, the sequence of utterances prior to the turn, that is, $h_t = \{u_1, a_1, \ldots, u_{t-1}, a_{t-1}\}$, is referred to as the dialog context or dialog history. Apart from the dialog $D$, we have access to a set of goals $G$ and a knowledge base $K$. The aim is to learn a model that can generate the dialog $D$ conditioned on the goals $G$ and the knowledge base $K$. That is, we wish to model $P(D \mid G, K)$.

The dialog generation framework mimics the human-to-human data collection approach used in MultiWOZ Budzianowski et al. (2018). The dialog is generated in a sequence of turns alternating between the user bot and the agent bot. The user bot has access to the goals $G$ while the agent bot can query the knowledge base $K$. Thus, the joint distribution of the dialog decomposes as follows:

$P(D \mid G, K) = \prod_{t=1}^{T} P(u_t \mid h_t, G) \prod_{t=1}^{T} P(a_t \mid h_t, u_t, G, K)$   (1)

The dialog history for the first turn, $h_1$, is an empty set. The first product corresponds to the user bot, which conditions on the goals as well as the dialog history to output the user utterance. The second product models the distribution of the agent bot over the responses, conditioned on the dialog history, knowledge base and the goals. A pictorial representation of the interaction between the two bots is shown in Figure 2. We discuss the various modules in the two bots in further detail below. Note that all the modules in Figure 2 (shown in green) also receive the dialog history as input, which has not been shown in the figure for ease of presentation.

Figure 3: The input and output formats of the various networks of User and Agent Bot. The goal as well as the utterances are expressed as sequence of tokens. The query is expressed as the domain name (such as train, restaurant, attraction etc.,) followed by a sequence of attributes.

User Bot

The user bot generates utterances conditioned on the dialog history and the goals, that is, it models $P(u_t \mid h_t, G)$. For the sake of readability, we will drop the turn index $t$ from the distribution. As shown in Figure 2, this distribution is modeled in two steps. Firstly, the dialog history and the goals are fed to a response generator module which outputs a pool of candidate responses $R$. A response selector module then assigns a score $s(h, r)$ to each response $r$ in the pool. Based on these scores, we define the distribution as follows:

$P(u \mid h, G) = \frac{\exp(s(h, u))}{\sum_{r \in R} \exp(s(h, r))}$   (2)

The candidate response with the highest probability is selected as the next user utterance and sent to the agent bot.

Next, we discuss the various modules in the user bot and how they are trained. The input and output formats for the various networks of these modules are shown in Figure 3.

Response Generator

The aim of the response generator module is to output a pool of candidate user utterances for the given dialog history and the goals. To achieve this, an autoregressive distribution over the tokens of the utterance is defined. That is, if $u = (w_1, w_2, \ldots, w_m)$, we define a distribution as follows:

$P(u \mid h, G) = \prod_{i=1}^{m} P(w_i \mid w_{<i}, h, G)$   (3)

where $h$ is the dialog history and $w_{<i}$ refers to all the tokens in $u$ before $w_i$. We finetune the pretrained GPT-2 network to model the above distribution by maximum likelihood. Specifically, given the tokens in the goals and the dialog history, the GPT-2 network is trained to output the tokens of the user utterance.

While it is possible to sample an utterance from the GPT-2 network via greedy sampling or beam search, this poses several issues. Firstly, autoregressive distributions tend to assign high probability to short utterances. Secondly, commonly occurring utterances in the corpus tend to have higher probability than the informative responses that are less frequent. We observed this behavior with the user and agent bots when the greedy response was selected as the final response for each bot.

Hence, once the network has been trained, we sample multiple user responses from the network via nucleus sampling Holtzman et al. (2019) to obtain a pool of candidate responses $R$. This pool of candidates is fed to the response selector module as shown in Figure 2.
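For illustration, here is a minimal sketch (not the authors' code) of how such a candidate pool could be drawn with the HuggingFace Transformers library; the prompt tags, the checkpoint and the top-p value are assumptions made for the example.

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # a fine-tuned checkpoint in practice

    def sample_user_candidates(goal, history, num_candidates=5, top_p=0.9):
        """Return a pool of candidate user utterances via nucleus (top-p) sampling."""
        # Serialize the goal and dialog history; the tags below are hypothetical markers.
        prompt = "<goal> " + goal + " <history> " + " ".join(history) + " <user>"
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        outputs = model.generate(
            input_ids,
            do_sample=True,                 # nucleus sampling instead of greedy decoding
            top_p=top_p,
            max_new_tokens=60,
            num_return_sequences=num_candidates,
            pad_token_id=tokenizer.eos_token_id,
        )
        # Keep only the newly generated tokens (the candidate utterance).
        return [tokenizer.decode(o[input_ids.shape[1]:], skip_special_tokens=True)
                for o in outputs]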

Response Selector

The aim of the response selector module is to assign a score to each candidate response in the pool based on its relevance to the dialog history. We achieve this by feeding the tokens of the dialog history and the response (clubbed with special tokens such as [CLS] and [SEP] as shown in Figure 3) to a Longformer network architecture Beltagy et al. (2020). The network outputs a contextualized embedding for each token. We feed the embedding of the [CLS] token through a linear layer followed by a sigmoid unit. The output of the network corresponds to the score assigned to the response for the given dialog history.

The network is trained to assign high scores to the positive (or ground-truth) responses while assigning low score to the negatively sampled responses. For each gold context-response pair, we provide a total of 10 negative response samples. These samples contain 5 random responses, 2 responses which are already part of the context (in order to stop the response selector from picking such responses) and 3 responses formed by concatenating 2 random responses to discourage the response selector from picking longer candidate responses.
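Below is a small sketch of how such a negative pool could be assembled for one gold context-response pair, following the 5/2/3 recipe described above; the corpus handling and variable names are illustrative assumptions.

    import random

    def build_negatives(context_utterances, corpus_responses,
                        num_random=5, num_from_context=2, num_concat=3):
        negatives = []
        # 5 responses drawn at random from the rest of the corpus.
        negatives += random.sample(corpus_responses, num_random)
        # 2 utterances already present in the dialog history, so the selector
        # learns not to pick responses that repeat earlier turns.
        negatives += random.sample(context_utterances,
                                   min(num_from_context, len(context_utterances)))
        # 3 negatives formed by concatenating two random responses, to discourage
        # a bias toward overly long candidates.
        for _ in range(num_concat):
            first, second = random.sample(corpus_responses, 2)
            negatives.append(first + " " + second)
        return negatives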

The network is trained via the triplet loss Chechik et al. (2010); Hoffer and Ailon (2015). Specifically, given the dialog history $h$, the ground-truth response $r^{+}$ and a negatively sampled response $r^{-}$, the triplet loss is defined as follows:

$\mathcal{L}(h, r^{+}, r^{-}) = \max\big(0,\; s(h, r^{-}) - s(h, r^{+}) + \epsilon\big)$   (4)

where $s(h, r)$ is the score assigned by the network to the response $r$ for the given dialog history $h$, and $\epsilon$ is a fixed margin used in our experiments.
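The following sketch illustrates this scoring-and-triplet-loss setup with a Longformer encoder; the checkpoint name, maximum sequence length and margin value are assumptions rather than the paper's exact configuration.

    import torch
    import torch.nn as nn
    from transformers import LongformerModel, LongformerTokenizer

    tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")

    class ResponseSelector(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")
            self.head = nn.Linear(self.encoder.config.hidden_size, 1)

        def forward(self, history, response):
            # Encode the dialog history and the candidate response as one sequence pair.
            enc = tokenizer(history, response, return_tensors="pt",
                            truncation=True, max_length=1024)
            first_token = self.encoder(**enc).last_hidden_state[:, 0]  # [CLS]-style embedding
            return torch.sigmoid(self.head(first_token)).squeeze(-1)   # score in (0, 1)

    def triplet_loss(score_pos, score_neg, margin=0.1):
        # Push the gold response to outscore the negative by at least `margin`.
        return torch.clamp(score_neg - score_pos + margin, min=0.0).mean()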

Agent Bot

The agent bot models the distribution of the agent response conditioned on the dialog history $h$, the user utterance $u$ and the knowledge base $K$, that is, $P(a \mid h, u, K)$. This distribution is modeled in four steps as shown in Figure 2. Firstly, the agent bot feeds the dialog history and the last user utterance to the belief state generator module, which outputs a belief state $b$ of slot-value pairs (also referred to as a query). Next, the query is executed over the knowledge base and a set of entities $E$, whose attributes match the values in the query, is returned. The entities, the belief state, the dialog history and the user utterance are fed to the response generator, which outputs a pool of candidate responses $R'$. Finally, the responses in the pool are scored by the response selector. Based on these scores, we define the distribution of the agent response as follows:

$P(a \mid h, u, K) = \frac{\exp(s(h, u, a))}{\sum_{r \in R'} \exp(s(h, u, r))}$   (5)

where $s(h, u, a)$ is the score of the candidate response $a$. The candidate response with the highest probability is selected and sent to the user bot to generate the next turn. This interaction between the user and agent bots is repeated until the user bot outputs the end-of-dialogue token.

Next, we discuss in detail the modules in the agent bot and how these modules are trained. Note that these modules do not share weights with the corresponding modules of the user bot. The input and output formats for the various networks of these modules are shown in Figure 3.

Belief State (query) Generator

The aim of the belief state generator is to generate a belief state for the given dialog history and last user utterance. Here, the belief state is a sequence of pairs of the form attribute_name=attribute_value. To achieve this, we define a distribution over the belief states that can be executed over the knowledge base. The belief state generator treats the belief state as a sequence of tokens and trains a GPT-2 network to model the distribution of the belief state tokens given the tokens of the dialog history and the user utterance. Once the belief state generator has been trained, a belief state is sampled by greedy sampling and executed over the knowledge base.
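As an illustration, a belief state decoded in this flat format could be parsed and matched against the knowledge base roughly as follows; the layout of the decoded string and the parsing logic are assumptions made for the example.

    def parse_belief_state(decoded):
        """e.g. 'train destination=cambridge day=saturday' -> ('train', {...})."""
        tokens = decoded.split()
        domain, constraints = tokens[0], {}
        for pair in tokens[1:]:
            if "=" in pair:
                name, value = pair.split("=", 1)
                constraints[name] = value
        return domain, constraints

    def query_kb(kb, domain, constraints):
        """Return KB entities of the given domain whose attributes match every constraint."""
        return [entity for entity in kb.get(domain, [])
                if all(entity.get(name) == value for name, value in constraints.items())]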

Response Generator

This module mimics the response generator of the user bot with the exception that the input to the GPT-2 network comprises the dialog history, the last user utterance, the belief state and the KB results. The GPT-2 network is used to define an autoregressive distribution over the tokens of the agent response and is trained using maximum likelihood. Once the module is trained, a pool of candidate responses is sampled via nucleus sampling.

Response Selector

This module outputs the score of each agent response in the candidate pool. To achieve this, the dialog history, the last user utterance and the agent response are fed to the Longformer network architecture (clubbed with [CLS] and [SEP] tokens). The contextualized embedding of the [CLS] token is fed to a linear layer followed by a sigmoid unit. The training of this network as well as the selection of negative samples mimics the training of the response selector for the user bot. Once the model has been trained, it outputs a score for each agent response in the candidate pool.

The user and the agent bot continue to interact with each other until the end-of-dialogue token is output by the user bot. All the user and agent utterances created up to this point, together with the belief states and KB results, comprise the generated dialog.
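Putting the pieces together, the overall roll-out between the two bots can be sketched as below; all method names are placeholders for the modules described above, not the authors' API.

    def simulate_dialog(goal, kb, user_bot, agent_bot, max_turns=10):
        history = []
        for _ in range(max_turns):
            # User bot: generate a candidate pool, then keep the best-scored candidate.
            user_utt = user_bot.select(user_bot.generate(goal, history), history)
            history.append(("user", user_utt))
            if "<end_of_dialogue>" in user_utt:   # the user bot terminates the chat
                break
            # Agent bot: predict a belief state, query the KB, then respond.
            belief = agent_bot.generate_belief_state(history)
            kb_results = agent_bot.query(kb, belief)
            agent_utt = agent_bot.select(
                agent_bot.generate(history, belief, kb_results), history)
            history.append(("agent", agent_utt))
        return history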

Experiments

In this section, we experiment with our data generation framework. We study the following research questions: (1) Are the simulated chats generated by our user and agent bots useful? (2) Does the query generator in the agent bot generate meaningful queries? (3) Can the simulated conversations be used to augment the training data in low (5% of training data), medium (10% of training data) and full data (100% of training data) settings? (4) Can our simulators adapt to changes in input goals and reflect them in the generated dialog?

Dataset

We use the MultiWOZ 2.1 dataset Budzianowski et al. (2018) to study our simulators. MultiWOZ is a large-scale multi-domain dialogue dataset consisting of 10438 conversations distributed across seven domains: Attraction, Train, Police, Hotel, Hospital, Restaurant and Taxi. Each conversation is associated with a goal that was used by the crowd-sourced workers to generate the conversation. The dataset is divided into a training set (8430 conversations), a validation set (1000 conversations) and a test set (1000 conversations). 30% of the dataset consists of conversations with a single goal, i.e., they require accomplishing just one task. The rest are multi-goal dialogues, i.e., conversations that accomplish more than one task – for example, booking a train followed by making a restaurant reservation.

End-task dialog model

The dialogs in the training data are augmented with the generated dialogs and used for training an end-task dialog model. The end-task is to generate a response for a given dialog history on the MultiWOZ data set. We could use any existing model developed for the MultiWOZ task as our end-task model. In contrast to recent state-of-the-art models such as DAMD Zhang et al. (2020), SimpleTOD Hosseini-Asl et al. (2020) and PARG Gao et al. (2020), our simulators do not generate dialog acts, which are heavily used by these models. Thus, we choose to implement a simple end-task model based on GPT2 which takes in the current context, belief state (query) and KB results as input to generate final responses using greedy sampling. The agent model generates delexicalised responses using the format followed by MultiWOZ Budzianowski et al. (2018). For example, ‘archway house is located in south’ after delexicalisation becomes ‘[hotel_name] is located in [value_area]’. The end-task model uses the same architecture as the Agent bot but it does not use response selectors and instead directly generates responses using greedy sampling.
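As an illustration of the delexicalisation step, the following sketch replaces surface values with slot placeholders and later fills them back in from a KB entity; the replacement dictionary is assumed for the example.

    def delexicalise(response, slot_values):
        """slot_values maps each placeholder to the surface string it replaces."""
        for placeholder, value in slot_values.items():
            response = response.replace(value, placeholder)
        return response

    def lexicalise(template, entity):
        """Fill placeholders back in from a KB entity returned by the query."""
        for placeholder, value in entity.items():
            template = template.replace(placeholder, value)
        return template

    delexicalise("archway house is located in south",
                 {"[hotel_name]": "archway house", "[value_area]": "south"})
    # -> "[hotel_name] is located in [value_area]"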

Data Generation using Simulators

As mentioned previously, our simulator allows the generation of new conversations based on a goal. In our experiments, we operate our simulators using 5%, 10% and 30% of the original training data. In each setting, we generate an equal number of conversations using the single-goal data. In addition, to generate multi-goal conversations, we concatenate single-goal generated conversations from different domains. We generate twice as many multi-goal conversations as single-goal conversations to mimic the distribution of the full MultiWOZ dataset. Thus, we augment $x\%$ of the conversations of the original training data with 3 times as many generated conversations to obtain a total augmented size of $4x\%$.
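A rough sketch of how such an augmented training set could be assembled is shown below; dialogs are represented as lists of turns and all function names are assumptions.

    import random

    def build_augmented_set(real_dialogs, simulate_fn, goals_by_domain):
        n = len(real_dialogs)
        all_goals = sum(goals_by_domain.values(), [])
        # As many simulated single-goal dialogs as there are real ones.
        single_goal = [simulate_fn(random.choice(all_goals)) for _ in range(n)]
        # Twice as many multi-goal dialogs, each formed by concatenating two
        # simulated single-goal dialogs from different domains.
        multi_goal = []
        for _ in range(2 * n):
            d1, d2 = random.sample(list(goals_by_domain), 2)
            multi_goal.append(simulate_fn(random.choice(goals_by_domain[d1])) +
                              simulate_fn(random.choice(goals_by_domain[d2])))
        # Roughly four times the original size: x real + 3x simulated dialogs.
        return real_dialogs + single_goal + multi_goal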

Recall that each conversation requires KB queries by the agent. Our agent simulator generates queries as described earlier and thus, while training the end-task dialog models using the simulated data, we use these generated values as the oracle belief state. Similar to existing work on this dataset, we use delexicalised agent utterances using the format followed by MultiWOZ Budzianowski et al. (2018) which are later updated with KB values based on the results of the query.

Hyperparameter Settings

In order to generate reasonable conversations from small amounts of training data, we train separate models for each domain (restaurant, train, hotel, etc.) using single-goal dialogues from the training dataset. For each domain, we create separate user bots and agent bots along with their constituent modules consisting of query models (for tracking belief state), response generators and response selectors. We use GPT2-small (12 layers, 768 hidden size, 117M parameters) from the ‘Transformers’ library by Huggingface Wolf et al. (2019) for the response generators. For response selectors, we use Longformers (12 layers, 1024 hidden size, 149M parameters) Beltagy et al. (2020) for both user and agent models. We train on 5%, 10% and 30% of the training data with a learning rate of 1e-5. The Adam optimizer with default settings is used for all the models.

GOAL : You are looking for a particular hotel . Its name is called bridge guest house. Make sure you get hotel type and phone number . User: hi, i am looking for information on the bridge guest house. GREEDY RESPONSE: Agent Response: i have [value_count] guesthouses and [value_count] hotel -s that fit that criteria . do you have a preference for price range? NUCLEUS DECODING SAMPLES: Response 1: i have [value_count] guesthouses available, how many are in your area or price range? Score : 0.54 Response 2: i have [value_count] options, [hotel_name] and [hotel_name], both of which offer free wifi and parking. do you have any other preferences? Score : 0.31 Response 3: there are several guesthouses in the [value_pricerange] price range. do you have a preference? Score : 0.54 Response 4: i have [value_count] results. what area would you like to stay in? Score : 0.52 Response 5: i found the [hotel_name], which is located on the [value_area] side of town in the [value_pricerange] price range. would you like to book a room? Score : 0.89

Figure 4: An example of a goal and context from the MultiWOZ dialog corpus, along with responses generated using greedy and nucleus sampling methods. For nucleus sampling we show the top 5 responses and their respective scores assigned by our response selector. The response highlighted in blue was finally chosen by the model. The above responses are generated using generators trained on 10% of the dataset.

ORIGINAL GOAL : You are looking for a restaurant . The restaurant should be in the expensive price range and should serve italian food . Once you find the restaurant you want to book a table for 5 people at 11:30 on sunday . If the booking fails how about 10:30 . Make sure you get the reference number . User: hi, i’m trying to find an expensive italian restaurant. Agent: i have found several [value_pricerange] [value_food] restaurant -s . is there a specific area of town that you are looking for ? User: i would like to go to the south, please. Agent: there is an [value_pricerange] restaurant called [restaurant_name] in the [value_area] . would you like to make a reservation? User: yes, i’d like to book a table for 5 at 11:30 on sunday. Agent: i am sorry , but we did not get that reservation . would you like to try another day or time slot? User: can you try for 10:30 instead? Agent: booked at [restaurant_name] for [value_count] people . reference number is [restaurant_reference] . User: thank you so much. that’s all i needed. bye. Agent: thank you for contacting towninfo centers . enjoy your dining experience. PERTURBED GOAL : You are looking for a restaurant . The restaurant should be in the cheap price range and should serve indian food . Restaurant should be in north. Once you find the restaurant you want to book a table for 5 people at 11:30 on sunday. If the booking fails how about 10:30. Make sure you get the reference number. User: hi! can you help me find a cheap indian restaurant in the north? Agent: sure , [restaurant_name] is in the [value_area] part of town . it is located at [restaurant_address]. User: i would like to book a table for 5 people at 11:30 on sunday. Agent: i am sorry booking was unsuccessful , do you need to find another restaurant ? User: can you try for 10:30 ? Agent: i have booked you a table for [value_count] people at [value_time] on [value_day] . your reference number is [restaurant_reference] . User: thank you so much. that is all i needed today. bye. Agent: enjoy your meal !

Figure 5: Examples of generated samples using an original goal from MultiWOZ dataset against a perturbed goal with minor changes.

Metrics

We evaluate the usefulness of our generated data by using it to train a dialog model for the end-task. We therefore use BLEU and the Inform and Success rates as originally defined by Budzianowski et al. (2018), along with the combined score Mehri et al. (2019) given by $\text{Combined} = \text{BLEU} + 0.5 \times (\text{Inform} + \text{Success})$. While BLEU is used to evaluate the fluency of the generated response, Inform and Success measure the relevance of the agent utterances. Specifically, the Inform rate measures the correctness of the entity provided by the agent at a particular conversation turn, while the Success rate measures how often the agent was able to provide correct attributes when requested by the user.
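As a quick sanity check of this formula, the short snippet below recomputes one of the combined scores reported later in Table 1.

    def combined_score(bleu, inform, success):
        return bleu + 0.5 * (inform + success)

    combined_score(9.22, 73.2, 42.6)   # = 67.12, the 5% + simulated data row with oracle belief states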

Results

We compare the performance of the GPT2 based end-task dialog model by training it using 5%, 10%, 30% of the MultiWOZ training data as well as by additionally including data generated using our simulators.

Original Dataset Size (%) | Additional Simulated Data | Belief State | BLEU | Inform | Success | Combined
5  | No  | Oracle    | 7.12  | 63.2 | 34.4 | 55.92
5  | Yes | Oracle    | 9.22  | 73.2 | 42.6 | 67.12
10 | No  | Oracle    | 9.66  | 63.8 | 38.9 | 61.01
10 | Yes | Oracle    | 10.84 | 78.2 | 52.9 | 76.39
30 | No  | Oracle    | 10.45 | 68.6 | 41.2 | 65.35
30 | Yes | Oracle    | 12.45 | 77   | 52.3 | 77.1
5  | No  | Generated | 6.85  | 19.3 | 10.2 | 21.6
5  | Yes | Generated | 9.86  | 54.7 | 31.9 | 53.16
10 | No  | Generated | 9.49  | 52.3 | 29.9 | 50.59
10 | Yes | Generated | 10.73 | 61.2 | 40.6 | 61.63
30 | No  | Generated | 9.52  | 50.9 | 24.9 | 47.42
30 | Yes | Generated | 12.38 | 59.4 | 38.3 | 61.23
Table 1: Performance of the end-task model when trained on different dataset sizes, with and without the use of additional simulated data.

Data Augmentation in Low Data Settings

As can be seen in Table 1, the additional use of data generated by our simulators results in a significant improvement on the Combined metric. For instance, when using the oracle belief states in the end-task model, the use of our simulated data results in an 18-25% improvement. The improvements in performance suggest that the conversations generated by the simulators are meaningful. Further, recall that the end-task model is trained to generate queries (belief states).

The original training data includes the queries (belief states) created by crowd-sourced workers while in the case of the simulated data, these are created by the agent bot using the query generator module. Does the end-task model learn how to generate queries using this simulated data? As can be seen from the lower half of Table 1, when the end-task model itself generates queries, the performance gains continue to be significant even though it is trained on simulated data. This suggests our simulator is also able to generate meaningful belief states via the query generator. It is interesting to note that when using generated belief states, the use of simulated data in low data-settings (5%) results in a performance improvement of 146% (Combined Metric).

Model | Belief State | BLEU | Inform | Success | Combined
DAMD Zhang et al. (2020)              | Oracle    | 17.3  | 80.3 | 65.1 | 90
MogNet* Pei et al. (2020)             | Oracle    | 19.03 | 73.4 | 63.4 | 87.43
SimpleTOD* Hosseini-Asl et al. (2020) | Oracle    | 16.01 | 79.3 | 65.4 | 88.36
GPT2                                  | Oracle    | 15.95 | 72.8 | 63.7 | 84.2
GPT2 with Simulated Chats             | Oracle    | 15.06 | 80.4 | 62.2 | 86.36
DAMD Zhang et al. (2020)              | Generated | 18    | 72.4 | 57.7 | 83.05
SimpleTOD* Hosseini-Asl et al. (2020) | Generated | 14.99 | 83.4 | 67.1 | 90.24
GPT2                                  | Generated | 15.94 | 66.2 | 55.4 | 76.74
GPT2 with Simulated Chats             | Generated | 14.62 | 72.5 | 53.7 | 77.72
Table 2: Performance of models using the full dataset. Note that all models except for ‘GPT2’ and ‘GPT2 with Simulated Chats’ use dialog acts. ‘*’ indicates results reported by the authors in their papers.

Data Augmentation on Full Data

Since our simulated data helps improve the performance of the end-task dialog model in low data settings, we also study whether it can help improve performance when used to augment the full MultiWOZ training data.

We include additional simulated data from the 30% setting described previously along with the full MultiWOZ dataset to train our GPT2-based end-task model. As can be seen in Table 2, the additional use of simulated data on top of the full training data results in a 1-3% gain in performance. For comparison, we also include the performance of recent state-of-the-art methods on MultiWOZ 2.1. We find that the performance of our simple GPT2-based end-task model trained using our simulated conversations is comparable to recent state-of-the-art models such as SimpleTOD Hosseini-Asl et al. (2020) and MogNet Pei et al. (2020) when they use oracle belief states. However, when using generated belief states, we notice that the performance drop in our end-task model is larger compared to other models. We hypothesize that this may be because all other models also use dialog acts in their input, which are useful features for generating responses. Further, due to the dependence of these models on dialog acts, we were unable to demonstrate their performance using our simulated data for augmentation. We note, however, that in the future our simulators could also be extended to generate dialog acts, similar to our belief-state generators.

Qualitative Study - Response Selector

Figure 4 shows an incorrect response generated by greedy decoding. While the user was asking for information about a particular hotel named Bridge Guest House, the greedy response failed to provide the correct information. The response selector, however, is able to choose from a wider set of responses generated via nucleus sampling and return the correct response.

Qualitative Study - Goal Perturbation

We now present a qualitative study demonstrating how our simulator is able to accommodate changes to a goal and reflect them in a conversation. Figure 5 shows the dialogs generated from an original goal in MultiWOZ and from a goal created by perturbing the original one. The generated dialogs demonstrate the robustness of our generator model, which is able to produce new and meaningful conversations using new entities from the perturbed goal. Further, the two dialogues are very different from each other, which shows the wide variety of conversations the simulators are capable of producing when provided with similar goals.

Conclusion

In this paper, we demonstrated a dialog generation framework that mimics the data creation process employed by crowd-sourced workers. We find that our method is able to generate meaningful conversations that aid the training of end-task dialog models in both low-resource and full-data settings. The use of additional simulated data to train end-task dialog models results in a performance improvement of 18-25% in low-resource settings, and when combined with the full training data, the performance of a simple GPT2-based end-task model becomes comparable to current state-of-the-art models. The simulation framework does not make strict assumptions about the domain or dataset, and it would be interesting to explore its use in other dialogue tasks such as Persona-Chat Zhang et al. (2018) in future work.

References

  • A. Anaby-Tavor, B. Carmeli, E. Goldbraich, A. Kantor, G. Kour, S. Shlomov, N. Tepper, and N. Zwerdling (2020) Do not have enough data? Deep learning to the rescue! In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020), New York, NY, USA, pp. 7383–7390.
  • I. Beltagy, M. E. Peters, and A. Cohan (2020) Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150.
  • T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei (2020) Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
  • P. Budzianowski, T. Wen, B. Tseng, I. Casanueva, S. Ultes, O. Ramadan, and M. Gašić (2018) MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 5016–5026.
  • B. Byrne, K. Krishnamoorthi, C. Sankar, A. Neelakantan, B. Goodrich, D. Duckworth, S. Yavuz, A. Dubey, K. Kim, and A. Cedilnik (2019) Taskmaster-1: toward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
  • G. Chechik, V. Sharma, U. Shalit, and S. Bengio (2010) Large scale online learning of image similarity through ranking. Journal of Machine Learning Research.
  • W. Du and A. Black (2018) Data augmentation for neural online chats response selection. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, Brussels, Belgium, pp. 52–58.
  • L. El Asri, H. Schulz, S. Sharma, J. Zumer, J. Harris, E. Fine, R. Mehrotra, and K. Suleman (2017) Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbrücken, Germany, pp. 207–219.
  • S. Gao, Y. Zhang, Z. Ou, and Z. Yu (2020) Paraphrase augmented task-oriented dialog generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
  • B. Gliwa, I. Mochol, M. Biesek, and A. Wawer (2019) SAMSum corpus: a human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization.
  • E. Hoffer and N. Ailon (2015) Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pp. 84–92.
  • A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi (2019) The curious case of neural text degeneration. In International Conference on Learning Representations.
  • E. Hosseini-Asl, B. McCann, C. Wu, S. Yavuz, and R. Socher (2020) A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.
  • Y. Hou, Y. Liu, W. Che, and T. Liu (2018) Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, pp. 1234–1245.
  • N. Malandrakis, M. Shen, A. Goyal, S. Gao, A. Sethi, and A. Metallinou (2019) Controlled text generation for data augmentation in intelligent artificial agents. In Proceedings of the 3rd Workshop on Neural Generation and Translation.
  • S. Mehri, T. Srinivasan, and M. Eskénazi (2019) Structured fusion networks for dialog. CoRR abs/1907.10016.
  • J. Pei, P. Ren, C. Monz, and M. de Rijke (2020) Retrospective and prospective mixture-of-generators for task-oriented dialogue response generation. In 24th European Conference on Artificial Intelligence.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2018) Language models are unsupervised multitask learners.
  • A. Rastogi, X. Zang, S. Sunkara, R. Gupta, and P. Khaitan (2020) Towards scalable multi-domain conversational agents: the schema-guided dialogue dataset. Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), pp. 8689–8696.
  • L. Wang, M. Fazel-Zarandi, A. Tiwari, S. Matsoukas, and L. Polymenakos (2020) Data augmentation for training dialog models robust to speech recognition errors. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI.
  • T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew (2019) HuggingFace’s Transformers: state-of-the-art natural language processing. ArXiv abs/1910.03771.
  • K. M. Yoo, H. Lee, F. Dernoncourt, T. Bui, W. Chang, and S. Lee (2020a) Variational hierarchical dialog autoencoder for dialog state tracking data augmentation. arXiv preprint arXiv:2001.08604.
  • K. M. Yoo, H. Lee, F. Dernoncourt, T. Bui, W. Chang, and S. Lee (2020b) Variational hierarchical dialog autoencoder for dialog state tracking data augmentation. arXiv preprint arXiv:2001.08604.
  • K. M. Yoo, Y. Shin, and S. Lee (2019) Data augmentation for spoken language understanding via joint variational generation. Proceedings of the AAAI Conference on Artificial Intelligence 33, pp. 7402–7409.
  • D. Yu and Z. Yu (2019) MIDAS: a dialog act annotation scheme for open domain human machine spoken conversations. arXiv preprint arXiv:1908.10023.
  • S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, and J. Weston (2018) Personalizing dialogue agents: I have a dog, do you have pets too?. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, pp. 2204–2213.
  • Y. Zhang, Z. Ou, and Z. Yu (2020) Task-oriented dialog systems that consider multiple appropriate responses under the same context. Proceedings of the AAAI Conference on Artificial Intelligence 34 (05), pp. 9604–9611.