Batch Policy Gradient Methods for Improving Neural Conversation Models

02/10/2017 ∙ by Kirthevasan Kandasamy, et al. ∙ Microsoft ∙ Carnegie Mellon University

We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.




1 Introduction

Chatbots are one of the classical applications of artificial intelligence and are now ubiquitous in technology, business and everyday life. Many corporate entities are increasingly using chatbots to either replace or assist humans in customer service contexts. For example, Microsoft is actively building a chatbot to optimise and streamline its technical support service.

In these scenarios, there is usually an abundance of historical data, since past conversations between customers and human customer service agents are usually recorded by organisations. An apparently straightforward solution would be to train chatbots to reproduce the responses of human agents using standard techniques such as maximum likelihood. While this seems natural, it is far from desirable for several reasons. It has been observed that such procedures have a tendency to produce very generic responses (Sordoni et al., 2015). For instance, when we trained chatbots via maximum likelihood on a restaurant recommendations dataset, they repeatedly output responses to the effect of “How large is your group?”, “What is your budget?”, etc. Further, they also produce responses such as “Let me look that up.” or “Give me a second.” which, although permissible for a human agent to say, are not appropriate for a chatbot. Although there are ways to increase the diversity of responses (Li et al., 2015), our focus is on encouraging the bot to meaningfully advance the conversation. One way to address this problem is to provide some form of weak supervision for responses generated by a chatbot. For example, a human labeller, such as a quality assurance agent, could score each response generated by a chatbot in a conversation with a customer. This brings us to the reinforcement learning (RL) paradigm, where these rewards (scores) are to be used to train a good chatbot. In this paper we will use the terms score, label, and reward interchangeably. Labelled data will mean conversations which have been assigned a reward of some form, as explained above.

Nonetheless, there are some important differences in the above scenario when compared to the more popular approaches for RL.


  • Noisy and expensive rewards: Obtaining labels for each conversation can be time consuming and economically expensive. As a result, there is a limited amount of labelled data available. Moreover, labels produced by humans are invariably noisy due to human error and subjectivity.

  • Off-line evaluations: Unlike conventional RL settings, such as games, where we try to find the optimal policy while interacting with the system, the rewards here are not immediately available. Previous conversations are collected, labelled by human experts, and then given to an algorithm which has to manage with the data it has.

  • Unlabelled Data: While labelled data is limited, a large amount of unlabelled data is available.

If labelled data is in short supply, reinforcement learning could be hopeless. However, if unlabelled data can be used to train a decent initial bot, say via maximum likelihood, we can use policy iteration techniques to refine this bot by making local improvements using the labelled data (Bellman, 1956). Besides chatbots, this framework also finds applications in tasks such as question answering (Hermann et al., 2015; Sachan et al., 2016; Ferrucci et al., 2010), generating image descriptions (Karpathy & Fei-Fei, 2015) and machine translation (Bahdanau et al., 2014) where a human labeller can provide weak supervision in the form of a score to a sentence generated by a bot.

To contextualise the work in this paper, we make two important distinctions in policy iteration methods in reinforcement learning. The first is on-policy vs off-policy. In on-policy settings, the goal is to improve the current policy on the fly while exploring the space. On-policy methods are used in applications where it is necessary to be competitive (achieve high rewards) while simultaneously exploring the environment. In off-policy, the environment is explored using a behaviour policy, but the goal is to improve a different target policy. The second distinction is on-line vs batch (off-line). In on-line settings one can interact with the environment. In batch methods, which is the setting for this work, one is given past exploration data from possibly several behaviour policies and the goal is to improve a target policy using this data. On-line methods can be either on-policy or off-policy whereas batch methods are necessarily off-policy.

In this paper, we study reinforcement learning in batch settings for improving chatbots with Seq2Seq recurrent neural network (RNN) architectures. One of the challenges when compared to on-line learning is that we do not have interactive control over the environment; we can only hope to do as well as our data permits. On the other hand, the batch setting affords us some luxuries. We can reuse existing data and use standard techniques for hyper-parameter tuning based on cross validation. Further, in on-line policy updates, we have to be able to “guess” how an episode will play out, i.e. the actions the behaviour/target policies would take in the future and the corresponding rewards. In batch learning, however, the future actions and rewards are directly available in the data. This enables us to make more informed choices when updating our policy.

Related Work

Recently there has been a surge of interest in deep learning approaches to reinforcement learning, many of them adopting Q-learning, e.g. (Mnih et al., 2013; He et al., 2015; Narasimhan et al., 2015). In Q-learning, the goal is to estimate the optimal action value function Q⋆. Then, when an agent is at a given state s, it chooses the best greedy action argmax_a Q⋆(s, a). While Q-learning has been successful in several applications, it is challenging in the settings we consider since estimating Q⋆ over large action and state spaces will require a vast number of samples. In this context, policy iteration methods are more promising since we can start with an initial policy and make incremental local improvements using the data we have. This is especially true given that we can use maximum likelihood techniques to estimate a good initial bot using unlabelled data.
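The greedy decision rule above can be illustrated with a toy tabular sketch; the states, actions, and values below are invented for illustration and are not from the paper:

```python
# Toy action-value table; in a chatbot setting the "actions" would be words
# in the vocabulary. All entries here are made-up placeholder values.
Q_star = {("s", "ask_budget"): 0.9, ("s", "ask_group_size"): 0.4}

def greedy_action(state, actions):
    """Pick the action with the highest estimated action value."""
    return max(actions, key=lambda a: Q_star[(state, a)])
```

Estimating such a table reliably over the huge state space of conversations is exactly what makes Q-learning sample-hungry in this setting.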

Policy gradient methods, which fall within the paradigm of policy iteration, make changes to the parameters of a policy along the gradient of a desired objective (Sutton et al., 1999). Recently, the natural language processing (NLP) literature has turned its attention to policy gradient methods for improving language models. Ranzato et al. (2015) present a method based on the classical REINFORCE algorithm (Williams, 1992) for improving machine translation after preliminary training with maximum likelihood objectives. Bahdanau et al. (2016) present an actor-critic method, also for machine translation. In both cases, the authors use as the reward the BLEU (bilingual evaluation understudy) score between the output and the translation in the training dataset. This setting, where the rewards are deterministic and cheaply computable, does not reflect difficulties inherent to training chatbots, where labels are noisy and expensive. Li et al. (2016) develop a policy gradient method for chatbots. However, they use user-defined rewards (based on some simple rules) which, once again, are cheaply obtained and deterministic. Perhaps the closest to our work is that of Williams & Zweig (2016), who use a REINFORCE-based method for chatbots. We discuss the differences between this and other methods in greater detail in Section 3. The crucial difference between all of the above efforts and ours is that they use on-policy and/or on-line updates in their methods.

The remainder of this manuscript is organised as follows. In Section 2 we review Seq2Seq models and Markov decision processes (MDPs) and describe our framework for batch reinforcement learning. Section 3 presents our method and compares it with prior work in the RL and NLP literature. Section 4 presents experiments on a synthetic task and a customer service dataset for restaurant recommendations.

2 Preliminaries

2.1 A Review of Seq2Seq Models

The goal of a Seq2Seq model in natural language processing is to produce an output sequence y = (y_1, …, y_T) given an input sequence x (Cho et al., 2014; Sutskever et al., 2014; Kalchbrenner & Blunsom, 2013). Here each y_t ∈ V, where V is a vocabulary of words. For example, in machine translation from French to English, x is the input sequence in French, and y is its translation in English. In customer service chatbots, x is the conversation history until the customer’s last query and y is the response by an agent/chatbot. In a Seq2Seq model, we use an encoder network to represent the input sequence as a Euclidean vector and then a decoder network to convert this vector to an output sequence. Typically, both the encoder and decoder networks are recurrent neural networks (RNNs) (Mikolov et al., 2010), where the recurrent unit processes each word in the input/output sequences one at a time. In this work, we will use the LSTM (long short-term memory) (Hochreiter & Schmidhuber, 1997) as our recurrent unit due to its empirical success in several applications.

In its most basic form, the decoder RNN can be interpreted as assigning a probability distribution over V given the current “state”. At time t, the state s_t is the input sequence and the words produced by the decoder thus far, i.e. s_t = (x, (y_1, …, y_{t−1})). We sample the next word y_t from this probability distribution π(·|s_t), then update our state to s_{t+1} = (x, (y_1, …, y_t)), and proceed in a similar fashion. The vocabulary contains an end-of-statement token EOS. If we sample EOS at time T, we terminate the sequence and output (y_1, …, y_{T−1}).
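The sampling procedure just described can be sketched as follows. The uniform `next_word_probs` placeholder stands in for the decoder RNN's softmax output, and all names are our own:

```python
import random

def next_word_probs(state, vocab):
    # Placeholder for the decoder softmax conditioned on (x, y_1..y_{t-1});
    # a uniform distribution keeps the sketch self-contained.
    return {w: 1.0 / len(vocab) for w in vocab}

def decode(x, vocab, eos="EOS", max_len=20, seed=0):
    """Sample words one at a time until the end-of-statement token."""
    rng = random.Random(seed)
    y = []
    for _ in range(max_len):
        state = (x, tuple(y))
        probs = next_word_probs(state, vocab)
        words = list(probs)
        w = rng.choices(words, weights=[probs[v] for v in words], k=1)[0]
        if w == eos:
            break          # terminate on the end-of-statement token
        y.append(w)        # deterministic transition: append the sampled word
    return y
```

In a real Seq2Seq model, `next_word_probs` would run one step of the decoder LSTM and return its softmax output.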

2.2 A Review of Markov Decision Processes (MDP)

We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action a in a state s and transitions to a state s′. An episode refers to a sequence of transitions until the agent reaches a terminal state. At a terminal state, the agent receives a reward. Formally, an MDP is the triplet (S, A, R). Here, S is a set of states and A is a set of actions. When we take an action a at state s we transition to a new state s′ = s′(s, a) which, in this work, will be deterministic. A will be a finite but large discrete set and S will be discrete but potentially infinite. R : S → ℝ is the expected reward function, such that when we receive a reward r at state s, E[r] = R(s). Let T ⊂ S be a set of terminal states. When we transition to any s ∈ T, the episode ends. In this work, we will assume that the rewards are received only at a terminal state, i.e. R is nonzero only on T.

A policy π is a rule to select an action at a given state. We will be focusing on stochastic policies, where π(a|s) denotes the probability an agent will execute action a at state s. We define the value function V^π : S → ℝ of policy π, where V^π(s) is the expected reward at the end of the episode when we follow policy π from state s. For any terminal state s ∈ T, V^π(s) = R(s) regardless of π. We will also find it useful to define the action-value function Q^π : S × A → ℝ, where Q^π(s, a) is the expected reward of taking action a at state s and then following policy π. With deterministic state transitions this is simply Q^π(s, a) = V^π(s′(s, a)). It can be verified that V^π(s) = E_{a ∼ π(·|s)}[Q^π(s, a)] (Sutton & Barto, 1998).
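The identity relating V^π and Q^π can be checked on a toy deterministic MDP; the states, actions, and numbers below are invented for illustration:

```python
# One non-terminal state "s0"; both actions lead directly to terminal states.
pi = {"s0": {"left": 0.3, "right": 0.7}}            # stochastic policy
next_state = {("s0", "left"): "t_a", ("s0", "right"): "t_b"}
V = {"t_a": 1.0, "t_b": 0.0}                        # V at terminal states = reward

def Q(s, a):
    # With deterministic transitions, Q(s, a) is V of the next state.
    return V[next_state[(s, a)]]

def value(s):
    # V(s) = E_{a ~ pi(.|s)} [ Q(s, a) ]
    return sum(p * Q(s, a) for a, p in pi[s].items())
```

Here value("s0") evaluates to 0.3 · 1.0 + 0.7 · 0.0 = 0.3, matching the expectation identity.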

2.3 Set Up

We now frame our learning from labels scenario for RNN chatbots as an MDP. The treatment has similarities to some recent RL work in the NLP literature discussed above.

Let x be the input and y_{1:t−1} = (y_1, …, y_{t−1}) be the words output by the decoder until time t. The state of our MDP at time t of the current episode will be s_t = (x, y_{1:t−1}). Therefore, the set of states S will be all possible pairs of inputs and partial output sequences. The actions A will be the vocabulary V. The terminal states T will be those (x, y_{1:t}) such that the last literal of y_{1:t} is EOS. The stochastic policy π will be a Seq2Seq RNN which produces a distribution over A given state s_t. When we wish to make the dependence of the policy on the RNN parameters θ explicit, we will write π(·|·; θ). When we sample an action a_t ∼ π(·|s_t), we deterministically transition to state (x, y_{1:t}). If we sample EOS at time T, the episode terminates and we observe a stochastic reward.

We are given a dataset of input-output-reward triples {(x_i, y_i, r_i)}_{i=1}^n, where y_i is the sequence of output words. This data was collected from possibly multiple behaviour policies which output y_i for the given input x_i. In the above customer service example, the behaviour policies could be chatbots, or even humans, which were used for conversations with a customer. The rewards r_i are scores assigned by a human quality assurance agent to each response of the chatbot. Our goal is to use this data to improve a given target policy π. We will use q to denote the distribution of the data: q(s) is the distribution of the states in the dataset, q(a|s) is the conditional distribution of an action given a state, and q(s, a) = q(s) q(a|s) is the joint distribution over states and actions. q will be determined by the initial distribution of the inputs and the behaviour policies used to collect the training data. Our aim is to find a policy that does well with respect to q. Specifically, we wish to maximise the following objective,

J(θ) = E_{s ∼ q}[ V^{π_θ}(s) ].

Here, the value function V^{π_θ} is not available to us but has to be estimated from the data. This is similar to objectives used in the on-line off-policy policy gradient literature, where q is replaced by the limiting distribution of the behaviour policy (Degris et al., 2012). In the derivation of our algorithm, we will need to know q(a|s) to compute the gradient of our objective. In on-line off-policy reinforcement learning settings this is given by the behaviour policy, which is readily available. If the behaviour policy is available to us, then we can use it here too. Otherwise, a simple alternative is to “learn” a behaviour policy. For example, in our experiments we used an RNN trained using the unlabelled data to obtain values for q(a|s). As long as this learned policy can capture the semantics of natural language (for example, assigning the word apple higher probability than car in a context about food), it can be expected to do reasonably well. In the following section, we will derive a stochastic gradient descent (SGD) procedure that will approximately maximise this objective.


Before we proceed, we note that it is customary in the RL literature to assume stochastic transitions between states and to use rewards at all time steps instead of only the terminal step. Further, the future rewards are usually discounted by a discount factor γ ∈ (0, 1). While we use the above formalism to simplify the exposition, the ideas presented here extend naturally to more conventional settings.

3 Batch Policy Gradient

Our derivation follows the blueprint in Degris et al. (2012), who derive an off-policy on-line actor-critic algorithm. Following standard policy gradient methods, we will aim to update the policy by taking steps along the gradient of the objective, ∇_θ J(θ).

The gradient of the objective expands as

∇J(θ) = Σ_s q(s) Σ_a [ ∇π(a|s; θ) Q^π(s, a) + π(a|s; θ) ∇Q^π(s, a) ].

The latter term inside the above summation is difficult to work with, so the first step is to ignore it and work with the approximate gradient g(θ) = Σ_s q(s) Σ_a ∇π(a|s; θ) Q^π(s, a). Degris et al. (2012) provide theoretical justification for this approximation in off-policy settings by establishing that steps along g still improve the objective for all small enough step sizes. Expanding g(θ), we obtain:

g(θ) = E_{(s,a) ∼ q}[ ρ(s, a) ψ(s, a) Q^π(s, a) ] = E_{(s,a) ∼ q}[ ρ(s, a) ψ(s, a) ( Q^π(s, a) − V^π(s) ) ].

Here ψ(s, a) = ∇_θ log π(a|s; θ) is the score function of the policy and ρ(s, a) = π(a|s; θ)/q(a|s) is the importance sampling coefficient. In the last step, we have used the fact that E_{(s,a) ∼ q}[ρ(s, a) ψ(s, a) f(s)] = 0 for any function f of the current state (Szepesvári, 2010). The purpose of introducing the value function V^π is to reduce the variance of the SGD updates – we want to assess how good/bad action a is relative to how well π will do at state s in expectation. If a is a good action (Q^π(s, a) is large relative to V^π(s)), the coefficient of the score function is positive, and the update will change θ so as to assign a higher probability to action a at state s.

The functions V^π and Q^π are not available to us, so we will replace them with estimates. For V^π we will use an estimate V̂ – we will discuss choices for this shortly. However, the action value function is usually not estimated in RL policy gradient settings, to avoid the high sample complexity. A sensible stochastic approximation for Q^π(s_t, a_t) is to use the sum of future rewards from the current state (Sutton & Barto, 1998)¹. If we receive reward r at the end of the episode, we can then use r as our estimate for all time steps in the episode. However, since q is different from π, we will need to re-weight future rewards via importance sampling, r ∏_{j>t} ρ(s_j, a_j). This is to account for the fact that an action a_j given s_j may have been more likely under the policy π than it was under q, or vice versa. Instead of directly using the re-weighted rewards, we will use the so-called λ-return r_t^λ, which is a convex combination of the re-weighted rewards and the value function (Sutton, 1984, 1988). In our setting, the λ-returns are defined recursively from the end of the episode to the beginning as follows. For t = T, r_T^λ = r, and for t < T,

r_t^λ = (1 − λ) V^π(s_{t+1}) + λ ρ(s_{t+1}, a_{t+1}) r_{t+1}^λ.

¹ Note Q^π(s_t, a_t) = V^π(s_{t+1}) for deterministic transitions. However, it is important not to interpret the Q^π(s_t, a_t) − V^π(s_t) term above as the difference in the value function between successive states. Conditioned on the current time step, V^π(s_t) is deterministic, while Q^π(s_t, a_t) is stochastic. In particular, while a crude estimate suffices for the former, the latter is critical and should reflect the rewards received during the remainder of the episode.

The purpose of introducing λ is to reduce the variance of using the re-weighted future rewards alone as an estimate for Q^π. This is primarily useful when rewards are noisy. If the rewards are deterministic, λ = 1, which ignores the value function, is the best choice. In noisy settings, it is recommended to use λ < 1 (see Sec 3.1 of (Szepesvári, 2010)). In our algorithm, we will replace r_t^λ with r̂_t^λ, where V^π is replaced with the estimate V̂. Putting it all together, and letting α denote the step size, we have the following update rule for the parameters θ of our policy:

θ ← θ + α ρ(s_t, a_t) ψ(s_t, a_t) ( r̂_t^λ − V̂(s_t) ).

In Algorithm 1, we have summarised the procedure where the updates are performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches.
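The backward recursion for the λ-return can be sketched as follows. The indexing convention (terminal reward at the last step, importance weights rho[t] = π/q) is our own reading of the text, not code from the paper:

```python
def lambda_returns(r, rho, v_hat, lam):
    """Compute lambda-returns backwards through one episode.

    r     : scalar reward observed at the terminal step
    rho   : importance weights pi(a_t|s_t)/q(a_t|s_t), t = 0..T-1
    v_hat : value estimates V(s_t), t = 0..T-1
    lam   : return coefficient in [0, 1]
    """
    T = len(rho)
    ret = [0.0] * T
    ret[T - 1] = r  # at the last step the reward needs no future re-weighting
    for t in reversed(range(T - 1)):
        # convex combination of the next state's value estimate and the
        # importance-re-weighted future return
        ret[t] = (1 - lam) * v_hat[t + 1] + lam * rho[t + 1] * ret[t + 1]
    return ret
```

With lam = 1 this reduces to the importance-re-weighted terminal reward alone; with lam = 0 it falls back entirely on the value estimate of the next state.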

An Estimator for the Value Function: All that is left to do is to specify an estimator V̂ for the value function. We first need to acknowledge that this is a difficult problem: S is quite large, and for typical applications of this work there might not be enough data since labels are expensive. That said, the purpose of V̂ is to reduce the variance of our SGD updates and speed up convergence, so it is not critical that it be precise – even with a bad estimator our method will converge eventually. Secondly, standard methods for estimating the value function based on minimising the projected Bellman error require second derivatives, which might be intractable for highly nonlinear parametrisations of V̂ (Maei, 2011). For these two statistical and computational reasons, we resort to simple estimators for V^π. We will study two options. The first is a simple heuristic used previously in the RL literature, namely a constant estimator for V̂ which is equal to the mean of all rewards in the dataset (Williams, 1992). The second uses the parametrisation V̂(s) = σ(ξ⊤φ(s)), where σ is the logistic function and φ(s) is a Euclidean representation of the state. For V̂ of the above form, the Hessian can be computed efficiently. To estimate this value function, we use the estimator from Maei (2011). As φ(s) we will be using the hidden state of the LSTM. The rationale for this is as follows. In an LSTM trained using maximum likelihood, the hidden state contains useful information about the maximum likelihood objective. If there is overlap between the maximum likelihood and reinforcement learning objectives, we can expect the hidden state to also carry useful information about the RL objective. Therefore, we can use the hidden state to estimate the value function, whose expectation is the RL objective. We have described our implementation of V̂ in Appendix A and specified some implementation details in Section 4.
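The parametric estimator has a simple closed form; below is a sketch in which ξ and φ(s) are illustrative placeholders standing in for the learned weights and the LSTM hidden state:

```python
import math

def v_hat(xi, phi_s):
    """V_hat(s) = sigmoid(xi . phi(s)): the logistic-of-linear parametrisation."""
    z = sum(x * p for x, p in zip(xi, phi_s))  # inner product xi . phi(s)
    return 1.0 / (1.0 + math.exp(-z))          # logistic function
```

The output always lies in (0, 1), which is convenient when rewards are normalised to that range.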

Given: Data {(x_i, y_i, r_i)}_{i=1}^n, step size α, return coefficient λ, initial parameters θ_0.

  • Set θ ← θ_0.

  • For each epoch:

    • Set Δθ ← 0.

    • For each episode i:

      • Compute ρ_t = π(a_t|s_t; θ)/q(a_t|s_t) for each time step t.

      • For each time step t in reverse:

        • (i) Compute the λ-return r̂_t^λ.

        • (ii) Accumulate the policy update: Δθ ← Δθ + ρ_t ψ(s_t, a_t) ( r̂_t^λ − V̂(s_t) ).

        • Compute updates for the value function estimate V̂.

    • Update the policy: θ ← θ + α Δθ.

    • Update the value function estimate V̂.

Algorithm 1 Batch Policy Gradient (BPG)

Comparison with Other RL Approaches in NLP

Policy gradient methods have been studied extensively in on-policy settings, where the goal is to improve the current policy on the fly (Williams, 1992; Amari, 1998). To our knowledge, all RL approaches in Seq2Seq models have also adopted on-policy policy gradient updates (Bahdanau et al., 2016; Williams & Zweig, 2016; Ranzato et al., 2015; Li et al., 2016). However, on-policy methods break down in off-policy settings, because any update must account for the probability of the action under the target policy. For example, suppose the behaviour policy took action a at state s and received a low reward. Then we should modify the target policy so as to reduce π(a|s). However, if the target policy is already assigning low probability to a, then we should not be as aggressive when making the updates. The re-weighting via importance sampling does precisely this.

A second difference is that we study batch RL. Standard on-line methods are designed for settings where we have to continually improve the target policy while exploring using the behaviour policy. Critical to such methods is the estimation of future rewards at the current state and of the future actions that will be taken by both the behaviour and target policies. To tackle this, previous research either ignores future rewards altogether (Williams, 1992), resorts to heuristics to distribute a delayed reward to previous time steps (Bahdanau et al., 2016; Williams & Zweig, 2016), or makes additional assumptions about the distribution of the states, such as stationarity of the Markov process (Maei, 2011; Degris et al., 2012). However, in batch settings, the λ-return from a given time step can be computed directly, since the future actions and rewards are available in the dataset. Access to this information provides a crucial advantage over techniques designed for on-line settings.

4 Experiments

Implementation Details: We implement our methods using Chainer (Tokui et al., 2015) and group sentences of the same length together in the same batch to make use of GPU parallelisation. Since different batches can have different lengths, we do not normalise the gradients by the batch size, as we should take larger steps after seeing more data. However, we normalise by the length of the output sequence to allocate equal weight to all sentences. We truncate all output sequences to a fixed length and cap the batch size. We found it necessary to use a very small step size, as otherwise the algorithm has a tendency to get stuck at bad parameter values. While importance re-weighting is necessary in off-policy settings, it can increase the variance of the updates, especially when q(a|s) is very small. A common technique to alleviate this problem is to clip the importance weight ρ (Swaminathan & Joachims, 2015). In addition to single ρ values, our procedure involves a product of ρ values when computing the future rewards. The effect of large ρ values is a large weight for the score function in step (ii) of Algorithm 1. In our implementation, we clip this weight at a fixed threshold, which controls the variance of the updates and ensures that a single example does not disproportionately affect the gradient.
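The clipping of the cumulative importance weight can be sketched as follows; the threshold value here is a placeholder, not the constant actually used in the experiments:

```python
def clipped_weight(rhos, cap=5.0):
    """Product of per-step importance weights, clipped to control variance."""
    w = 1.0
    for rho in rhos:
        w *= rho
    # capping the product ensures a single high-weight example
    # cannot dominate the gradient update
    return min(w, cap)
```

Clipping biases the gradient estimate slightly but trades that bias for a large reduction in variance, which matters when labelled data is scarce.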

RNN Design:

In both experiments we use deep LSTMs with two layers for the encoder and decoder RNNs. The output of the bottom layer is fed to the top layer, and in the decoder RNN, the output of the top layer is fed to a softmax layer over the vocabulary. When we implement the parametric estimator V̂, we use the hidden state of the bottom LSTM as φ(s). When performing our policy updates, we only change the parameters of the top LSTM and the softmax layer in our decoder RNN. If we were to change the bottom LSTM too, then the state representation φ(s) would also change as the policy changes; this violates the MDP framework. In other words, we treat the bottom layer as part of the environment in our MDP. To facilitate a fair comparison, we only modify the top LSTM and softmax layers in all methods. We have illustrated this set-up in Fig. 1. We note that if one is content with using the constant estimator, then one can change all parameters of the RNN.

Figure 1: Illustration of the encoder and decoder RNNs used in our experiments. We use four different LSTMs for the bottom and top layers of the encoder and decoder networks. In our RL algorithms, we only change the top LSTM and the softmax layer of the decoder RNN, as shown by the red dashed lines.

4.1 Some Synthetic Experiments on the Europarl dataset

To convey the main intuitions of our method, we compare our methods against other baselines on a synthetic task on the European parliament proceedings corpus (Koehn, 2005). We describe the experimental set-up briefly, deferring details to Appendix B.1. The input sequence to the RNN was each sentence in the dataset. Given an input, the goal was to reproduce the words in the input without repeating words in a list of forbidden words. The RL algorithm does not explicitly know either part of this objective but has to infer it from the stochastic rewards assigned to input-output sequences in the dataset. We used a training set of input-output-reward triplets for the RL methods.
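A toy scoring rule in the spirit of this synthetic task might look as follows; the exact scoring used in the experiments is detailed in the paper's Appendix B.1 and may differ, so this is an illustrative guess only:

```python
def score(input_words, output_words, forbidden):
    """Reward reproducing the input's words; penalise forbidden ones."""
    hits = len(set(input_words) & set(output_words))       # words reproduced
    penalty = sum(1 for w in output_words if w in forbidden)  # violations
    return hits - penalty
```

Since the bot only ever observes noisy realisations of such a score, it must infer both parts of the objective (reproduction and avoidance) from the rewards alone.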

We initialised all methods by maximum likelihood training on input-output sequences where the output sequence was the reverse of the input sequence. The maximum likelihood objective thus captures part of the RL objective. This set-up reflects naturally occurring practical scenarios for the algorithm, where a large amount of unlabelled data can be used to bootstrap a policy if the maximum likelihood and reinforcement learning objectives are at least partially aligned. We trained the RL algorithms for several epochs on the training set. At the end of each epoch, we generated outputs from the policy on a test set of inputs and scored them according to our criterion. We plot the test set error against the number of epochs for various methods in Fig. 2.

Fig. 2 compares BPG with and without maximum likelihood initialisation, and a version of BPG which does not use importance sampling. Clearly, bootstrapping an RL algorithm with ML can be advantageous, especially if data is abundantly available for ML training. Further, without importance sampling, the algorithm is not as competitive, for reasons described in Section 3. In all cases, we used a constant estimator for V̂. The dashed line indicates the performance of ML training alone. The version without importance sampling is similar to the algorithms of Williams & Zweig (2016); Ranzato et al. (2015), except that their methods implicitly use λ = 1.

Fig. 2 also compares BPG with its on-line version, using both the constant and the parametrised estimators for V̂. The on-line versions of the algorithms are a direct implementation of the method in Degris et al. (2012), which does not use the future rewards as we do. The first observation is that while the parametrised estimator is slightly better in the early iterations, it performs roughly the same as the constant estimator in the long run. Next, BPG performs significantly better than its on-line counterpart. We believe this is due to the following two reasons. First, the on-line updates assume stationarity of the MDP. When this does not hold, such as in limited data instances like ours, the SGD updates can be very noisy. Secondly, the value function estimate plays a critical role in the on-line version. While obtaining a reliable estimate is reasonable in on-line settings, where we can explore indefinitely to collect a large number of samples, it is difficult when one only has a limited number of labelled samples. Finally, we compare BPG with different choices for λ in Fig. 2. As noted previously, λ < 1 is useful with stochastic rewards, but choosing too small a value is detrimental. The optimal value may depend on the problem.

Figure 2: Results for the synthetic experiments. (a): Comparison of BPG with and without maximum likelihood (ML) initialisation, and without importance sampling. The dotted line indicates performance of ML training alone. (b): Comparison of BPG with its on-line counterpart; both methods use a constant estimator for the value function. (c): Comparison of BPG with different values of λ. All curves were averaged over experiments where the training set was picked randomly from a pool. The test set was the same in all experiments. The error bars indicate one standard error.

4.2 Restaurant Recommendations

We use data from an on-line restaurant recommendation service. Customers log into the service and chat with a human agent, asking for restaurant recommendations. The agents ask a series of questions about food preferences, group size, etc. before recommending a restaurant. The goal is to train a chatbot (policy) which can replace or assist the agent. For reasons explained in Section 1, maximum likelihood training alone will not be adequate. By obtaining reward labels for responses produced by various other bots, we hope to improve on a bot initialised using maximum likelihood.

Data Collection: We collected data for RL as follows. We trained five different RNN chatbots with different LSTM parameters via maximum likelihood on a dataset of conversations from this service. The bots were trained to reproduce what the human agent said (output y) given the past conversation history (input x). While the dataset is relatively small, we can still expect our bots to do reasonably well since we work in a restricted domain. Next, we generated responses from these bots on a separate set of conversations and had them scored by workers on Amazon Mechanical Turk (AMT). For each response by the bots in each conversation, the workers were shown the conversation history before the particular response and asked to score (label) each response. We collected scores from three different workers for each response and used the mean as the reward.

Policies and RL Application: Next, we initialised bots via maximum likelihood and then used BPG to improve them using the labels collected from AMT. The LSTM hidden state sizes, word embedding sizes and BPG parameters for these bots were chosen arbitrarily and are different from those of the bots used in the data collection described above.

  • Bot-1: BPG with the parametrised estimator for V̂.

  • Bot-2: BPG with the constant estimator for V̂.

Testing: We used a separate test set of conversations comprising many input-output (conversation history, response) pairs. For each of Bot-1 and Bot-2, we generated responses before and after applying BPG. We then had them scored by workers on AMT using the same set-up described above. The same worker labels the before- and after-responses from the same bot; this controls for spurious noise effects and allows us to conduct a paired test. We collected before-and-after label pairs for each of Bot-1 and Bot-2 and compared them using a paired t-test and a Wilcoxon signed rank test.
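The paired comparison described above can be sketched with a minimal paired t-statistic; the scores below are invented, and a real analysis would use a library routine such as SciPy's `ttest_rel`:

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """t-statistic for the paired differences (after - before)."""
    d = [a - b for a, b in zip(after, before)]   # per-pair score differences
    n = len(d)
    # mean difference divided by its standard error
    return mean(d) / (stdev(d) / math.sqrt(n))
```

Pairing removes between-worker variance from the comparison, which is why the same worker must label both the before- and after-response.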

Results: The results are shown in Table 1. The improvements on Bot-2 are statistically significant on both tests, while Bot-1 is significant only on the Wilcoxon test. The large p-values for Bot-1 are due to the noisy nature of AMT experiments, and we believe that we can attain significance if we collect more labels, which would reduce the standard error in both tests. In Appendix B.2 we present some examples of conversation histories and the responses generated by the bots before and after applying BPG. There, we qualitatively discuss specific kinds of issues that we were able to overcome via reinforcement learning.

            Mean (ML)    Mean (ML+BPG)    Paired t-test    Wilcoxon
Bot-1       -            -                0.07930          -
Bot-2       -            -                0.00017          -

Table 1: Results of the Mechanical Turk experiments using the restaurant dataset. The first two columns are the mean labels of all responses before and after applying BPG to the bots initialised via maximum likelihood. The last two columns are the p-values from a paired t-test and a paired Wilcoxon signed-rank test. For both Bot-1 and Bot-2, we obtained 16,808 before and after responses scored by the same worker. Bot-2 is statistically significant on both tests while Bot-1 is significant on the Wilcoxon test.

5 Conclusion

We presented a policy gradient method for batch reinforcement learning to train chatbots. The input to this algorithm is a set of input-output sequences generated by other chatbots or humans, together with stochastic rewards for each output in the dataset. This setting arises in many applications, such as customer service systems, where there is usually an abundance of unlabelled data but labels (rewards) are expensive to obtain and can be noisy. Our algorithm is able to use minimal labelled data efficiently to improve chatbots previously trained through maximum likelihood on unlabelled data. While our method draws its ideas from previous policy gradient work in the RL and NLP literature, there are some important distinctions that contribute to its success in the settings of interest in this work. Via importance sampling, we ensure that the probability of an action is properly accounted for in off-policy updates. By explicitly working in the batch setting, we are able to use knowledge of future actions and rewards to converge faster to the optimum. Further, we use the unlabelled data both to initialise our method and to learn a reasonable behaviour policy. Our method outperforms baselines on a series of synthetic and real experiments.
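The importance-sampling correction described above can be illustrated schematically. The sketch below is a simplified off-policy policy-gradient step for a linear softmax policy, not the paper's exact BPG update: each action's gradient is weighted by a (clipped) importance ratio between the current policy and the behaviour policy that generated the data, and the known future rewards of the episode supply the return, reflecting the batch setting.

```python
import numpy as np

def off_policy_pg_step(theta, episodes, behaviour_logps, lr=0.1, clip=5.0):
    """One schematic off-policy policy-gradient step (illustrative only).
    theta is an (actions x features) matrix for a linear softmax policy;
    episodes is a list of [(features, action, reward), ...] trajectories
    generated by a behaviour policy whose per-step action log-probabilities
    are given in behaviour_logps."""
    grad = np.zeros_like(theta)
    for episode, logqs in zip(episodes, behaviour_logps):
        rewards = [r for _, _, r in episode]
        for t, ((x, a, _), logq) in enumerate(zip(episode, logqs)):
            logits = theta @ x
            p = np.exp(logits - logits.max())
            p /= p.sum()                            # current policy pi(.|s)
            rho = min(p[a] / np.exp(logq), clip)    # clipped importance ratio
            ret = sum(rewards[t:])                  # future return (known in batch)
            grad_logp = -np.outer(p, x)             # d log pi(a|s) / d theta
            grad_logp[a] += x
            grad += rho * ret * grad_logp
    return theta + lr * grad / len(episodes)

theta0 = np.zeros((2, 2))
# One hypothetical episode: the behaviour policy chose action 0 with
# probability 0.5, and that action received reward 1.
theta1 = off_policy_pg_step(theta0, [[(np.array([1.0, 0.0]), 0, 1.0)]],
                            [[np.log(0.5)]])
```

After this step the current policy assigns higher probability to the rewarded action, with the update scaled by how much more (or less) likely that action is under the current policy than under the behaviour policy.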

The ideas presented in this work extend beyond chatbots. They can be used in applications such as question answering, generating image descriptions and machine translation where an output sentence generated by a policy is scored by a human labeller to provide a weak supervision signal.


Acknowledgements

We would like to thank Christoph Dann for the helpful conversations and Michael Armstrong for helping us with the Amazon Mechanical Turk experiments.



Appendix A Implementation of the Value Function Estimator

We present the details of the value function estimation algorithm of Maei (2011) in Algorithm 2. While Maei (2011) gives an on-line version, we present a batch version here in which the future rewards of an episode are known. We use a parametrisation in which the value is the logistic function applied to a linear function of the state, whose weights are the parameters to be estimated.

The algorithm requires two step sizes, one for the updates to the main parameter and one for the ancillary parameter, which we choose following the recommendations in Borkar (1997). When we run BPG, we perform steps (a)-(f) of Algorithm 2 in step (iii) of Algorithm 1, and the last two update steps of Algorithm 2 in the last update step of Algorithm 1.

The gradient and Hessian of this parametrisation follow in closed form from the derivatives of the logistic function. The Hessian-vector product in step (d) of Algorithm 2 can be computed in time linear in the parameter dimension, without forming the Hessian explicitly.
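As an illustration, assume the concrete form V(w) = sigmoid(w.s) for state features s (an assumed form standing in for the paper's parametrisation, whose exact symbols are not reproduced here). The Hessian is then a scalar multiple of an outer product, so its product with a vector needs only inner products:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def value_hessian_vector(w, s, v):
    """Hessian-vector product of V(w) = sigmoid(w.s) with respect to w,
    assuming this logistic-of-linear form. The Hessian is
    sigma''(w.s) * s s^T with sigma'' = sig*(1-sig)*(1-2*sig), so
    H v = sigma'' * (s.v) * s costs O(d) and never forms the d x d matrix."""
    sig = sigmoid(w @ s)
    return sig * (1 - sig) * (1 - 2 * sig) * (s @ v) * s

w = np.array([0.3, -0.2, 0.5])
s = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 0.0, 1.0])
hv = value_hessian_vector(w, s, v)
```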

Given: Data, step sizes, return coefficient, and initial parameter values.

  • Set the parameter and the ancillary parameter to their initial values.

  • For each epoch:

    • Reset the accumulated updates.

    • For each episode:

      • Initialise the per-episode quantities.

      • For each time step, in reverse: perform steps (a)-(f) to accumulate the updates to the parameter and the ancillary parameter.

    • Update the parameter.

    • Update the ancillary parameter.

Algorithm 2

Appendix B Addendum to Experiments

b.1 Details of the Synthetic Experiment Set up

Given an input and output sequence, the reward was the average of five Bernoulli draws whose parameter depended on two quantities: the fraction of words common to the input and output sequences, and the fraction of forbidden words in the output. As the forbidden words, we used the most common words in the dataset. Thus, an output sequence that repeats many of the allowed words from the input while avoiding forbidden words receives a high expected score.
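A sketch of this synthetic reward is below. The exact numerical weights combining the two fractions are not reproduced in the text, so the equal-weight combination and the small forbidden-word list here are illustrative assumptions.

```python
import random

FORBIDDEN = {"the", "a", "to", "is", "you"}   # stand-in for the most common words

def bernoulli_param(input_words, output_words):
    """Assumed form of the Bernoulli parameter: it increases with the fraction
    of input words reproduced in the output and decreases with the fraction of
    forbidden (overly common) words in the output. The equal weighting of the
    two terms is an illustrative assumption."""
    common = len(set(input_words) & set(output_words)) / max(len(set(input_words)), 1)
    forbidden = sum(w in FORBIDDEN for w in output_words) / max(len(output_words), 1)
    return 0.5 * (common + (1.0 - forbidden))

def reward(input_words, output_words, rng=random):
    """Average of five Bernoulli draws with the parameter above."""
    p = bernoulli_param(input_words, output_words)
    return sum(rng.random() < p for _ in range(5)) / 5.0
```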

The training and testing sets for reinforcement learning were obtained as follows. We trained bots using maximum likelihood on input-output sequences as indicated in Section 4.1, fixing the LSTM hidden state size, word embedding size and vocabulary size. We used these bots to generate outputs for separate input sequences, and this collection of input-output pairs was scored stochastically as described above to produce a pool of input-output-score triplets. From this pool we used a fixed set of triplets for testing across all our experiments; from the remaining data points, we randomly selected a training set for each execution of an algorithm. For all RL algorithms, we used a multi-layer LSTM with learned word embeddings.

b.2 Addendum to the AMT Restaurant Recommendations Experiment

More Details on the Experimental Set up

We collected the initial batch of training data for RL as follows. We trained, via maximum likelihood on a set of conversations, five RNN bots with differing LSTM hidden sizes and word embedding sizes. The inputs were all words from the history of the conversation, truncated to the most recent words up to a maximum length. The outputs were the actual responses of the agent, also truncated to a maximum length. As the vocabulary, we used the most commonly occurring words in the dataset and replaced the rest with an <UNK> token.

Using the bots trained this way, we generated responses on 1216 separate conversations. This data was sent to AMT workers, who were asked to label each response on the following scale.

  • 2: The response is coherent and appropriate given the history and advances the conversation forward.

  • 1: The response has some minor flaws but is discernible and appropriate.

  • 0: The response is either completely incoherent or inappropriate and fails to advance the conversation forward.

Some Qualitative Results

In Tables 2 and 3 we present some examples. The text in black/grey shows the conversation history; the response in blue is by the bot trained via maximum likelihood (ML) alone, and the response in red is by the bot after improvement using our reinforcement learning algorithm (BPG).

The first two examples of Table 2 show cases where the ML bot repeated generic questions (on budget, group size etc.) even though they had already been answered previously. After applying BPG, we are able to correct such issues, even though some grammatical errors remain. In the second, third and fourth examples, we see that the ML+BPG bot takes context into consideration well when responding. For example, the customer asks for oriental/Mexican/Italian food; while the ML bot does not take this into consideration, the ML+BPG bot provides relevant answers. However, in the third example, the name of the restaurant suggests that the food might be Indian and not Mexican. In the final example of Table 2, the customer asks a direct question about smoking. The ML bot provides an irrelevant answer, whereas the ML+BPG bot directly responds to the question.

In some examples, the ML bot had a tendency to produce sentences that were grammatically correct but nonsensical, sensible but grammatically incorrect, or complete gibberish. We were able to correct such issues via RL; the first three examples of Table 3 present such cases. Occasionally the opposite happened: the last example of Table 3 is one such instance.

Customer:   Hi there, I’m looking for a restaurant recommendation.
Agent:        How many in your party?
Customer:   5, including children.
ML:            How many people will be joining you? [0]
ML+BPG:   That is a fun size, and what is your budget joining you? [1.5]

Customer:   Hey… Sorry, it’s the anniversary of my wife’s death. I always do oriental tonight.
Agent:        I’m sorry to hear that. I will do my best to find just what you need.
Customer:   Thank you very much.
Agent:        What kind of budget are you looking at?
Customer:   Doesn’t matter... $100 sounds like a good number.
Agent:        Will it be just you, or are others going to join you?
Customer:   It’s just me tonight, but if you are off soon I wouldn’t mind a friendly ear. No pressure.
ML:            What type of budget are you looking to stay within? [0.33]
ML+BPG:   How about: Café Hungry <UNK> ? They serve sea American to oriental on site such? [1.33]

Customer:   I would love to find a great Mexican restaurant!
Agent:        Great, and will anybody be joining you?
Customer:   There will be a total of 3 of us.
Customer:   Could you find me a place with outdoor seating?
Agent:        How about: The Painted Burrito. They serve Mexican food.
Customer:   Eh, I’ve heard that one isn’t great. Can you please suggest another?
ML:            How about: The Hungry .w seafood restaurant? they serve sea and American food. [0.33]

ML+BPG:   I am happy to, one moment. How about: Diva Indian Bistro Boston bar? They serve Mexican food.


Customer:   Hello, I’m looking for a restaurant.
Agent:        I can help! Please tell me what kind of foods you most enjoy. I see you are in the Cambridge area.
Customer:   I would like a restaurant that serves Italian food.
Agent:        Are you looking for something a little less expensive or are you looking for a high end restaurant?
Customer:   I am within a budget, 20 to 50 dollars.
Agent:        I highly recommend The Redhouse Restaurant. It serves …
Customer:   Is there another restaurant? the restaurant would also need to host a party size of 5 people.
ML:            How about: Amuse? They serve French and American food. [0]
ML+BPG:   How about: Ocean prime - Boston Seaport? They serve sea, American and Italian food. [2]

Agent:        How about Cafe at Taj Boston?
Customer:   Oh, I just ate there last night. Anything else?
Agent:        Try Mela Restaurant at 578 Tremont St. Anything else I can help you with today?
Customer:   That sounds nice. Do they allow smoking?
ML:            This sure is within your budget for all. [0.5]
ML+BPG:   No. [1.5]

Table 2: Some qualitative results on the AMT experiment using restaurant data. In each cell, the text in black/grey shows the history of a conversation between the customer and a human agent; the dots mean that part of the conversation history is hidden. The blue (ML) response is by the bot trained using maximum likelihood alone. The red (ML+BPG) response is by the bot improved with BPG after maximum likelihood training. The numbers to the right of the responses are the average scores each response received from 2-3 AMT workers.

Agent:        How may I help you?
Customer:   I’d like to find a French restaurant.
Agent:        Do you have a budget in mind?
Customer:   Between $50 and $75.
Agent:        How many will be in your party?
Customer:   Just me.
ML:            And how will you be dining? [0]
ML+BPG:   How will you be traveling to the restaurant? [2]

Agent:        Hello, how may I help you?
Customer:   We are kinda hungry, any suggestion in mind?
Agent:        What type of cuisine do you like?
Customer:   I want them Americano food.
ML:            Very evening. Will you be dining alone? [1]
ML+BPG:   Excellent choice. How many people will be in your party? [2]

Customer:   I’d like to find a good local restaurant.
Agent:        What type of cuisine would you like?
Customer:   I have a hankering for Indian curry.
Agent:        Will you be needing parking?
Customer:   We’ll be driving altogether, so yes.
ML:            Thank you for 1. How about: T. w your curry this place should be dining? [0]
ML+BPG:   How about: The Maharaja? They serve Indian and American food. [2]

Agent:        How can I help you today?
Customer:   I would like a restaurant recommendation.
Agent:        Okay what type of resturant?
Customer:   Chinese would be great.
Agent:        Who will be dining, how will you be travelling?
Customer:   Just me and by bus.
ML:            Okay, thanks. Do you have a budget in mind? [2]
ML+BPG:   Do you want a wonderful place evening, or whats your party adults, and a place. [0]

Table 3: Qualitative results on the AMT experiment using restaurant data, continued. See caption under Table 2 for more details.