Hierarchical Text Generation and Planning for Strategic Dialogue

12/15/2017 · by Denis Yarats, et al.

End-to-end models for strategic dialogue are challenging to train, because linguistic and strategic aspects are entangled in latent state vectors. We introduce an approach to generating latent representations of dialogue moves, by inducing sentence representations to maximize the likelihood of subsequent sentences and actions. The effect is to decouple much of the semantics of the utterance from its linguistic realisation. We then use these latent sentence representations for hierarchical language generation, planning and reinforcement learning. Experiments show that using our message representations increases the reward achieved by the model, improves the effectiveness of long-term planning using rollouts, and allows self-play reinforcement learning to improve decision making without diverging from human language. Our hierarchical latent-variable model outperforms previous work both linguistically and strategically.


1 Introduction

Word-by-word approaches to text generation have been successful in many tasks. However, they have limitations in under-constrained generation settings, such as dialogue response or summarization, where models have significant freedom in the semantics of the text to generate. In such cases, models are prone to overly generic responses that may be valid but suboptimal (Li et al., 2015, 2016; Das et al., 2017). Further, such models are uninterpretable and somewhat intellectually dissatisfying, because they do not cleanly distinguish between the semantics of language and its surface realisation. Entangling form and meaning is problematic for reinforcement learning, where backpropagating errors caused by semantic decisions can adversely affect the linguistic quality of the text (Lewis et al., 2017), and for candidate generation in long-term planning, as linguistically diverse text may lack semantic diversity.

We focus on negotiation dialogues, where the text generated by the model has consequences that can be easily measured. Substituting similar words (for example, a "one" for a "two") can have a large impact on the end-task reward achieved by a dialogue agent. We use a hierarchical generation approach for a strategic dialogue agent: the agent first samples a short-term plan in the form of a latent sentence representation, and then conditions on this plan during generation, allowing precise and consistent generation of text to achieve a short-term goal. In doing so, we aim to disentangle "what to say" from "how to say it". To this end, we introduce a method for learning discrete latent representations of sentences based on their effect on the continuation of the dialogue.

Recent work has explored hierarchical generation of dialogue responses, where a latent variable z is inferred to maximize the likelihood of a message x_t given previous messages (Serban et al., 2016a, c; Wen et al., 2017; Cao & Clark, 2017), which has the effect of clustering similar message strings. Our approach differs in that the latent variable z is optimized to maximize the likelihood of the messages and actions in the continuation of the dialogue, but not of the message x_t itself. Hence, z learns to represent x_t's effect on the dialogue, but not the words of x_t. The distinction is important because messages with similar words can have very different semantics, and conversely the same meaning can be conveyed by different sentences. We show empirically and through human evaluation that our method leads both to better perplexities and end-task rewards, and qualitatively that our representations group sentences that are more semantically coherent but linguistically diverse.

We use our message representations to improve the strategic decision making of our dialogue agent. We improve the model's ability to plan ahead by creating a set of semantically diverse candidate messages, sampled from distinct latent variables z, and then use rollouts to estimate an expected reward for each. We also apply reinforcement learning based on the end-task reward. Previous work has found that RL can adversely affect the fluency of the language generated by the model. We instead show that simply fine-tuning the parameters that choose z allows the model to substantially improve its rewards while maintaining human-like language.

Experiments show that our approach to disentangling the form and meaning of sentences leads to agents that use language more fluently and intelligently to achieve their goals.

2 Background

2.1 Natural Language Negotiations

We focus on the negotiation task introduced by Lewis et al. (2017), as it possesses both linguistic and reasoning challenges. Lewis et al. collected a corpus of human dialogues on a multi-issue bargaining task, in which two agents must divide a collection of items of 3 different types (books, hats and balls) between them. Actions correspond to choosing a particular subset of the items, and the agents' actions are compatible if each item is assigned to exactly one agent.

More formally, the agents a and b are initially given a space of possible agreements A, and value functions v_a and v_b, which specify a non-negative reward for each agreement o in A. Agents cannot directly observe each other's value functions and can only infer them through the dialogue. The agents sequentially exchange turns of natural language x_t, each consisting of words, until one agent sends a special turn that ends the dialogue. Then, both agents independently enter agreements o_a and o_b. If the agreements are compatible, each agent receives a reward based on its agreement and its value function. If the agreements are incompatible, neither agent receives any reward. A training dialogue, from agent a's perspective, consists of the agreement space A, value function v_a, messages x_0..T, and agreement o_a.
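As a concrete illustration of the reward rule just described, the following sketch computes both agents' rewards for a proposed division. The helper name, item counts, and value functions here are hypothetical, not taken from the paper's data:

```python
def reward(choice_a, choice_b, values_a, values_b, counts):
    """Each agent claims a dict of item -> quantity. Agreements are
    compatible iff every item is assigned exactly once; otherwise
    neither agent receives any reward."""
    compatible = all(
        choice_a.get(item, 0) + choice_b.get(item, 0) == counts[item]
        for item in counts
    )
    if not compatible:
        return 0, 0
    r_a = sum(values_a[i] * choice_a.get(i, 0) for i in counts)
    r_b = sum(values_b[i] * choice_b.get(i, 0) for i in counts)
    return r_a, r_b

# Illustrative agreement space: 2 books, 1 hat, 2 balls.
counts = {"book": 2, "hat": 1, "ball": 2}
values_a = {"book": 1, "hat": 4, "ball": 2}
values_b = {"book": 3, "hat": 2, "ball": 1}
# Agent a takes the hat and both balls; agent b takes both books.
print(reward({"hat": 1, "ball": 2}, {"book": 2}, values_a, values_b, counts))
# prints (8, 6)
```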

2.2 Challenges in Text Generation

We identify a number of challenges for end-to-end text generation for strategic dialogue. These problems have been identified in other text generation settings, but strategic dialogue makes an interesting test case, where decisions have measurable consequences.

  • Lack of semantic diversity: Multiple samples from a model are often paraphrases of the same intent. This lack of diversity is a problem if samples are later re-ranked by a long-term planning model.

  • Lack of linguistic diversity: Neural language models often capture the head of the distribution, providing less varied language than people (Li et al., 2015).

  • Lack of internal coherence: Messages generated by the model often lack self consistency—for example, I’ll take one hat, and give you all the hats.

  • Lack of contextual coherence: Utterances may also lack coherence given the dialogue context so far. For example, Lewis et al. (2017) identify cases where a model starts a message by indicating agreement, but then proposes a counter offer.

  • Entanglement of linguistic and strategic parameters: End-to-end approaches do not cleanly distinguish between what to say and how to say it. This is problematic as reinforcement learning aiming to improve decision making may adversely affect the quality of the generated language.

We argue that these limitations partly stem from the word-by-word sampling approach to generation, which forms no explicit plan, in advance of generation, for what the meaning of the sentence should be. In §9, we show that our hierarchical approach to generation helps with these problems.

Figure 1: The action classifier p_action, which predicts a distribution over actions using a GRU with attention.

3 Action Classifier

Initially, we train an action classifier p_action (Figure 1) that predicts the final action chosen at the end of the dialogue. This classifier is used in all versions of our model. We implement the action classifier as an RNN with attention (Bahdanau et al., 2014): we first encode the set of possible agreements A and each sentence x_t, then apply attention over the sentence encodings to obtain a fixed-size representation of the dialogue, and finally apply a softmax classifier over actions.

We train this network to minimize the negative log likelihood of the action a given the set of possible actions A and a dialogue x_0..T:

L = −log p_action(a | A, x_0..T)
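The attention-pooling and softmax steps can be sketched in NumPy as follows. This is a minimal illustration with hypothetical shapes and names; in the model, the hidden states and the attention query are produced by a learned GRU:

```python
import numpy as np

def attention_pool(H, w):
    # H: (T, d) encoder hidden states; w: (d,) attention query.
    scores = H @ w                       # one score per timestep
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()          # attention weights
    return alpha @ H                     # (d,) fixed-size summary

def action_probs(h, W):
    # W: (num_actions, d) scoring matrix; softmax over candidate actions.
    logits = W @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

With uniform scores, `attention_pool` reduces to mean pooling, and `action_probs` always returns a valid probability distribution over actions.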

4 Baseline Hierarchical Model

As a baseline, we train a hierarchical encoder-decoder model (Figure 2) to maximize the likelihood of the sentences in the training dialogues, similarly to Serban et al. (2016b). The model contains: an action-value encoder that embeds the agreement space A and the value function v; a sentence encoder that embeds individual messages x_t; a sentence-level encoder that reads the sentence embeddings and the action-space encoding to produce a dialogue state s_t; and a decoder that produces message x_{t+1}, conditioned on s_t.

The encoder and decoder share a word embedding matrix. We minimize the following loss over the training set:

L = −Σ_t log p(x_{t+1} | s_t)

Figure 2: Baseline hierarchical model.

5 Learning Latent Message Representations

The central part of our model is a method for encoding messages as discrete latent variables z_t. The goal of this model is to learn message representations that reflect a message's effect on the dialogue, but abstract over semantically equivalent paraphrases. The discrete nature of the latent variables allows us to make sequential decisions efficiently, by choosing z_t at each step to govern the outcome of the dialogue. We show that such an approach is helpful for planning and reinforcement learning.

Our representation learning model (Figure 3a) has a similar structure to that of §4, except that the message embedding is used as input to a stochastic node z_t, formed by a softmax over discrete latent states. We use expectation maximization to learn how to assign messages to clusters so as to maximize the likelihood of future messages and actions.

After each message x_t, the dialogue state is updated with the representation z_t to give hidden state s_t. From s_t, we train the model to predict the next message x_{t+1} and an action a_t. In the training dialogues, there is only an action after the final turn T; for earlier turns t < T, we use a soft proxy action by regressing to the distribution over actions predicted by p_action. The proxy a_t is therefore a distribution over which deal would be agreed if the dialogue stopped after message x_t, and can be thought of as a latent stand-in for a traditional annotated dialogue state (Williams et al., 2013). When predicting x_{t+1} and a_t, the model only has access to the latent variables z_0..t, so z_t must contain useful information about the meaning of x_t. We employ a hierarchical RNN, in which the message and action-space encodings are passed through this discrete bottleneck.

We minimize the following loss over the training set:

L = −Σ_t [ log p(x_{t+1} | z_0..t) + log p(a_t | z_0..t) ]

We optimize the latent variables using minibatch Viterbi expectation maximization (Dempster et al., 1977). For each minibatch, at each timestep t, we compute the assignment

z_t* = argmax_z p(x_{t+1}, a_t | z_0..t−1, z_t = z)

which requires a separate forward pass for each candidate value of z_t. We then advance to the next timestep using z_t* to update the hidden state, and finally perform a gradient update maximizing the likelihood under the chosen assignments. At convergence, we extract the message representations z_t.
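The hard E-step described above can be sketched as follows. The interfaces are hypothetical: `loss_fn` stands in for the forward pass scoring the continuation under a candidate z, and `step_fn` for advancing the hidden state with the chosen assignment:

```python
def viterbi_em_assignments(loss_fn, state, num_z, num_steps, step_fn):
    """Greedy (Viterbi) E-step: at each timestep, pick the latent value
    with the lowest continuation loss, then advance the state with it.

    loss_fn(state, t, z) -> NLL of future messages/actions given z
    step_fn(state, t, z) -> next hidden state (illustrative interface)
    """
    assignments = []
    for t in range(num_steps):
        # one forward pass per candidate z; keep the argmin NLL
        z_star = min(range(num_z), key=lambda z: loss_fn(state, t, z))
        assignments.append(z_star)
        # advance using the chosen assignment before the next timestep
        state = step_fn(state, t, z_star)
    return assignments
```

The M-step (a gradient update on the model parameters under these fixed assignments) is omitted here.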

Figure 3: We pre-train a model to learn a discrete encoder for sentences, which bottlenecks each message x_t through a discrete representation z_t (Figure 3a, the clustering model; §5). This architecture forces z_t to capture the aspects of x_t most relevant for predicting future messages and actions. We then extract the learned discrete representations (marked by orange ellipses) and train our full model (Figure 3b): a conditional language model is trained to translate representations into messages (§6.1), and a prediction model is trained to predict a distribution over z_t given the dialogue history (§6.2).

6 Hierarchical Text Generation

We then train a new hierarchical dialogue model (Figure 3b), which uses the pre-trained representations z_t to predict messages x_t. First, we train a recurrent neural network that learns to translate the latent variables into fluent text in context. Then, we optimize a model to maximize the marginal likelihood of the training sentences.

6.1 Conditional Language Model

We train a conditional language model to translate the pretrained representation z_t and encodings of the previous messages x_0..t−1 into a message x_t, by minimizing the following loss:

L = −Σ_t log p(x_t | z_t, x_0..t−1)

Unlike the baseline model, text generation does not condition explicitly on the agent's value function v or the action space A: all knowledge of the goals and available actions is bottlenecked through the dialogue state. This restriction forces the text generation to depend strongly on z_t.

6.2 Latent Variable Prediction Model

At test time, z_t is not available, as it is computed from the future dialogue. Instead, we train a model p(z_t | x_0..t−1) to predict z_t conditioned on the current dialogue context. We optimize this model to maximize the marginal likelihood of the training messages, without updating the conditional language model. The model learns to reconstruct the distribution over z_t that best explains message x_t.
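The marginal likelihood being maximized can be illustrated with a toy computation. This is purely illustrative; in the model, both distributions are produced by neural networks:

```python
import math

def marginal_log_likelihood(p_z, p_x_given_z):
    # log p(x | context) = log Σ_z p(z | context) * p(x | z, context),
    # summing the latent variable out of the joint.
    return math.log(sum(pz * px for pz, px in zip(p_z, p_x_given_z)))
```

For example, with two latent states predicted at probabilities 0.5 each, and message likelihoods 0.2 and 0.4 under them, the marginal likelihood of the message is 0.3; gradient ascent on this objective shifts p(z) toward latent states that explain the message well.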

6.3 Decoding

To generate an utterance x_t, the model first samples a predicted plan z_t from p(z_t | x_0..t−1). The model then sequentially generates the tokens of x_t, conditioned on the plan z_t and the context x_0..t−1.
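This two-stage decoding procedure can be sketched as follows, with `generate_tokens` a stand-in for the conditional language model of §6.1:

```python
import random

def decode(p_z, generate_tokens, context, rng=random):
    """Two-stage decoding: sample a discrete plan, then realize it.

    p_z: probabilities over latent plans from the prediction model;
    generate_tokens(z, context): stand-in for the conditional LM.
    """
    # 1) sample a short-term plan z from the latent prediction model
    z = rng.choices(range(len(p_z)), weights=p_z, k=1)[0]
    # 2) realize the plan word by word with the conditional language model
    return z, generate_tokens(z, context)
```

The key design point is that all stochasticity about *what to say* is resolved in step 1, so the word-level model in step 2 can generate precisely and consistently toward a single goal.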

7 Hierarchical Reinforcement Learning

Lewis et al. (2017) experiment with end-to-end reinforcement learning to fine-tune pre-trained supervised models. The model engages in a dialogue with another model, achieving reward r, which is then backpropagated using policy gradients. One challenge is that, because the model parameters govern both strategic and linguistic aspects of generation, backpropagating errors can adversely affect the quality of the generated language. To avoid divergence from human language, we experiment with fixing all model parameters except those of the latent prediction model p(z_t | x_0..t−1). This allows reinforcement learning to improve decisions about what to say, without affecting the language generation parameters. A similar approach was taken in a different dialogue setting by Wen et al. (2017).
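A minimal sketch of this idea, assuming a REINFORCE-style policy-gradient update applied only to the logits of the plan distribution p(z) while every other parameter stays frozen. The names and the toy reward are illustrative, not the paper's implementation:

```python
import numpy as np

def reinforce_step(theta, reward_fn, lr, rng):
    """One policy-gradient update on the logits of p(z) only; the
    (frozen) language model that realizes z is untouched."""
    probs = np.exp(theta - theta.max())
    probs = probs / probs.sum()
    z = rng.choice(len(theta), p=probs)      # sample a plan
    r = reward_fn(z)                         # end-task reward
    grad = -probs
    grad[z] += 1.0                           # d log p(z) / d theta
    return theta + lr * r * grad

rng = np.random.default_rng(0)
theta = np.zeros(4)
for _ in range(300):
    # toy reward: only plan 0 pays off
    theta = reinforce_step(theta, lambda z: 1.0 if z == 0 else 0.0, 0.1, rng)
```

After training, the policy concentrates on the rewarded plan; because only the plan logits moved, the surface realization of each plan is exactly as human-like as before fine-tuning.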

8 Hierarchical Planning

Lewis et al. (2017) propose planning in dialogue using rollouts. First, a set of unique candidate messages is sampled from the model. Then, multiple rollouts of the future dialogue are sampled for each candidate, and the outcomes are scored according to the value function, to estimate the expected reward of candidate x:

R(x) = E[r(o) | x]     (1)

The expectation is approximated with samples, and the candidate with the highest expected reward is returned:

x* = argmax_x R(x)     (2)

One challenge is that, even though the candidates can be constrained to be different strings, it is difficult to enforce semantic diversity. For example, if all the candidates are paraphrases of the same intent, then the choice makes little difference to the outcome of the dialogue. To improve the diversity of candidate generation, we take a hierarchical approach of first sampling unique latent intents z from p(z_t | x_0..t−1), and then, for each z, generating a candidate turn conditioned on that intent. We then estimate the reward of each candidate message using Equation 1, and finally choose a message as in Equation 2.
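The diverse-rollout selection can be sketched as follows. The interfaces are hypothetical: `generate` stands in for the conditional language model and `rollout_reward` for scoring one sampled rollout of the future dialogue:

```python
def diverse_rollout_choice(p_z, k, generate, rollout_reward, n_samples=4):
    """Pick the k most probable *distinct* latent plans, realize one
    candidate message per plan, and return the candidate whose average
    rollout reward is highest (illustrative interfaces)."""
    # top-k unique latent intents, by predicted probability
    top_z = sorted(range(len(p_z)), key=lambda z: -p_z[z])[:k]
    # one candidate message per intent -> semantically diverse set
    candidates = [(z, generate(z)) for z in top_z]

    def score(message):
        # Monte Carlo estimate of Equation 1
        return sum(rollout_reward(message) for _ in range(n_samples)) / n_samples

    # Equation 2: keep the highest-scoring candidate
    return max(candidates, key=lambda c: score(c[1]))
```

Because candidates are keyed by distinct z rather than distinct strings, re-ranking them with rollouts chooses among genuinely different intents, not paraphrases.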

9 Experiments

9.1 Training Details

We used the following hyper-parameters: embeddings and hidden states have 256 dimensions, and for each unique agreement space we learn 50 discrete latent message representations. During training, we optimize the parameters using RMSProp (Tieleman & Hinton, 2012) with an initial learning rate of 0.0005 and momentum, clipping gradients whose norm exceeds 1. We train the models for 15 epochs with a mini-batch size of 16, pick the best snapshot according to validation perplexity, and anneal the learning rate by a factor of 5 each epoch. For RL, we use a smaller learning rate of 0.0001 and a discount factor of 0.95. For supervised learning, hyper-parameters were tuned based on validation perplexity; for RL, we measured the average reward in self-play.

9.2 Baselines

We compare the following models:

  • RNN: A simple word-by-word approach to generation, similar to Lewis et al. (2017).

  • Hierarchical: A baseline model in which the two levels of RNN are connected directly, with no discrete bottleneck (§4), similarly to Serban et al. (2016b).

  • Baseline Clusters: Our model (Figure 3b) without pretraining the sentence encoder. A latent representation z_t of message x_t is inferred to maximize the likelihood of x_t itself. This model is closely related to the Latent Intents Dialogue Model (Wen et al., 2017).

  • Full: Our full model, in which we first pre-train sentence representations z_t to maximize the likelihood of the dialogue continuation, and then train models to predict x_t given z_t, and z_t given the context.

To focus the evaluation on the linguistic and strategic aspects of the dialogue, all systems use the same model for predicting the final agreement represented by the dialogue, which is implemented as a bidirectional GRU with attention over the words of the dialogue.

9.3 Likelihood Models

First, we experiment with models using no RL or rollouts.

9.3.1 Perplexity

Models were developed to maximize the likelihood of human dialogues, which is an indicator of how human-like the language is (we observed qualitatively that the two were strongly correlated). Results are shown in Table 1.

The use of a hierarchical RNN model improves performance over a strong baseline from previous work.

Perhaps surprisingly, our hierarchical latent-variable model is also able to achieve state-of-the-art performance. This shows our model’s discrete encodings of messages are as informative for predicting the future dialogue as the more-expressive embeddings used by the hierarchical baseline.

Model              Validation Perplexity   Test Perplexity
RNN                5.62                    5.47
Hierarchical       5.37                    5.21
Baseline Clusters  5.61                    5.46
Full               5.37                    5.24

Table 1: Likelihood of human dialogues using different models. Our model with discrete message representations is able to achieve state-of-the-art performance, showing that the representations capture relevant aspects of messages for predicting the future dialogue. The size of the 95% CI is within 0.03 for each entry.
Model              Score vs. RNN   Score vs. Hierarchical
RNN                5.33            5.17
Hierarchical       5.37            5.08
Baseline Clusters  4.68            4.66
Full               6.75            6.57

Table 2: Comparison of different models based on their end-task reward. Our clusters substantially improve reward, indicating that they make it easier for supervised learning to model strategic decision making. The size of the 95% CI is within 0.14 for each entry.

9.3.2 Coherence of Clusters

Table 4 shows random samples of messages generated from different clusters by our predicted-state model and by the Baseline Clusters model.

Qualitatively, the states from our model show a higher degree of semantic coherence and greater linguistic variability. Compared to the Baseline Clusters model, our approach tends to generate more dissimilar surface strings but with more similar semantics. Our clusters appear to capture meaning rather than form.

Rollout Type   Score vs. No Rollouts   Score vs. Baseline Rollouts
No Rollouts    5.08                    4.91
Baseline       7.81                    6.57
Diverse        8.41                    7.36

Table 3: Comparison of different rollout strategies for the Full model. Diverse rollouts use distinct latent variables to create more semantic diversity among rollout candidates, significantly improving performance. The size of the 95% CI is within 0.19 for each entry.

Cluster 1
  Baseline Clusters: "i can give you the books but , i would need the hat and the balls" / "i would like the hat and 1 book"
  Full: "i can do that . i need both balls and one book" / "i can't give up the hat , but i can offer you the book and 2 balls"

Cluster 2
  Baseline Clusters: "i need both books and the hat" / "i want the hat"
  Full: "how about you get the hat and 1 ball" / "i need the hat . you can have all the books and the balls"

Cluster 3
  Baseline Clusters: "i can not make that deal . i need the hat and one book" / "i can give you the hat and 1 ball"
  Full: "i can give you the hat and 1 ball" / "i would like the books and a ball"

Cluster 4
  Baseline Clusters: "i need two books and the hat" / "i need the books and the hat"
  Full: "i need the hat , you can have the rest" / "i can give you the balls but i need the hat and books"

Cluster 5
  Baseline Clusters: "i can give you the hat if i can have the rest" / "could i have the books and a ball ?"
  Full: "i want one of each" / "i would like the books and one ball"

Table 4: Messages sampled from different clusters, where 2 books, 1 hat, and 2 balls are available. Our method's clusters are much more semantically coherent than the baseline's, and correspond to different ways of proposing the same deal.

9.3.3 End Task Performance

We measure the performance of the different models on their end-task reward over 1000 negotiations in self-play. Results are shown in Table 2. We find that the use of our latent representations leads to a large improvement in the reward, indicating that our representations make it easier for the supervised model to learn the latent decision making process in the human dialogues it was trained on.

Model            Score vs. Human   Language quality   Number of turns
Full + Rollout   7.45              3.55               4.89
RNN + Rollout    6.99              3.43               4.38
Full + RL        6.26              3.60               6.52
RNN + RL         6.01              3.52               3.99
Full             5.42              3.68               3.07
RNN              5.30              3.56               3.96
Human            6.64              3.85               6.36

Table 5: Performance of our Full model and the highly optimized RNN model against humans. In all cases, our Full model achieves both higher scores and higher-quality language than the RNN model.

9.4 Hierarchical Planning

Next, we evaluate different rollout strategies:

  • Baseline Rollouts: following Lewis et al. (2017), candidate sentences are first sampled from the model, and then the future dialogue is rolled out token by token until the end of the dialogue.

  • Diverse Rollouts: we first choose the top unique latent variables z from the prediction model, and generate one candidate per z. By choosing unique z, we aim to increase the semantic diversity of the candidates.

We evaluate each strategy against the baseline model with word-level rollouts and record the average score. Results are shown in Table 3: the Diverse Rollouts that use our message representations lead to a large improvement over previous approaches.

9.5 Finetuning with Reinforcement Learning

A challenge in using RL for end-to-end text generation models is that optimising for reward can adversely affect language generation. In self-play, the model can learn to achieve a high reward by finding uninterpretable sequences of tokens that the baseline model was not exposed to at training time. We compare several RL approaches:

  • All-RL: Reinforcement learning applied to all parameters after pre-training with supervised learning.

  • All-RL+SV: Interleaved RL and supervised learning updates, weighting the supervised updates with a hyperparameter, similarly to Lewis et al. (2017).

  • Pred-RL: Reinforcement learning used only to fine-tune the latent prediction model p(z_t | x_0..t−1), with all other parameters fixed.

We measure both the average reward of the model (a measure of its ability to achieve its goals) and its perplexity on human dialogues (a measure of how human-like the language is). After a hyper-parameter search, we plot the reward of the best model whose perplexity does not exceed a given threshold.

Results are shown in Figure 4. Using RL on all parameters achieves high rewards at the price of poor-quality language. Fine-tuning only the latent prediction model allows the model to improve its strategic decision making while retaining human-like language.

9.6 Human Evaluation

To confirm our empirical results, we evaluate our model in dialogues with people. We ran 1415 dialogues on MTurk, where humans were randomly paired with either one of the models or another human. We then asked humans to rate the language quality of their partner (from 1 to 5). Results are shown in Table 5. We observe that our model consistently outperforms the baseline model (Lewis et al., 2017) both in the end-task reward and the language quality.

Figure 4: Plotting reward against language quality (lower perplexity is better) during reinforcement learning training, in dialogues with the Hierarchical model. Our method (green) achieves higher rewards while maintaining human-like language (top left of graph).

10 Analysis

Results in §9 show quantitatively that our hierarchical model improves both the likelihood of human-generated language and the average score achieved by the agent. Here, we investigate specific issues that the model improved on, and identify remaining challenges. We analyzed 1000 dialogues between our Full model and the Hierarchical baseline; these models achieve similar perplexity on human dialogues (Table 1).

10.1 Linguistic Diversity

First, we measure the diversity of the agents’ language.

RNN language models are known to prefer overly generic messages. In our task, this often manifests itself as short messages such as deal or ok. We measure the frequency of simple variations on these messages, and find that the Hierarchical model uses generic messages far more often than Full (815 times vs. 245).

The messages sent by Full are also longer on average (8.9 words vs. 6.7, ignoring the end-of-dialogue token), giving further evidence of greater complexity.

We also find that the Full model is substantially more creative in generating new messages beyond those seen in its training data. In total, Full sends 875 unique message strings, of which 525 (60%) do not appear in the training data. In contrast, Hierarchical sends fewer unique message strings (751), and just 18% of these are not copied from the training data.

10.2 Self-consistency of Messages

Models can output inconsistent messages, such as I really need the hat. I can give you the hat and one ball. We searched for messages that mentioned the same item type multiple times, and manually evaluated whether they were consistent. The Full model was more prone to this error than Hierarchical (23 times vs. 11), though this may be a consequence of its greater creativity, and the problem occurred in only roughly 1% of messages.

10.3 Consistency with Input

We also investigate whether messages are consistent with the context. For example, models may emit messages such as I'd like the hat and books; you keep the 3 balls when there are not 3 balls available. We used simple pattern matching for several such errors, and found that the Full model performed slightly better (15 errors vs. 19).

10.4 Consistency with Dialogue Context

Lewis et al. (2017) describe cases where an agent indicates it is simply re-stating an agreement, when it is actually proposing a new deal (e.g. you get 2 hats / Okay deal, so I get 3 hats). Interestingly, we found this behaviour only happened with the models using rollouts. While this tactic is effective against our models, it would be frustrating for humans, and future work should address this issue.

10.5 Repetitiveness

Previous work noted that reinforcement learning models were prone to an extortion tactic of simply repeating the same demand until acceptance. We measured how often agents repeated the same message in a dialogue, comparing the All-RL+SV model based on previous work, with our Pred-RL model. Our model was substantially less repetitive: only 1% of dialogues contained a repetition of the same message, compared to 12% for the baseline.

Input:
  Diverse Rollouts:  1x book value=9, 1x hat value=1, 4x ball value=0
  Baseline Rollouts: 1x book value=0, 1x hat value=6, 4x ball value=1

  Diverse Rollouts:  I will take the book and hat and you can have the balls.
  Baseline Rollouts: I need the hat and two balls
  Diverse Rollouts:  The balls are worthless, I need the hat and the book.
  Baseline Rollouts: I need the hat or no deal
  Diverse Rollouts:  Then no deal.
  Baseline Rollouts: What about the balls?
  Diverse Rollouts:  You can have the hat but I need the book.
  Baseline Rollouts: How about I get the balls and 1 hat?
  Diverse Rollouts:  Ok

Output:
  Diverse Rollouts:  1x book (reward 9/10)
  Baseline Rollouts: 1x hat, 4x ball (reward 10/10)

Figure 5: Dialogue between two models using different types of rollouts. The Diverse Rollouts model makes several attempts to win the hat before compromising.

11 Related Work

Traditional goal-orientated dialogue models have first generated symbolic intents capturing the meaning of the message, and then generated text to match the intent (e.g. Williams & Young (2007); Keizer et al. (2017)). Our approach can be seen as a latent model for generating intents. Our model is most closely related to other recent latent-variable hierarchical dialogue models from Serban et al. (2016c), Wen et al. (2017) and Cao & Clark (2017). An important difference is that these approaches optimize latent representations to maximize the likelihood of generating the next message, whereas our model pretrains z to maximize the likelihood of the continuation of the dialogue, to better capture the semantics of the message rather than its surface form. While other ways of learning discrete latent representations have been proposed recently (van den Oord et al., 2017; Kaiser & Bengio, 2018), we have shown that our approach leads to higher performance on a strategic dialogue task.

Other work has explored generating sentence embeddings for open domain text—for example, based on maximizing the likelihood of surrounding sentences (Kiros et al., 2015), supervised entailment data (Conneau et al., 2017), and auto-encoders (Bowman et al., 2015).

12 Conclusion

We have introduced a novel approach to creating sentence representations, within the context of an end-to-end strategic dialogue system, and have shown that our hierarchical approach improves text generation and planning. We identified a number of challenges faced by previous work, and show empirically that our model improves on these aspects. Future work should apply our model to other dialogue settings, such as cooperative strategic dialogue games (He et al., 2017), or multi-sentence generation tasks, such as long document language modelling (Merity et al., 2016).

References