End-to-end optimization of goal-driven and visually grounded dialogue systems

03/15/2017 · Florian Strub et al. · Google, University of Lille, Inria

End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This vision is too simplistic to capture the intrinsic planning problem of dialogue as well as its grounded nature, which makes the context of a dialogue larger than the sole history. This is why only chit-chat and question answering tasks have been addressed so far using end-to-end architectures. In this paper, we introduce a Deep Reinforcement Learning method, based on the policy gradient algorithm, to optimize visually grounded task-oriented dialogues. This approach is tested on a dataset of 120k dialogues collected through Mechanical Turk and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex picture.


1 Introduction

Ever since the formulation of the Turing Test, building systems that can meaningfully converse with humans has been a long-standing goal of Artificial Intelligence (AI). Practical dialogue systems have to implement a management strategy that defines the system's behavior, for instance to decide when to provide information or to ask for clarification from the user. Although traditional approaches use linguistically motivated rules [Weizenbaum:1966:ECP:365153.365168], recent methods are data-driven and make use of Reinforcement Learning (RL) [lemon2007machine]. Significant progress in Natural Language Processing via Deep Neural Nets [bengio2003neural] made neural encoder-decoder architectures a promising way to train conversational agents [vinyals2015neural, sordoni2015neural, serban2016generative]. The main advantage of such end-to-end dialogue systems is that they make no assumption about the application domain and are simply trained in a supervised fashion from large text corpora [lowe2015ubuntu].

Figure 1: Two example games of the GuessWhat?! dataset. The correct object is highlighted by a green mask.

However, this approach has several drawbacks. First, encoder-decoder models cast the dialogue problem into one of supervised learning, predicting the distribution over possible next utterances given the discourse so far. As with machine translation, this may result in inconsistent dialogues and errors that accumulate over time. This is especially true because the action space of dialogue systems is vast, and existing datasets cover only a small subset of all trajectories, making it difficult to generalize to unseen scenarios [mooney2006learning]. Second, the supervised learning framework does not account for the intrinsic planning problem that underlies dialogue, i.e. the sequential decision-making process that makes a dialogue consistent over time. This is especially true when engaging in a task-oriented dialogue. As a consequence, reinforcement learning has been applied to dialogue systems since the late 90s [levin1997learning, Singh1999], and dialogue optimization has generally been studied more than dialogue generation. Third, supervised models do not naturally integrate the external context (larger than the dialogue history) that dialogue participants most often rely on to interact. This context can be their physical environment, a common task they try to achieve, a map on which they try to find their way, a database they want to access, etc. It is part of the so-called Common Ground, well studied in the discourse literature [COGS:COGS73]. Over the last decades, the field of cognitive psychology has also provided empirical evidence that human representations are grounded in perception and motor systems [barsalou2008grounded]. These theories imply that a dialogue system should be grounded in a multi-modal environment in order to obtain human-level language understanding [DBLP:journals/corr/KielaBVC16]. Finally, evaluating dialogues is difficult as there is no automatic evaluation metric that correlates well with human evaluations [liu2016not].

On the other hand, RL approaches can handle the planning and non-differentiable metric problems, but they require online learning (batch learning is possible but remains difficult with small amounts of data [pietquin2011sample]). For that reason, user simulation has been proposed to explore dialogue strategies in an RL setting [eckert1997user, schatzmann2006survey, pietquin2013]. RL also requires the definition of an evaluation metric, which is most often related to task completion and user satisfaction [walker1997paradise]. In addition, successful applications of the RL framework to dialogue often rely on a predefined structure of the task, such as slot-filling tasks [williams2007partially] where the task can be cast as filling in a form.

In this paper, we present a global architecture for end-to-end RL optimization of a task-oriented dialogue system and its application to a multimodal task, grounding the dialogue in a visual context. To do so, we start from a corpus of 150k human-human dialogues collected via the recently introduced GuessWhat?! game [hvries2016]. The goal of the game is to locate an unknown object in a natural picture by asking a series of questions. This task is hard since it requires scene understanding and, more importantly, a dialogue strategy that leads to identifying the object rapidly. From these data, we first build a supervised agent and a neural training environment, which is then used to train a DeepRL agent online that is able to solve the task. We then quantitatively and qualitatively compare the performance of our system to a supervised approach on the same task, relative to a human baseline.

Figure 2: Oracle model.

In short, our contributions are:

  • to propose the first multimodal goal-directed dialogue system optimized via Deep RL;

  • to achieve 10% improvement on task completion over a supervised learning baseline.

2 GuessWhat?! game

Figure 3: Guesser model.

We briefly explain here the GuessWhat?! game that will serve as a task for our dialogue system, but refer to [hvries2016] for more details regarding the task and the exact content of the dataset. It is composed of more than 150k human-human dialogues in natural language collected through Mechanical Turk.

2.1 Rules

GuessWhat?! is a cooperative two-player game in which both players see the picture of a rich visual scene with several objects. One player – the oracle – is randomly assigned an object (which could be a person) in the scene. This object is not known by the other player – the questioner – whose goal is to locate the hidden object. To do so, the questioner can ask a series of yes-no questions which are answered by the oracle as shown in Fig 1. Note that the questioner is not aware of the list of objects and can only see the whole picture. Once the questioner has gathered enough evidence to locate the object, he may choose to guess the object. The list of objects is revealed, and if the questioner picks the right object, the game is considered successful.

2.2 Notation

Before we proceed, we establish the GuessWhat?! notations used throughout the rest of this paper. A game is defined by a tuple $(I, D, O, o^*)$ where $I \in \mathbb{R}^{H \times W}$ is a picture of height $H$ and width $W$, $D$ a dialogue with $J$ question-answer pairs $D = (q_j, a_j)_{j=1}^{J}$, $O = (o_k)_{k=1}^{K}$ a list of $K$ objects and $o^*$ the target object. Moreover, each question $q_j = (w_j^i)_{i=1}^{I_j}$ is a sequence of length $I_j$ with each token $w_j^i$ taken from a predefined vocabulary $V$. The vocabulary $V$ is composed of a predefined list of words, a question tag <?> that ends a question and a stop token <stop> that ends a dialogue. An answer is restricted to be either yes, no or not applicable, i.e. $a_j \in \{\text{yes}, \text{no}, \text{N/A}\}$. For each object $o_k$, an object category $c_k \in \{1, \dots, C\}$ and a pixel-wise segmentation mask $S_k \in \{0, 1\}^{H \times W}$ are available. Finally, to access subsets of a list, we use the following notation: if $l = (l_j^i)$ is a double-subscript list, then $l_j^{1:i}$ are the first $i$ elements of the $j$-th sublist if $i \ge 1$, and the empty list otherwise. Thus, for instance, $w_j^{1:i}$ refers to the first $i$ tokens of the $j$-th question and $(q, a)_{1:j}$ refers to the first $j$ question-answer pairs of a dialogue.

Figure 4: Question generation model.

3 Training environment

From the GuessWhat?! dataset, we build a training environment that allows RL optimization of the questioner task by creating models for the oracle and guesser tasks. We also describe the supervised learning baseline to which we will compare. This mainly reproduces baselines introduced in [hvries2016].

Question generation baseline

We split the questioner's job into two different tasks: one for asking the questions and another for guessing the object. The question generation task requires producing a new question $q_{J+1}$, given an image $I$ and a history of $J$ questions and answers $(q, a)_{1:J}$. We model the question generator (QGen) with a recurrent neural network (RNN), which produces a sequence of RNN state vectors $s_{1:T}$ for a given input sequence $w_{1:T}$ by applying the transition function $f$: $s_{t+1} = f(s_t, w_t)$. We use the popular long-short term memory (LSTM) cell [hochreiter1997long] as our transition function. In order to construct a probabilistic sequence model, one can add a softmax function that computes a distribution over tokens $w$ from vocabulary $V$. In the case of GuessWhat?!, this output distribution is conditioned on all previous question and answer tokens as well as the image $I$:

$p\big(w_{J+1}^i \mid w_{J+1}^{1:i-1}, (q, a)_{1:J}, I\big)$   (1)

We condition the model on the image by obtaining its VGG16 FC8 features and concatenating them to the input embedding at each step, as illustrated in Fig. 4. We train the model by minimizing the conditional negative log-likelihood:

$-\log p(q_{1:J} \mid I) = -\sum_{j=1}^{J} \sum_{i=1}^{I_j} \log p\big(w_j^i \mid w_j^{1:i-1}, (q, a)_{1:j-1}, I\big)$

At test time, we can generate a sample from the model as follows. Starting from the state encoding the dialogue so far, we sample a new token from the output distribution and feed the embedded token back as input to the RNN. We repeat this loop until we encounter an end-of-sequence token. To approximately find the most likely question, $\arg\max_{q_{J+1}} p\big(q_{J+1} \mid (q, a)_{1:J}, I\big)$, we use the commonly used beam-search procedure. This heuristic aims to find the most likely sequence of words by exploring a subset of all questions and keeping the $K$-most promising candidate sequences at each time step.
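
For concreteness, the snippet below sketches how such an image-conditioned LSTM generator can be wired together. It is an illustrative PyTorch sketch only, not the released GuessWhat?! code: the class name, layer dimensions and the choice of LSTMCell are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QGenSketch(nn.Module):
    """Image-conditioned LSTM language model (illustrative sketch)."""
    def __init__(self, vocab_size, emb_dim=512, img_dim=1000, hidden_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # VGG16 FC8 features (img_dim) are concatenated to every token embedding
        self.lstm = nn.LSTMCell(emb_dim + img_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, img_feat):
        # tokens: (batch, T) token ids; img_feat: (batch, img_dim) image features
        h = img_feat.new_zeros(tokens.size(0), self.lstm.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(tokens.size(1)):
            x = torch.cat([self.embed(tokens[:, t]), img_feat], dim=-1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))          # scores over the vocabulary, cf. Eq. (1)
        return torch.stack(logits, dim=1)

# toy usage with random data
qgen = QGenSketch(vocab_size=5000)
tokens = torch.randint(0, 5000, (2, 7))
image = torch.randn(2, 1000)
log_probs = F.log_softmax(qgen(tokens, image), dim=-1)  # used for the NLL loss
```

At test time, sampling or beam search would simply be run on top of these per-step distributions.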

Oracle

The oracle task requires producing a yes-no answer for any object within a picture, given a natural language question. We outline here the neural network architecture that achieved the best performance and refer to [hvries2016] for a thorough investigation of the impact of other object and image information. First, we embed the spatial information of the crop by extracting an 8-dimensional vector of the location of the bounding box $[x_{min}, y_{min}, x_{max}, y_{max}, x_{center}, y_{center}, w_{box}, h_{box}]$, where $w_{box}$ and $h_{box}$ denote the width and height of the bounding box, respectively. We normalize the image height and width such that coordinates range from $-1$ to $1$, and place the origin at the center of the image. Second, we convert the object category $c^*$ into a dense category embedding using a learned look-up table. Finally, we use an LSTM to encode the current question $q$. We then concatenate all three embeddings into a single vector and feed it as input to a single hidden layer MLP that outputs the final answer distribution $p(a \mid q, c^*, x_{spatial})$ using a softmax layer, as illustrated in Fig. 2.
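
As a rough illustration of this architecture (our own sketch with assumed layer sizes, not the authors' implementation), the oracle can be put together as follows:

```python
import torch
import torch.nn as nn

def spatial_features(x_min, y_min, x_max, y_max, img_w, img_h):
    """8-dimensional bounding-box encoding with coordinates normalized to
    [-1, 1] and the origin at the image center (illustrative version)."""
    x_min, x_max = 2.0 * x_min / img_w - 1.0, 2.0 * x_max / img_w - 1.0
    y_min, y_max = 2.0 * y_min / img_h - 1.0, 2.0 * y_max / img_h - 1.0
    x_c, y_c = (x_min + x_max) / 2, (y_min + y_max) / 2
    return torch.tensor([x_min, y_min, x_max, y_max, x_c, y_c,
                         x_max - x_min, y_max - y_min])

class OracleSketch(nn.Module):
    """Question LSTM + category embedding + spatial vector -> yes/no/N/A."""
    def __init__(self, vocab_size, num_categories, emb_dim=512, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.question_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.cat_emb = nn.Embedding(num_categories, 256)
        self.mlp = nn.Sequential(nn.Linear(hidden_dim + 256 + 8, 128), nn.ReLU(),
                                 nn.Linear(128, 3))   # logits over {yes, no, N/A}

    def forward(self, question_tokens, category, spatial):
        # question_tokens: (batch, T); category: (batch,); spatial: (batch, 8)
        _, (h, _) = self.question_lstm(self.word_emb(question_tokens))
        features = torch.cat([h[-1], self.cat_emb(category), spatial], dim=-1)
        return self.mlp(features)
```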

Guesser

The guesser model takes an image $I$ and a sequence of questions and answers $(q, a)_{1:J}$, and predicts the correct object $o^*$ from the set of all objects. This model considers a dialogue as one flat sequence of question-answer tokens and uses the last hidden state of an LSTM encoder as the dialogue representation. We perform a dot-product between this representation and the embedding of each object in the image, followed by a softmax to obtain a prediction distribution over the objects. The object embeddings are obtained from the categorical and spatial features. More precisely, we concatenate the 8-dimensional spatial representation and the object category look-up and pass it through an MLP layer to get an embedding for the object. Note that the MLP parameters are shared to handle the variable number of objects in the image. See Fig. 3 for an overview of the guesser.
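
A minimal sketch of this scoring scheme, assuming the same hypothetical PyTorch setting as above (names and sizes are ours), could look like this:

```python
import torch
import torch.nn as nn

class GuesserSketch(nn.Module):
    """Scores every candidate object against a dialogue embedding."""
    def __init__(self, vocab_size, num_categories, emb_dim=512, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.dialogue_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.cat_emb = nn.Embedding(num_categories, 256)
        # shared MLP: the same weights embed every object, so the model
        # handles a variable number of objects per image
        self.obj_mlp = nn.Sequential(nn.Linear(256 + 8, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim))

    def forward(self, dialogue_tokens, obj_categories, obj_spatial):
        # dialogue_tokens: (batch, T); obj_categories: (batch, K); obj_spatial: (batch, K, 8)
        _, (h, _) = self.dialogue_lstm(self.word_emb(dialogue_tokens))
        dialogue = h[-1]                                          # (batch, hidden_dim)
        objects = self.obj_mlp(torch.cat([self.cat_emb(obj_categories),
                                          obj_spatial], dim=-1))  # (batch, K, hidden_dim)
        scores = torch.bmm(objects, dialogue.unsqueeze(-1)).squeeze(-1)
        return scores.log_softmax(dim=-1)                         # distribution over objects
```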

3.1 Generation of full games

With the question generation, oracle and guesser models, we have all components needed to simulate a full game. Given an initial image $I$, we generate a question $q_1$ by sampling tokens from the question generation model until we reach the question-mark token. Alternatively, we can replace the sampling procedure by a beam-search to approximately find the most likely question according to the generator. The oracle then takes the question $q_1$, the object category $c^*$ and the spatial information $x_{spatial}$ as inputs, and outputs the answer $a_1$. We append $(q_1, a_1)$ to the dialogue and repeat generating question-answer pairs until the generator emits a stop-dialogue token or the maximum number of question-answer pairs is reached. Finally, the guesser model takes the generated dialogue $D$ and the list of objects $O$ and predicts the correct object.
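
The simulation loop itself is straightforward; the following sketch uses hypothetical callables (`qgen_sample`, `oracle_answer`, `guesser_predict`) standing in for the three trained models, and the cap on the number of questions is an assumed value rather than one taken from the paper:

```python
def play_game(image, objects, target, qgen_sample, oracle_answer, guesser_predict,
              max_questions=8):
    """Simulate one GuessWhat?! game; returns (success, generated dialogue)."""
    dialogue = []
    for _ in range(max_questions):
        question = qgen_sample(image, dialogue)       # list of tokens
        if question == ["<stop>"]:                    # generator ends the dialogue
            break
        answer = oracle_answer(question, target, image)
        dialogue.append((question, answer))
    prediction = guesser_predict(image, dialogue, objects)
    return prediction == target, dialogue
```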

4 GuessWhat?! from RL perspective

One of the drawbacks of training the QGen in a supervised learning setup is that its sequence of questions is not explicitly optimized to find the correct object. Such training objectives miss the planning aspect underlying (goal-oriented) dialogues. In this paper, we propose to cast the question generation task as a RL task. More specifically, we use the training environment described before and consider the oracle and the guesser as part of the RL agent environment. In the following, we first formalize the GuessWhat?! task as a Markov Decision Process (MDP) so as to apply a policy gradient algorithm to the QGen problem.

4.1 GuessWhat?! as a Markov Decision Process

We define the state $x_t$ as the status of the game at step $t$. Specifically, we define $x_t = \big(w_j^{1:i}, (q, a)_{1:j-1}, I\big)$, where $t = \sum_{j'=1}^{j-1} I_{j'} + i$ corresponds to the number of tokens generated since the beginning of the dialogue. An action $u_t$ corresponds to selecting a new word $w_j^{i+1}$ in the vocabulary $V$. The transition to the next state depends on the selected action:

  • If $w_j^{i+1} = \text{<stop>}$, the full dialogue is terminated.

  • If $w_j^{i+1} = \text{<?>}$, the ongoing question is terminated and an answer $a_j$ is sampled from the oracle. The next state is $x_{t+1} = \big((q, a)_{1:j}, I\big)$ where $q_j = w_j^{1:i+1}$.

  • Otherwise, the new word is appended to the ongoing question and $x_{t+1} = \big(w_j^{1:i+1}, (q, a)_{1:j-1}, I\big)$.

Questions are automatically terminated after $I_{max}$ words. Similarly, dialogues are terminated after $J_{max}$ questions. Furthermore, a reward $r(x_t, u_t)$ is defined for every state-action pair. A trajectory $\tau = \big(x_t, u_t, x_{t+1}, r(x_t, u_t)\big)_{1:T}$ is a finite sequence of tuples of length $T \le T_{max}$, each containing a state, an action, the next state and the reward. Thus, the game falls into the episodic RL scenario as the dialogue terminates after a finite sequence of question-answer pairs. Finally, the QGen output can be viewed as a stochastic policy $\pi_\theta(u \mid x)$ parametrized by $\theta$ which associates a probability distribution over the actions (i.e. words) with each state (i.e. intermediate dialogue and picture).
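
To make the transition rule concrete, here is a small illustrative sketch of a token-level environment step; the state layout and the `oracle_answer` handle are hypothetical, not part of the paper's code:

```python
def step(state, action, oracle_answer, target):
    """One MDP transition: append a word, close a question, or end the dialogue."""
    question, dialogue, image = state                 # (w_j^{1:i}, (q,a)_{1:j-1}, I)
    if action == "<stop>":                            # the full dialogue terminates
        return None, True
    if action == "<?>":                               # question ends: query the oracle
        question = question + [action]
        answer = oracle_answer(question, target, image)
        return ([], dialogue + [(question, answer)], image), False
    return (question + [action], dialogue, image), False   # keep building the question
```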

4.2 Training the QGen with Policy Gradient

While several approaches exist in the RL literature, we opt for policy gradient methods because they are known to scale well to large action spaces. This is especially important in our case because the vocabulary size is nearly 5k words. The goal of policy optimization is to find a policy $\pi_\theta$ that maximizes the expected return, also known as the mean value:

$J(\theta) = \mathbb{E}_{\pi_\theta}\Big[\sum_{t=1}^{T} \gamma^{t-1} r(x_t, u_t)\Big]$   (2)

where $\gamma \in [0, 1]$ is the discount factor, $T$ the length of the trajectory and the starting state $x_1$ is drawn from a distribution $p_1$. Note that $\gamma = 1$ is allowed as we are in the episodic scenario [sutton1999policy]. To improve the policy, its parameters can be updated in the direction of the gradient of the mean value:

$\theta_{h+1} = \theta_h + \alpha_h \nabla_{\theta} J(\theta_h)$   (3)

where $h$ denotes the training time-step and $\alpha_h$ is a learning rate such that $\sum_{h=1}^{\infty} \alpha_h = \infty$ and $\sum_{h=1}^{\infty} \alpha_h^2 < \infty$.

Thanks to the policy gradient theorem [sutton1999policy], the gradient of the mean value can be estimated from a batch of trajectories $\mathcal{T}_h$ sampled from the current policy $\pi_{\theta_h}$ by:

$\nabla_{\theta} J(\theta_h) \approx \Big\langle \sum_{t=1}^{T} \sum_{u \in V} \nabla_{\theta} \pi_{\theta_h}(u \mid x_t)\,\big(Q^{\pi_{\theta_h}}(x_t, u) - b\big) \Big\rangle_{\mathcal{T}_h}$   (4)

where $\langle \cdot \rangle_{\mathcal{T}_h}$ denotes the empirical average over the batch of trajectories, $Q^{\pi_{\theta_h}}(x_t, u_t)$ is the state-action value function that estimates the cumulative expected reward for a given state-action couple, and $b$ is some arbitrary baseline function which can help reduce the variance of the gradient estimate. More precisely,

$Q^{\pi_{\theta_h}}(x, u) = \mathbb{E}_{\pi_{\theta_h}}\Big[\sum_{t'=t}^{T} \gamma^{t'-t} r(x_{t'}, u_{t'}) \,\Big|\, x_t = x,\, u_t = u\Big]$   (5)

Notice that the estimate in Eq. (4) only holds if the probability distribution $p_1$ of the initial state $x_1$ is uniformly distributed. The state-action value function $Q^{\pi_{\theta_h}}$ can then be estimated either by learning a function approximator (actor-critic methods) or by Monte-Carlo rollouts (REINFORCE [williams1992simple]). In REINFORCE, the inner sum over actions is estimated by using the actions from the trajectory. Therefore, Eq. (4) can be simplified to:

$\nabla_{\theta} J(\theta_h) \approx \Big\langle \sum_{t=1}^{T} \nabla_{\theta} \log \pi_{\theta_h}(u_t \mid x_t)\,\big(Q^{\pi_{\theta_h}}(x_t, u_t) - b\big) \Big\rangle_{\mathcal{T}_h}$   (6)

Finally, by using the GuessWhat?! game notation for Eq. (6), the policy gradient for the QGen can be written as:

$\nabla_{\theta} J(\theta_h) \approx \Big\langle \sum_{j=1}^{J} \sum_{i=1}^{I_j} \nabla_{\theta} \log \pi_{\theta_h}\big(w_j^i \mid w_j^{1:i-1}, (q, a)_{1:j-1}, I\big)\,\big(Q^{\pi_{\theta_h}}(x_t, u_t) - b\big) \Big\rangle_{\mathcal{T}_h}$   (7)
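
In an automatic-differentiation framework, this estimate is usually implemented as a surrogate loss whose gradient matches Eq. (7). The sketch below is our own illustration; the padding convention and tensor shapes are assumptions, not details taken from the paper:

```python
import torch

def reinforce_loss(log_probs, actions, rewards, baseline_values):
    """Surrogate loss whose gradient is the REINFORCE estimate of Eq. (7).

    log_probs       : (batch, T, vocab) log pi(w | state) for every generated step
    actions         : (batch, T) sampled token ids, padded with 0 after the dialogue ends
    rewards         : (batch,) zero-one game reward
    baseline_values : (batch,) predicted expected reward b(x)
    """
    mask = (actions != 0).float()                               # assumes padding id 0
    picked = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    advantage = (rewards - baseline_values).detach().unsqueeze(-1)
    # minimizing this loss ascends the policy gradient of the mean value
    return -(picked * advantage * mask).sum(dim=1).mean()
```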

4.3 Reward Function

One tedious aspect of RL is to define a correct and valuable reward function. As the optimal policy is the result of the reward function, one must be careful to design a reward that would not change the expected final optimal policy [ng1999policy]. Therefore, we put a minimal amount of prior knowledge into the reward function and construct a zero-one reward depending on the guesser’s prediction:

$r(x_t, u_t) = \begin{cases} 1 & \text{if } \operatorname{argmax}_{o \in O} \text{Guesser}(x_t) = o^* \text{ and } t = T \\ 0 & \text{otherwise} \end{cases}$   (8)

So, we give a reward of one if the correct object is found from the generated questions, and zero otherwise.

Note that the reward function requires the target object $o^*$ while it is not included in the state $x_t$. This breaks the MDP assumption that the reward should be a function of the current state and action. However, policy gradient methods, such as REINFORCE, are still applicable if the MDP is partially observable [williams1992simple].

4.4 Full training procedure

For the QGen, oracle and guesser, we use the model architectures outlined in section 3. We first independently train the three models with a cross-entropy loss. We then keep the oracle and guesser models fixed, while we train the QGen in the described RL framework. It is important to pretrain the QGen to kick-start training from a reasonable policy. The size of the action space is simply too big to start from a random policy.

In order to reduce the variance of the policy gradient, we implement the baseline $b_\phi$ as a function of the current state, parameterized by $\phi$. Specifically, we use a one-layer MLP which takes the LSTM hidden state of the QGen and predicts the expected reward. We train the baseline function by minimizing the Mean Squared Error (MSE) between the predicted reward and the discounted reward of the trajectory at the current time step:

$\min_{\phi}\, L(\phi) = \Big\langle \sum_{t=1}^{T} \Big[ b_\phi(x_t) - \sum_{t'=t}^{T} \gamma^{t'-t} r(x_{t'}, u_{t'}) \Big]^2 \Big\rangle_{\mathcal{T}_h}$   (9)
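
A possible implementation of this baseline, again as an illustrative sketch with assumed sizes rather than the authors' code, is a small regression head on top of the QGen hidden state:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineSketch(nn.Module):
    """Small MLP predicting the expected reward from the QGen LSTM state."""
    def __init__(self, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, lstm_state):
        return self.net(lstm_state).squeeze(-1)

def baseline_loss(baseline, lstm_states, returns):
    # Eq. (9): MSE between the predicted reward and the observed (discounted) return
    return F.mse_loss(baseline(lstm_states), returns)
```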

We summarize our training procedure in Algorithm 1.

1: Pretrained QGen, Oracle and Guesser
2: Batch size K
3: for each update do
4:     # Generate trajectories
5:     for k = 1 to K do
6:         Pick an image I_k and a target object o*_k ∈ O_k
7:         # Generate question-answer pairs (q, a)^k_{1:j}
8:         for j = 1 to J_max do
9:             q^k_j = QGen((q, a)^k_{1:j-1}, I_k)
10:             a^k_j = Oracle(q^k_j, o*_k, I_k)
11:             if <stop> ∈ q^k_j then
12:                 delete (q, a)^k_j and break
13:         p(o | ·) = Guesser((q, a)^k_{1:j}, O_k, I_k)
14:         r_k = 1 if argmax_o p(o | ·) = o*_k, else 0
15:     Define T_h = ((q, a)^k_{1:j}, I_k, r_k)_{1:K}
16:     Evaluate ∇_θ J(θ_h) with Eq. (7) using T_h
17:     SGD update of QGen parameters θ using ∇_θ J(θ_h)
18:     Evaluate ∇_φ L(φ_h) with Eq. (9) using T_h
19:     SGD update of baseline parameters φ using ∇_φ L(φ_h)
Algorithm 1: Training of QGen with REINFORCE
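
Putting the pieces together, a schematic version of Algorithm 1 in code could look as follows; `rollout_batch`, the loss helpers and all hyper-parameters are placeholders for whatever the surrounding experimental code provides, not values prescribed by the paper:

```python
import torch

def train_qgen(qgen, baseline, rollout_batch, policy_loss_fn, baseline_loss_fn,
               num_updates=1000, batch_size=64):
    """Schematic REINFORCE training loop mirroring Algorithm 1 (sketch only)."""
    opt_policy = torch.optim.SGD(qgen.parameters(), lr=1e-3)      # assumed learning rates
    opt_baseline = torch.optim.SGD(baseline.parameters(), lr=1e-3)
    for _ in range(num_updates):
        # 1. play batch_size games with the current policy against the frozen
        #    oracle and guesser; rollout_batch must return log-probs that are
        #    still attached to the computation graph
        log_probs, actions, rewards, states = rollout_batch(qgen, batch_size)
        # 2. policy-gradient step on the QGen (Eq. 7)
        opt_policy.zero_grad()
        policy_loss_fn(log_probs, actions, rewards, baseline(states).detach()).backward()
        opt_policy.step()
        # 3. regression step on the baseline (Eq. 9)
        opt_baseline.zero_grad()
        baseline_loss_fn(baseline, states, rewards).backward()
        opt_baseline.step()
```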
Figure 5: Task completion ratio of the REINFORCE-trained QGen for a given dialogue length.

5 Related work

Outside of the dialogue literature, RL methods have been applied to encoder-decoder architectures in machine translation [ranzato2015sequence, bahdanau2016actor] and image captioning [liu2016optimization]. In those scenarios, the BLEU score is used as a reward signal to fine-tune a network trained with a cross-entropy loss. However, the BLEU score is a surrogate for human evaluation of naturalness, so directly optimizing this measure does not guarantee improvement in translation/captioning quality. In contrast, our reward function encodes task completion, and optimizing this metric is exactly what we aim for. Finally, the BLEU score can only be used in a batch setting because it requires the ground-truth labels from the dataset. In GuessWhat?!, the computed reward is independent of the human-generated dialogues.

Although visually-grounded language models have been studied for a long time [roy2002learning], important breakthroughs in both visual and natural language understanding have led to a renewed interest in the field [lecun2015deep]. Image captioning [lin2014microsoft] and visual question answering [antol2015vqa] in particular have received much attention over the last few years, and encoder-decoder models [liu2016optimization, lu2016hierarchical] have shown promising results for these tasks. Only very recently have language grounding tasks been extended to a dialogue setting with the Visual Dialog [das2016visual] and GuessWhat?! [hvries2016] datasets. While Visual Dialog considers the chit-chat setting, the GuessWhat?! game is goal-oriented, which allows us to cast it into an RL framework.

6 Experiments

Image Beam Search REINFORCE Image Beam Search REINFORCE
Is it a person ? no Is it a person ? no Is it a cat ? no Is it a cat ? no
Is it a ball ? no Is a glove ? no Is it a book ? no Is it on the table ? yes
Is it a ball ? no Is an umbrella ? no Is it a book ? no Is it the book ? no
Is it a ball ? no Is in the middle ? no Is it a book ? no Is it fully visible? yes
Is it a ball ? no On a person? no Is it a book ? no
is it on on far right? yes
Failure (blue bat) Success (red chair) Failure (person) Success (bowl)
Is it a person ? yes Is it a person ? yes Is it a bag ? yes Is it a suitcase? yes
Is it the one in front ? yes Is it girl in white ? yes Is it red ? no Is it in the left side ? yes
Is it the one on the left ? no Is it the one in the middle ? no
Is it the one in the middle with the red umbrella ? yes Is it the one on the far right ? no
Is it the one to the right of the girl in ? no Is it the one with the blue bag ? yes
Failure (umbrella) Success (girl) Success (most left bag) Failure (left bag)
Table 1: Samples extracted from the test set. The blue (resp. purple) box corresponds to the object picked by the guesser for the beam-search (resp. REINFORCE) dialogue. The short description in parentheses indicates the object picked by the guesser.

We use the GuessWhat?! dataset (available at https://guesswhat.ai/download), which includes 155,281 dialogues containing 821,955 question/answer pairs drawn from a vocabulary of 4,900 words, on 66,537 unique images and 134,074 unique objects. The source code of our experiments is available at https://guesswhat.ai.

6.1 Training details

We pre-train the networks described in Section 3; after training, we evaluate the oracle and guesser networks on the test set and keep them fixed. Throughout the rest of this section we refer to the pretrained QGen as our baseline model. We then initialize our environment with the pre-trained models and train the QGen with REINFORCE for 80 epochs using plain stochastic gradient descent (SGD) with a learning rate of 0.001 and a batch size of 64. For each epoch, we sample each training image once and randomly choose one of its objects as the target. We simultaneously optimize the baseline parameters $\phi$ with SGD. Finally, we cap the maximum number of questions per dialogue and the maximum number of words per question.

6.2 Results

Accuracy

Since we are interested in human-level performance, we report the accuracies of the models as a percentage of human performance (84.4%), estimated from the dataset. We report the scores in Table 2, in which we compare sampling objects from the training set (New Objects) and from the test set (New Pictures), i.e. unseen pictures. We report the standard deviation over 5 runs in order to account for the sampling stochasticity. On the test set, training with REINFORCE markedly improves over the supervised baseline, and it also constitutes a significant improvement over the beam-search baseline. The beam-search procedure improves over sampling from the supervised baseline but, interestingly, lowers the score for REINFORCE.

                        New Objects       New Pictures
Baseline    Sampling    46.4% ± 0.2       45.0% ± 0.1
            Greedy      48.2% ± 0.1       46.9%
            BSearch     53.4% ± 0.0       53.0%
REINFORCE   Sampling
            Greedy      58.6% ± 0.0       57.5%
            BSearch     54.3% ± 0.1       53.2%
Table 2: Accuracies of the QGen trained with supervised learning (Baseline) and with REINFORCE, reported as a percentage of human performance. New Objects refers to uniformly sampling objects within the training set, while New Pictures refers to the test set.

Samples

We qualitatively compare the two methods by analyzing a few generated samples, as shown in Table 1. We observe that the beam-search baseline trained in a supervised fashion keeps repeating the same questions, as can be seen in the two top examples of Tab. 1. We noticed this behavior especially on the test set, i.e. when confronted with unseen pictures, which may highlight some generalization issues. We also find that the beam-search baseline generates longer questions on average than REINFORCE. This qualitative difference is clearly visible in the bottom-left example, which also highlights that the supervised baseline sometimes generates visually relevant but incoherent sequences of questions. For instance, asking "Is it the one to the right of the girl in?" is not a very logical follow-up of "Is it the one in the middle with the red umbrella?". In contrast, REINFORCE seems to implement a more grounded and relevant strategy: "Is it girl in white?" is a reasonable follow-up to "Is it a person?". In general, we observe that REINFORCE favors enumerating object categories ("is it a person?") or absolute spatial information ("Is it left?"). Note that these are also the types of questions that the oracle is expected to answer correctly; hence, REINFORCE is able to tailor its strategy towards the strengths of the oracle.

Dialogue length

For the REINFORCE-trained QGen, we investigate the impact of the dialogue length on the success ratio in Fig. 5. Interestingly, REINFORCE learns to stop the dialogue after a few questions on average, although we did not encode a question penalty into the reward function. This policy may be enforced by the guesser, since asking additional but noisy questions greatly lowers its prediction accuracy, as shown in Tab. 1. Therefore, the QGen learns to stop asking questions when a dialogue contains enough information to retrieve the target object. However, we observe that the QGen sometimes stops too early, especially when the image contains too many objects of the same category. Interestingly, we also found that beam-search fails to stop the dialogue. Beam-search uses a length-normalized log-likelihood to score candidate sequences in order to avoid a bias towards shorter questions. However, questions in GuessWhat?! almost always start with "is it", which increases the average log-likelihood of a question significantly. The score of a new question may thus (almost) always be higher than that of emitting a single stop token. This finding was further confirmed by the fact that the sampling procedure did stop the dialogue.
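
The effect can be illustrated with a toy computation of the length-normalized score; all probabilities below are made up for illustration:

```python
import math

def length_normalized_score(token_log_probs):
    """Length-normalized log-likelihood used to rank beam-search candidates."""
    return sum(token_log_probs) / len(token_log_probs)

# A frequent question prefix such as "is it ..." keeps the per-token average high,
# so extending the dialogue with yet another question can outscore emitting a
# single <stop> token.
another_question = [math.log(p) for p in (0.9, 0.8, 0.6, 0.5, 0.7)]   # "is it a person ?"
stop_token       = [math.log(0.3)]                                    # "<stop>"
print(length_normalized_score(another_question))  # ≈ -0.38
print(length_normalized_score(stop_token))        # ≈ -1.20
```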

Vocabulary

Sampling from the supervised baseline on the test set yields 2,893 unique words, while sampling from the REINFORCE-trained model reduces the vocabulary to 1,194 unique words. Beam search uses only 512 unique words, which is consistent with the observed poor variety of its questions.

7 Conclusion

In this paper, we proposed to build a training environment from supervised deep learning baselines in order to train a DeepRL agent to solve a goal-oriented multi-modal dialogue task. We show the promise of this approach on the GuessWhat?! dataset, and observe quantitatively and qualitatively an encouraging improvement over a supervised baseline model. While supervised learning models fail to generate a coherent dialogue strategy, our method learns when to stop after generating a sequence of relevant questions.

Acknowledgement

The authors would like to acknowledge the stimulating environment provided by the SequeL labs. We acknowledge the following agencies for research funding and computing support: CHISTERA IGLU and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020, NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.

References