Learning Goal-Oriented Visual Dialog Agents: Imitating and Surpassing Analytic Experts

07/24/2019
by   Yen-Wei Chang, et al.
National Chiao Tung University

This paper tackles the problem of learning a questioner in the goal-oriented visual dialog task. Several previous works adopt model-free reinforcement learning. Most pretrain the model from a finite set of human-generated data. We argue that using limited demonstrations to kick-start the questioner is insufficient due to the large policy search space. Inspired by a recently proposed information theoretic approach, we develop two analytic experts to serve as a source of high-quality demonstrations for imitation learning. We then take advantage of reinforcement learning to refine the model towards the goal-oriented objective. Experimental results on the GuessWhat?! dataset show that our method has the combined merits of imitation and reinforcement learning, achieving the state-of-the-art performance.


1 Introduction

Research on goal-oriented visual dialogue [1, 5] has recently attracted considerable attention. Unlike conventional VQA [8], where the robot answerer has to answer any question about an input image raised by a human, even if the question itself is ambiguous or indefinite, goal-oriented visual dialogue extends the question-answering interaction to multiple rounds, turning the robot into a questioner as well, one that can actively retrieve more information from the human. This challenging task calls for the robot’s ability to reason over both visual scenes and the textual dialogue history.

For developing such goal-oriented visual dialogue systems, GuessWhat?! is one commonly used dataset. It specifies a goal-oriented 2-player object guessing game, where a questioner (a robot) tries to figure out the target object in an image chosen by an answerer (normally a human). The questioner must learn to ask critical questions to the answerer in order to get useful information for identifying the target object. A real-life GuessWhat?! scenario is illustrated in Fig. 1.

Some previous works [3, 7] address this GuessWhat?! task by applying reinforcement learning (RL) to train the questioner. The process often involves pretraining the questioner from a finite set of human-generated data. However, due to the large policy search space, they struggle with reaching satisfactory performance.

Figure 1: Illustration of a real-life GuessWhat?! scenario where a boy who is new to the baseball game asks an ambiguous question “What is the man doing?” and the robot tries to figure out the man the boy is talking about by asking back discriminative questions.

Recently, an information-theoretic approach, Answerer in Questioner’s Mind (AQM) [9], was proposed. Unlike the RL-based methods, AQM constructs an approximate model of the answerer so that it can be queried by the questioner to predict which question would induce an answer that minimizes the uncertainty about the target object. Although AQM outperforms similar prior works by a large margin, its performance is highly variable, depending on how accurate the approximate model is.

In this work, we leverage the merits of both approaches. We make use of the probabilistic framework in [9] to devise an Information Gain Expert and a Target Posterior Expert. These experts are queried to provide virtually unlimited expert demonstrations for pretraining the questioner. Since they are not perfect experts, we refine the model using the REINFORCE algorithm to discover an even better policy. Extensive experiments confirm the superiority of our method over several state-of-the-art baselines in terms of prediction accuracy and robustness to the model approximation error.

2 Related Work

Goal-Oriented Visual Dialogue. GuessWhat?! [5] is a collaborative 2-player visually grounded object discovery game. The game begins with presenting an image $I$ of a rich visual scene containing multiple objects to both players, the questioner and the answerer. The answerer first picks in mind an object $o^*$, which is unknown to the questioner. The questioner then tries to identify the target object by asking a yes-no question $q_t$ to the answerer in each round, who responds with an answer $a_t$ of yes, no or not applicable. The process continues for $T$ iterations, forming a QA history $H_T = \{(q_1, a_1), \ldots, (q_T, a_T)\}$. Once the questioner finishes asking questions, the list of candidate objects $O$ is finally shown to the questioner. The questioner then makes a guess of the target object based on $H_T$, $O$ and $I$. The game is considered successful if the target object is correctly identified.

Model-Free Reinforcement Learning (RL). To train a questioner for solving the GuessWhat?! game, [3, 7] construct an “oracle” network to mimic the answerer’s behavior, regard it as part of the environment in the reinforcement learning setup, and then apply the REINFORCE algorithm (or Monte Carlo policy gradient). The questioner learns to ask critical questions that help identify the target object by interacting with the oracle. In the training process, the questioner is rewarded when it successfully identifies the target object after a few QA iterations. The term “model-free” refers to the fact that the questioner has no knowledge of the oracle model. Typically, to converge quickly to a good policy, the questioner is pretrained in a supervised manner on human-generated data.

Answerer in Questioner’s Mind (AQM). AQM [9] adopts an information-theoretic approach to design the questioner. Unlike the RL-based methods, AQM aims, in each QA round, to ask the question that would induce an answer minimizing the uncertainty about the target object. This notion is formalized as asking the question that maximizes the conditional mutual information between the target object and the answer. The process involves modeling a posterior distribution over the candidate objects at each iteration given the current question $q_t$, the previous QA history $H_{t-1}$ and the image $I$, as well as modeling the oracle’s behavior, characterized by the conditional distribution $p(a \mid o^*, q_t, H_{t-1}, I)$ of its response $a$, where $o^*$ is the target object. (In AQM, the oracle is implemented in such a way that the answer is conditionally independent of the QA history $H_{t-1}$ and the image $I$ given the target object $o^*$ and the current question $q_t$; that is, $p(a \mid o^*, q_t, H_{t-1}, I) = p(a \mid o^*, q_t)$.) Since the true oracle is not accessible in reality, AQM calls for an approximate model of the oracle, explaining the origin of its name, “Answerer in Questioner’s Mind”. Currently, AQM uses a selection of predefined questions instead of an open question set. It achieves state-of-the-art performance in the GuessWhat?! game.

Imitation Learning (IL). In IL [4, 10, 11, 12], the learner tries to achieve the best performance by mimicking the expert’s moves. In learning a questioner for the GuessWhat?! game, pretraining the model (the learner) in a supervised manner on human-generated data is itself a form of imitation learning. However, direct behavior cloning from a finite set of the expert’s demonstrations is impractical, since the learner only learns how to behave in states that have been visited by the expert. It may fail to generalize to states never seen by the expert, leading to the so-called cascading error in the long run, i.e. the state trajectories traversed by the learner may end up deviating significantly from those traversed by the expert. To mitigate the cascading error, [10] suggests that the learner should also learn the expert’s actions over states visited by the learner itself, in addition to those visited by the expert. However, this requires an expert, e.g. a human subject, to be queried whenever needed, which is often impractical.

Figure 2: The MDP formulation of the GuessWhat?! game.

3 Method

To overcome the problems faced by these previous works, our proposed method obtains a better questioner by learning from analytic experts, which provide virtually unlimited demonstrations, and by taking advantage of RL to discover an even better policy than the experts’, which suffer inherently from imperfect modeling of the oracle. In this section, we give a formal treatment of the proposed method, starting from formalizing the GuessWhat?! game as a Markov decision process, to learning the questioner by imitation learning and policy gradient, and ending with implementation details.

3.1 GuessWhat?! as a Markov Decision Process

We formalize the GuessWhat?! game as a Markov decision process (MDP). We start by viewing the questioner as a two-part system comprising (1) a question generator, which asks a question $q_t$ in the $t$-th QA round based on the previous dialog history $H_{t-1}$ and the given image $I$, and (2) a guesser, which makes a guess of the target object by observing the list of candidate objects $O$, the dialog history $H_T$ and the image $I$. From this perspective, the interaction between the questioner and the oracle (or the answerer) in $T$ rounds of QA can be formalized as an MDP $(\mathcal{S}, \mathcal{A}, \mathcal{T}, r)$:

Input: question set Q from the Q-sampler; a pretrained oracle; number of QA rounds T
Initialize the question generator π_θ at random
Experts: IGE and TPE (Section 3.2.1)

procedure ImitationLearning (DAgger [10])
    Initialize dataset D ← ∅
    for i = 1 to N_IL do
        Randomly sample an image I, its candidate objects O and a target object o*
        for t = 1 to T do
            Compute the posterior p(o | H_{t-1}, I) by Eq. (2)
            Sample q_t ~ π_θ(· | H_{t-1}, I)            # learner's move
            Sample q_t^E from the expert                # expert's move
            Aggregate (H_{t-1}, I, q_t^E) into dataset D
            Sample a_t from the oracle for question q_t
            State evolves: H_t ← H_{t-1} ∪ {(q_t, a_t)}
        Train the classifier π_θ on D

Input: an approximate oracle for the guesser; the π_θ pretrained by imitation learning
procedure REINFORCE
    for i = 1 to N_RL do
        Randomly sample an image I, its candidate objects O and a target object o*
        for t = 1 to T do
            Sample q_t ~ π_θ(· | H_{t-1}, I)
            Sample a_t from the oracle for question q_t
            State evolves: H_t ← H_{t-1} ∪ {(q_t, a_t)}
        Episodic reward r ← 1 if the guesser identifies o*, otherwise r ← 0
        Compute ∇_θ J(θ) (cf. Eq. (4))
        Update θ ← θ + α ∇_θ J(θ)

Algorithm 1: Training of the question generator.
  • $s_t \in \mathcal{S}$ represents the state of the MDP. We define the state in the $t$-th QA round to be composed of the previous QA history $H_{t-1}$, the target object $o^*$, and the image $I$, i.e. $s_t = (H_{t-1}, o^*, I)$.

  • $q_t \in \mathcal{A}$ denotes the action taken by the questioner (or the agent in RL language) in state $s_t$. In the present case, the action is the question output by the question generator.

  • $\mathcal{T}(s_{t+1} \mid s_t, q_t)$ is the state transition probability, which is uniquely determined by the oracle’s behavior $p(a_t \mid o^*, q_t, H_{t-1}, I)$.

  • $r$ is the reward signal given to the questioner. We denote the immediate reward for the questioner taking action $q_t$ in state $s_t$ in the $t$-th QA round by $r_t(s_t, q_t)$. Since the game terminates after $T$ QA rounds, the interaction between the questioner and the oracle forms a state-action-reward sequence $s_1, q_1, r_1, \ldots, s_T, q_T, r_T$. The cumulative reward is then $R = \sum_{t=1}^{T} r_t(s_t, q_t)$.

It is worth noting that the state is not fully visible. According to the rules of the game, the questioner does not have access to the target object $o^*$. In this sense, the MDP is partially observable. We then have the question generator governed by $\pi(q_t \mid H_{t-1}, I)$; in other words, the question has a distribution that depends solely on the visible observations $(H_{t-1}, I)$. As will be seen shortly, we further implement the question generator by a neural network $\pi_\theta(q_t \mid H_{t-1}, I)$ parameterized by $\theta$.
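To make this formulation concrete, the following minimal Python sketch rolls out one episode of the game under the MDP view above; the `oracle`, `question_generator`, and `guesser` callables are hypothetical stand-ins for the models described later, not the paper's exact interfaces.

```python
# Minimal sketch of one GuessWhat?! episode under the MDP view of Section 3.1.
# `oracle`, `question_generator`, and `guesser` are hypothetical callables.

def play_episode(image, candidate_objects, target, oracle, question_generator, guesser, T):
    history = []                                   # H_0: empty QA history
    for t in range(T):
        # Action: the questioner only sees the visible observation (H_{t-1}, I).
        question = question_generator(history, image)
        # State transition: fully determined by the oracle's answer distribution.
        answer = oracle(target, question)          # "yes" / "no" / "n/a"
        history.append((question, answer))         # H_t = H_{t-1} + {(q_t, a_t)}
    # After T rounds the candidate list is revealed and the guesser acts.
    guess = guesser(history, image, candidate_objects)
    reward = 1.0 if guess == target else 0.0       # sparse episodic reward
    return history, guess, reward
```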

3.2 Learning

The learning of the question generator is done in two sequential phases, the imitation learning (IL) phase and the reinforcement learning (RL) phase. The former pretrains the question generator by imitating experts, while the latter discovers a better policy than the experts’. In the IL phase, we have access to the true oracle, which we use to construct two analytic experts that solve the game, and we pretrain the question generator by imitating these experts. In the RL phase, we further equip the questioner with an approximate oracle and apply RL to direct the learning objective towards successful identification of the target object. The following subsections expand on these two learning phases.

3.2.1 Imitation Learning from Analytic Experts

For IL, we assume that we have full knowledge of the oracle (or the environment in RL language). That is, we have control over the target object $o^*$ and access to the oracle’s behavior $p(a_t \mid o^*, q_t)$, which implies that the state transition probability of the MDP is known. This allows us to produce virtually unlimited demonstrations by having any expert or planner interact with the oracle. To automate the generation process, we introduce two analytic experts as follows.

Information Gain Expert (IGE). Inspired by AQM [9], IGE calculates in each QA round the information gain $\tilde{I}[o; a \mid q, H_{t-1}, I]$, defined as the conditional mutual information between the object $o$ and the answer $a$ given the question $q$, the QA history $H_{t-1}$ and the image $I$, for every question in the question set $Q$ and chooses the one with the maximum information gain to be its action:

$$q_t = \arg\max_{q \in Q} \tilde{I}[o; a \mid q, H_{t-1}, I] = \arg\max_{q \in Q} \sum_{o \in O} \sum_{a} p(o \mid H_{t-1}, I)\, p(a \mid o, q) \log \frac{p(a \mid o, q)}{\sum_{o' \in O} p(o' \mid H_{t-1}, I)\, p(a \mid o', q)}, \qquad (1)$$

where the posterior distribution $p(o \mid H_t, I)$ is iteratively updated following

$$p(o \mid H_t, I) \propto p(a_t \mid o, q_t)\, p(o \mid q_t, H_{t-1}, I) \propto p(a_t \mid o, q_t)\, p(o \mid H_{t-1}, I). \qquad (2)$$

The last proportionality holds because the question generator has no access to the target object, i.e. $p(o \mid q_t, H_{t-1}, I) = p(o \mid H_{t-1}, I)$, a constraint that has been discussed in Section 3.1.

It is worth noting that evaluating the information gain in Eq. (1) and the posterior distribution in Eq. (2) involves the oracle model $p(a \mid o, q)$. Since the true oracle is unavailable in practice, an approximate model $\hat{p}(a \mid o, q)$ of the oracle is used instead. It is this approximation error that crucially affects the performance of IGE (and also of AQM) at test time.
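To make the expert concrete, the following NumPy sketch evaluates Eqs. (1) and (2) for a tabulated answer model; the array layout (`p_a_given_oq[q][o, a]` holding the approximate oracle probabilities) and the variable names are our own assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of the Information Gain Expert. `prior` holds the current posterior
# p(o | H_{t-1}, I) over candidate objects; p_a_given_oq[q] is an (objects x answers)
# table of the (approximate) oracle model p_hat(a | o, q) for each candidate question q.

def posterior_update(prior, p_a_given_o, answer_idx):
    """Bayes update of Eq. (2): p(o | H_t, I) ∝ p(o | H_{t-1}, I) * p_hat(a_t | o, q_t)."""
    post = prior * p_a_given_o[:, answer_idx]
    return post / post.sum()

def information_gain(prior, p_a_given_o):
    """Conditional mutual information I[o; a | q, H_{t-1}, I] used in Eq. (1)."""
    p_a = prior @ p_a_given_o                      # marginal p(a | q, H_{t-1}, I)
    joint = prior[:, None] * p_a_given_o           # joint p(o, a | q, H_{t-1}, I)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(joint > 0, np.log(p_a_given_o / p_a[None, :]), 0.0)
    return float((joint * log_ratio).sum())

def ige_question(prior, p_a_given_oq):
    """Eq. (1): pick the candidate question with maximum information gain."""
    gains = [information_gain(prior, p_a_given_oq[q]) for q in range(len(p_a_given_oq))]
    return int(np.argmax(gains))
```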

(a) Question generator model. (b) Oracle model.
Figure 3: The question generator and the oracle in our method.

Target Posterior Expert (TPE). TPE makes use of the knowledge about the target object $o^*$ to choose a question $q_t$ that maximizes the following posterior probability:

$$q_t = \arg\max_{q \in Q} p\big(o^* \mid H_{t-1}, (q, \tilde{a}_q), I\big), \qquad (3)$$

where $\tilde{a}_q$ denotes the answer to the candidate question $q$ given by the oracle model.

Note that the posterior distribution in Eq. (2) is also queried by the guesser inside the questioner to make a guess of the target object by $o^{\mathrm{guess}} = \arg\max_{o \in O} p(o \mid H_T, I)$.
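Continuing the NumPy sketch above, one possible reading of TPE and of the final guesser is given below; using the oracle model's most likely answer for each candidate question is an assumption on our part about how the hypothesized answer is filled in.

```python
# Sketch of the Target Posterior Expert (Eq. (3)). Unlike IGE, TPE exploits
# knowledge of the target object's index `target_idx`. Reuses posterior_update
# from the IGE sketch above.

def tpe_question(prior, p_a_given_oq, target_idx):
    best_q, best_post = None, -1.0
    for q, p_a_given_o in enumerate(p_a_given_oq):
        answer_idx = int(np.argmax(p_a_given_o[target_idx]))   # most likely answer for o* (assumption)
        post = posterior_update(prior, p_a_given_o, answer_idx)
        if post[target_idx] > best_post:
            best_q, best_post = q, post[target_idx]
    return best_q

def guess(prior):
    """The guesser simply returns the object with maximal posterior probability."""
    return int(np.argmax(prior))
```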

To sum up, both IGE and TPE are imperfect due to the need to incorporate an approximate oracle model for estimating the information gain and for making a guess. Moreover, they are greedy in that they amount to performing one-step dynamic programming. Recognizing their limitations, we put them to use in the IL framework of DAgger [10] for the sheer purpose of pretraining our question generator (see the imitation learning part of Algorithm 1).
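For reference, a stripped-down version of the imitation-learning loop of Algorithm 1 might look like the following; the policy interface (`sample`, `fit`), the `games` list, and the single `expert` callable (which hides the IGE/TPE schedule of Section 4.2) are hypothetical placeholders.

```python
import random

# DAgger-style imitation phase: the learner's own questions drive the rollout,
# while the expert labels every visited state; labeled states are aggregated
# and the policy is retrained on the growing dataset.

def imitation_learning(policy, expert, true_oracle, games, T, n_iters):
    dataset = []                                              # aggregated (state, expert action) pairs
    for _ in range(n_iters):
        image, objects, target = random.choice(games)
        history = []
        for t in range(T):
            learner_q = policy.sample(history, image)              # learner's move drives the rollout
            expert_q = expert(history, image, objects, target)     # expert's move (uses oracle knowledge)
            dataset.append((list(history), image, expert_q))       # aggregate into D
            answer = true_oracle(target, learner_q)
            history.append((learner_q, answer))
        policy.fit(dataset)                                   # supervised training on aggregated data
    return policy
```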

3.2.2 Reinforcement Learning with Policy Gradient

The RL phase is to refine the pretrained question generator to optimize for successful identification of the target object. We view the questioner, including the question generator and the guesser, as the agent and the true oracle as the environment.

To train the question generator part of the agent, we adopt REINFORCE [6], one of the on-policy policy gradient methods, because it scales well to large action spaces and can be applied to partially observable environments such as our case. Since the GuessWhat?! game is an episodic game, the objective is to find a question generation policy that maximizes the expected cumulative reward:

$$J(\theta) = \mathbb{E}_{q_t \sim \pi_\theta(\cdot \mid H_{t-1}, I)}\!\left[\sum_{t=1}^{T} r_t(s_t, q_t)\right]. \qquad (4)$$

The policy can be improved by performing gradient ascent [6]. The details are presented in Algorithm 1 (see the REINFORCE part).
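As a rough illustration, a single REINFORCE update on a batch of sampled episodes could be written as follows in PyTorch; the episode format and the absence of any baseline or variance-reduction trick are assumptions of this sketch, not details of the original training recipe.

```python
import torch

# One gradient-ascent step on J(theta) = E[ sum_t r_t ] of Eq. (4), estimated
# from sampled episodes. Each episode is (log_probs, episodic_reward), where
# log_probs is a list of log pi_theta(q_t | H_{t-1}, I) for the sampled questions.

def reinforce_update(optimizer, episodes):
    loss = torch.tensor(0.0)
    for log_probs, episodic_reward in episodes:
        # REINFORCE estimator: -R * sum_t log pi_theta(q_t | H_{t-1}, I)
        loss = loss - episodic_reward * torch.stack(log_probs).sum()
    loss = loss / len(episodes)
    optimizer.zero_grad()
    loss.backward()          # gradient of -J(theta); step() then performs ascent on J
    optimizer.step()
```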

3.3 Implementation

Question Generator. We model the question generator by a 5-layer fully connected neural network conditioned on the concatenation of the VGG16 FC8 features of the image $I$ and the last hidden state of an LSTM encoding the QA history $H_{t-1}$, as illustrated in Fig. 3 (a).
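A rough PyTorch sketch of such a question generator is given below; the hidden sizes, the ReLU activations, and the 200-question output are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class QuestionGenerator(nn.Module):
    """5-layer MLP over [image features ; last LSTM state of the QA history], cf. Fig. 3(a)."""

    def __init__(self, img_dim=1000, word_dim=300, hidden_dim=512, num_questions=200):
        super().__init__()
        self.history_encoder = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        dims = [img_dim + hidden_dim, 1024, 512, 256, 128]       # assumed widths
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(dims[-1], num_questions))        # 5 fully connected layers in total
        self.mlp = nn.Sequential(*layers)

    def forward(self, img_feat, history_embeds):
        _, (h_n, _) = self.history_encoder(history_embeds)       # last hidden state of the LSTM
        logits = self.mlp(torch.cat([img_feat, h_n[-1]], dim=-1))
        return torch.distributions.Categorical(logits=logits)    # distribution over candidate questions
```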

Oracle Network. We adopt the same oracle network as proposed in [3], where the concatenation of the spatial and categorical information of the target object together with the last state of the LSTM encoding the question is fed into a 2-layer fully connected neural network, as illustrated in Fig. 3 (b). The test error of our oracle implementation is 21.33%. Note that with such an implementation the answer is conditionally independent of the previous QA history and the image, i.e. $p(a \mid o^*, q, H_{t-1}, I) = p(a \mid o^*, q)$.
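A corresponding sketch of the oracle network, again with assumed feature dimensions, could look like this:

```python
import torch
import torch.nn as nn

class OracleNet(nn.Module):
    """2-layer MLP over [question encoding ; object category embedding ; spatial features], cf. Fig. 3(b)."""

    def __init__(self, word_dim=300, hidden_dim=512, num_categories=90, spatial_dim=8):
        super().__init__()
        self.question_encoder = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.category_embed = nn.Embedding(num_categories, 256)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim + 256 + spatial_dim, 256), nn.ReLU(),
            nn.Linear(256, 3),                               # yes / no / n-a
        )

    def forward(self, question_embeds, category_idx, spatial_feat):
        _, (h_n, _) = self.question_encoder(question_embeds)
        x = torch.cat([h_n[-1], self.category_embed(category_idx), spatial_feat], dim=-1)
        return self.mlp(x)                                   # answer logits, independent of H and I
```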

(a) Previous RL methods. (b) Ours.
Figure 4: Comparison of the questioner designs.
         trueA                      depA                       indA
         AQM [9]  IGE     Ours      AQM [9]  IGE     Ours      AQM [9]  IGE     Ours
 1q      -        32.78   31.92     -        31.72   31.39     -        32.29   31.15
 2q      -        56.26   48.25     -        54.71   47.58     -        52.83   45.65
 3q      63.76    70.71   60.18     63.63    70.23   55.73     -        63.91   54.26
 4q      -        77.32   68.27     -        76.60   67.77     -        67.94   64.27
 5q      -        80.00   69.32     72.89    78.58   68.65     66.73    69.77   66.95
 6q      -        81.03   75.86     -        79.22   75.55     -        70.40   71.52
 7q      -        81.56   78.07     -        79.29   77.26     -        70.94   73.60
 8q      -        81.79   80.96     -        79.44   79.13     -        71.15   74.85
 9q      -        82.09   81.65     -        79.71   80.49     -        71.26   75.83
10q      81.96    82.24   82.45     78.72    79.71   80.68     -        71.43   76.31

Additional baselines: Strub et al. [3]: 58.4, Zhao et al. [7]: 74.31, Human [3]: 84.4.
Table 1: The testing prediction accuracy (in percentage) on GuessWhat?! dataset. Numbers are highlighted in boldface when our method outperforms IGE.

4 Experiment

This section compares the proposed method with AQM, IGE, and a few other state-of-the-art baselines on the GuessWhat?! dataset in terms of prediction accuracy. We follow the settings in [9] to test the robustness of different methods to the oracle approximation error. We conclude with a qualitative evaluation.

4.1 Settings

Dataset. The GuessWhat?! dataset consists of 155K dialogues including 822K QA pairs on 67K unique images and 134K unique objects. The dataset is split randomly by assigning 70%, 15% and 15% of the images and their corresponding dialogues to the training, validation and test set, respectively.

Q-Sampler. We use the countQ strategy described in [9] to sample 200 questions from the training data into the question set $Q$. These questions are selected to be approximately independent of each other, in the sense that the probability of the answers to any two distinct questions being identical is no more than 0.95. We additionally require that the selected questions appear at least 3 times in the training data.
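A possible implementation of this selection step is sketched below; the pre-computed `answers` matrix (per-question answer patterns over a common set of probe objects) and the frequency-ordered scan are our assumptions about how a countQ-style filter can be realized.

```python
import numpy as np

# Select up to `num_q` frequent, mutually non-redundant questions.
# answers[q]: vector of yes/no answers for question q over a fixed probe set.
# counts[q]:  frequency of question q in the training data.

def sample_questions(answers, counts, num_q=200, min_count=3, max_agreement=0.95):
    selected = []
    for q in np.argsort(-counts):                      # consider frequent questions first (assumption)
        if counts[q] < min_count:
            break
        # Keep q only if its answers agree with every already-selected question
        # on at most `max_agreement` of the probes, i.e. the questions are not redundant.
        redundant = any(np.mean(answers[q] == answers[s]) > max_agreement for s in selected)
        if not redundant:
            selected.append(int(q))
        if len(selected) == num_q:
            break
    return selected
```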

Approximate Oracle. To investigate how the non-ideal oracle model adopted by the questioner would affect prediction accuracy, we follow [9] to conduct experiments for 3 different approximate oracles: (1) indA, which is trained from the training data, (2) depA, which is trained from the images and questions in the training data together with answers given by the true oracle, and (3) trueA, which is a copy of the true oracle. Recall that the true oracle is a model of the answerer. The depA simulates the case where the questioner builds a model of the environment, i.e. the answerer, by learning online through interacting with the answerer.

Reward. During RL training, the questioner receives no immediate reward; only when its guesser successfully locates the target object at the end of the $T$ QA rounds is a reward of 1 given.

4.2 Training Details

Progressive Training. We first train the questioner with IL for several iterations and then refine it with RL. Observing that IGE’s performance begins to saturate around the 5th QA round, we have the questioner imitate the analytic experts in the first 5 QA rounds before entering the RL phase. In RL training, separate questioners are obtained progressively for increasing numbers of QA rounds; that is, the questioner to be trained for $T$ QA rounds is initialized with the parameters that have already been trained for $T-1$ QA rounds.
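In code, this progressive schedule might be as simple as the following sketch, where `rl_finetune` stands in for the REINFORCE refinement of Section 3.2.2 and the round schedule is left as an argument:

```python
import copy

# Progressive RL: the questioner for T rounds starts from the checkpoint
# trained for T-1 rounds; the first entry starts from the IL-pretrained policy.

def progressive_training(pretrained_policy, rl_finetune, rounds):
    policies = {}
    current = pretrained_policy                     # obtained from the imitation-learning phase
    for T in rounds:                                # e.g. increasing numbers of QA rounds
        current = rl_finetune(copy.deepcopy(current), num_rounds=T)
        policies[T] = current                       # next iteration starts from this checkpoint
    return policies
```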

Mixture of Experts. Motivated by the intuition that human players tend to ask dividing, or binary, questions such as “Is it in the left side of the image?” in the early rounds of the game, followed by more specific questions such as “Is it the 3rd blue vase counting from the left?” once they acquire some confidence about the target object, we arrange for the questioner to mimic IGE in the first 4 rounds and TPE in the 5th round, as sketched below. Compared to using IGE alone in all 5 rounds, this mixture-of-experts design gives a 4.31% boost to the validation prediction accuracy after IL. The performance also deteriorates when the questioner is allowed to mimic TPE’s actions in early rounds. Since TPE exploits the information of the target object, it tends to ask identifying questions that directly relate to it, resulting in an enumerating strategy. For example, if the target object is an apple, TPE may select “Is it an apple?” as its very first question, which amounts to a linear search and is generally not a good strategy.
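Reusing the IGE and TPE sketches from Section 3.2.1, the expert schedule reduces to a round-dependent switch (round indices starting at 1 are an implementation assumption):

```python
# Expert schedule used during imitation learning: IGE supplies the demonstrations
# in the first four rounds, TPE in the fifth.

def mixed_expert(round_idx, prior, p_a_given_oq, target_idx):
    if round_idx <= 4:
        return ige_question(prior, p_a_given_oq)            # dividing / binary questions early on
    return tpe_question(prior, p_a_given_oq, target_idx)    # target-specific questions once confident
```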

(Game image omitted.)

Ground Truth                        IGE                         Ours
A person? Yes.                      A person? Yes.              A person? Yes.
The left most? No.                  The furthest left? No.      Is in right side? No.
The 2nd from left to right? Yes.    Boy on left? Yes.           Is it on the left side of picture? Yes.
                                    The furthest left? No.      One of the two closest to us? No.
                                    Boy on left? Yes.           On the bottom row? Yes.
                                    Boy on left? Yes.           On top shelf? No.
                                    Boy on left? Yes.           I in middle? No.
                                    The furthest left? No.      Boy on left? Yes.
                                    Boy on left? Yes.           Is it near the left edge of the photo? No.
                                    Boy on left? Yes.           Is it near the left edge of the photo? No.
Status: Success.                    Status: Failure.            Status: Success.
Table 2: Qualitative comparison of the dialogues generated by IGE and our method in depA setting.

4.3 Experimental Results

Table 1 presents the quantitative comparison with other state-of-the-art works in terms of testing prediction accuracy. In our proposed method, the questioners for all approximate-oracle settings are pretrained for 5 QA rounds by IL and then refined by RL progressively for increasing numbers of QA rounds. For completeness, we also train questioners by RL after IL, without progressive training, for games with fewer QA rounds.

From Table 1, several observations can be made. First, it is worth noting that at test time IGE has access to the ground-truth list of candidate objects, thereby having better knowledge of the posterior $p(o \mid H_t, I)$. This explains why it outperforms AQM, which relies on a non-ideal object detector to initialize the posterior. Due to their algorithmic similarity, we consider IGE a stronger AQM-like baseline.

Second, when provided with the same approximate oracle, our questioner surpasses its expert IGE in games with more QA rounds. More specifically, our method overtakes IGE in games with 10, 9 and 6 QA rounds when the approximate oracle is trueA, depA and indA, with success rates of 82.45% versus 82.24%, 80.49% versus 79.71%, and 71.52% versus 70.40%, respectively. This confirms that the RL training phase indeed discovers a better policy than the expert’s.

Third, it is seen that our method is more robust to the approximation error of the oracle. Recall from Section 3.2.1 that IGE needs an approximate oracle model to evaluate the information gain and that the same model will also be used by both IGE and our method to make a guess in the final QA round. We observe that the prediction accuracy of IGE drops in the 10-round game by as much as 10.81%, from 82.24% with trueA to 71.43% with indA. In contrast, our method largely mitigates this performance decline to 6.14%, suggesting that through the refinement of RL, our model can better accommodate the discrepancy between the true oracle and the approximate oracle in the questioner.

Lastly, we see that by planning only one step ahead, IGE can already achieve close to human-level performance. That is why our method benefits only moderately from RL, whose advantage is most obvious in problems that require long sequences of dependent decisions.

A taste of the dialogues generated by IGE and our method is given in Table 2. We see that IGE sticks to certain questions, while our model asks varied questions and successfully locates the target object. However, we also observe that our model makes heavy use of questions about relative positions and directions in most of the games, which may provide clues as to why it needs more QA rounds to outperform IGE.

5 Conclusion

We train a questioner for the GuessWhat?! task based on imitation and reinforcement learning. We develop two analytic experts, IGE and TPE, for imitation learning on top of the probabilistic framework developed for AQM. Because both experts are greedy and rely heavily on an accurate oracle model of the answerer, we further refine our model using the REINFORCE algorithm. Our method outperforms conventional RL-based models trained with limited expert demonstrations by a large margin, while surpassing the IGE expert in games with many QA rounds in terms of prediction accuracy and robustness. Developing a questioner able to outperform the analytic experts in games with few QA rounds and testing our method on more challenging datasets are left for future work.

References

  • [1] Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M.F. Moura, Devi Parikh, and Dhruv Batra, “Visual Dialog,” in CVPR, 2017.
  • [2] Abhishek Das, Satwik Kottur, José M.F. Moura, Stefan Lee, and Dhruv Batra, “Learning cooperative visual dialog agents with deep reinforcement learning,” in ICCV, 2017.
  • [3] Florian Strub, Harm de Vries, Jérémie Mary, Bilal Piot, Aaron C. Courville, and Olivier Pietquin, “End-to-end optimization of goal-driven and visually grounded dialogue systems,” in IJCAI, 2017.
  • [4] Gregory Kahn, Tianhao Zhang, Sergey Levine, and Pieter Abbeel, “PLATO: policy learning using adaptive trajectory optimization,” in ICRA, 2017.
  • [5] Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville, “Guesswhat?! visual object discovery through multi-modal dialogue,” in CVPR, 2017.
  • [6] Ronald J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine Learning, vol. 8, no. 3-4, 1992.
  • [7] Rui Zhao and Volker Tresp, “Improving goal-oriented visual dialog agents via advanced recurrent nets with tempered policy gradient,” in IJCAI, 2018.
  • [8] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh, “VQA: Visual Question Answering,” in ICCV, 2015.
  • [9] Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang, “Answerer in Questioner’s Mind: Information Theoretic Approach to Goal-Oriented Visual Dialog,” in NIPS, 2018.
  • [10] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell, “A reduction of imitation learning and structured prediction to no-regret online learning,” in AISTATS, 2011.
  • [11] Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne, “Deepmimic: Example-guided deep reinforcement learning of physics-based character skills,” ACM Transactions on Graphics (Proc. SIGGRAPH 2018), vol. 37, no. 4, 2018.
  • [12] Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, and Xiaoshi Wang, “Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning,” in NIPS, 2014.