Improving Context Modelling in Multimodal Dialogue Generation

10/20/2018 · by Shubham Agarwal, et al. · Heriot-Watt University

In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system. Our work is based on the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017) in the fashion domain. We introduce a multimodal extension to the Hierarchical Recurrent Encoder-Decoder (HRED) model and show that this extension outperforms strong baselines in terms of text-based similarity metrics. We also showcase the shortcomings of current vision and language models by performing an error analysis on our system's output.


1 Introduction

This work aims to learn strategies for textual response generation in a multimodal conversation directly from data. Conversational AI has great potential for online retail: it greatly enhances user experience and in turn directly affects user retention chai2001natural, especially if the interaction is multimodal in nature. So far, most conversational agents are uni-modal, ranging from open-domain conversation ram2018conversational; papaioannou2017alana; fang2017sounding to task-oriented dialogue systems rieser2010natural; rieser2011reinforcement; young2013pomdp; singh2000reinforcement; wen2016network. While recent progress in deep learning has unified research at the intersection of vision and language, the availability of open-source multimodal dialogue datasets still remains a bottleneck.

This research makes use of a recently released Multimodal Dialogue (MMD) dataset saha2017multimodal, which contains multiple dialogue sessions in the fashion domain. The MMD dataset provides an interesting new challenge, combining recent efforts on task-oriented dialogue systems, as well as visually grounded dialogue. In contrast to simple QA tasks in visually grounded dialogue, e.g. antol2015vqa, it contains conversations with a clear end-goal. However, in contrast to previous slot-filling dialogue systems, e.g. rieser2011reinforcement; young2013pomdp, it heavily relies on the extra visual modality to drive the conversation forward (see Figure 1).

In the following, we propose a fully data-driven response generation model for this task. Our model grounds the system's textual response in both language and images by learning the semantic correspondence between them while modelling long-term dialogue context.

Figure 1: Example of a user-agent interaction in the fashion domain. In this work, we are interested in the textual response generation for a user query. Both user query and agent response can be multimodal in nature.

2 Model: Multimodal HRED over multiple images

Our model is an extension of the recently introduced Hierarchical Recurrent Encoder-Decoder (HRED) architecture serban2016building; serban2017hierarchical; lu2016hierarchical. In contrast to standard sequence-to-sequence models cho2014learning; sutskever2014sequence; bahdanau2014neural, HREDs model the dialogue context by introducing a context Recurrent Neural Network (RNN) over the encoder RNN, thus forming a hierarchical encoder.

We build on top of the HRED architecture to include multimodality over multiple images. A simple HRED consists of three RNN modules: encoder, context and decoder. In our multimodal HRED, we combine the output representations from the utterance encoder with the concatenated representations of multiple images and pass them as input to the context encoder (see Figure 2). A dialogue is modelled as a sequence of utterances (turns), which in turn are modelled as sequences of words and images. Formally, a dialogue is generated according to the following:

$P_\theta(U_1, \ldots, U_N) = \prod_{t=1}^{N} P_\theta(U_t \mid U_{<t})$   (1)

where $U_t$ is the $t$-th utterance in a dialogue. For each utterance $U_t = (w_{t,1}, \ldots, w_{t,M_t})$, we have the hidden states of each module defined as:

$h^{\text{enc}}_{t,m} = f^{\text{enc}}_\theta(h^{\text{enc}}_{t,m-1}, w_{t,m})$   (2)
$i_{t,j} = \text{CNN}(I_{t,j})$   (3)
$i_t = l_\theta([i_{t,1}; \ldots; i_{t,k}])$   (4)
$h^{\text{cxt}}_t = f^{\text{cxt}}_\theta(h^{\text{cxt}}_{t-1}, [h^{\text{enc}}_{t,M_t}; i_t])$   (5)
$h^{\text{dec}}_{t,m} = f^{\text{dec}}_\theta(h^{\text{dec}}_{t,m-1}, w_{t,m})$   (6)

where $f^{\text{enc}}$, $f^{\text{cxt}}$ and $f^{\text{dec}}$ are GRU cells cho2014learning, $\theta$ represents the model parameters, $w_{t,m}$ is the $m$-th word in the $t$-th utterance, and $\text{CNN}$ is a Convolutional Neural Network (CNN); here we use VGGnet simonyan2014very. We pass the multiple images $I_{t,1}, \ldots, I_{t,k}$ in a context through the CNN to obtain encoded image representations $i_{t,j}$. These are concatenated and passed through a linear layer $l_\theta$ to get the aggregated image representation for one turn of context, denoted by $i_t$ above. The textual representation $h^{\text{enc}}_{t,M_t}$ is given by the encoder RNN. Both $h^{\text{enc}}_{t,M_t}$ and $i_t$ are subsequently concatenated and passed as input to the context RNN, whose final hidden state $h^{\text{cxt}}_t$ acts as the initial hidden state of the decoder RNN. Finally, the output is generated by passing $h^{\text{dec}}_{t,m}$ through an affine transformation followed by a softmax activation. The model is trained using cross-entropy on next-word prediction. During generation, the decoder conditions on the previously generated token.
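To make the data flow concrete, below is a minimal PyTorch sketch of the multimodal HRED step described above. Module names, dimensions, the fixed number of images per turn and the zero-padding of missing images are illustrative assumptions, not our exact implementation (which additionally uses a bidirectional encoder and decoder attention, see Section 3.2).

```python
import torch
import torch.nn as nn

class MultimodalHRED(nn.Module):
    """Sketch of one multimodal HRED step (names and dimensions are illustrative)."""

    def __init__(self, vocab_size, emb_dim=512, hid_dim=512, img_dim=4096, max_imgs=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)      # f_enc
        self.img_linear = nn.Linear(max_imgs * img_dim, hid_dim)           # l_theta over concatenated CNN features
        self.ctx_encoder = nn.GRU(2 * hid_dim, hid_dim, batch_first=True)  # f_cxt
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)          # f_dec
        self.out_proj = nn.Linear(hid_dim, vocab_size)                     # affine transform before softmax

    def forward(self, turns_tokens, turns_img_feats, target_tokens):
        # turns_tokens:    list (one per context turn) of (batch, seq_len) token ids
        # turns_img_feats: list (one per context turn) of (batch, max_imgs, img_dim)
        #                  pre-extracted CNN (e.g. VGG FC6) features; missing images are zero vectors
        # target_tokens:   (batch, tgt_len) decoder input token ids
        turn_reprs = []
        for tokens, img_feats in zip(turns_tokens, turns_img_feats):
            _, h_txt = self.utt_encoder(self.embed(tokens))        # h_txt: (1, batch, hid_dim)
            img_cat = img_feats.flatten(start_dim=1)               # concatenate the per-image features
            h_img = self.img_linear(img_cat)                       # aggregated image representation i_t
            turn_reprs.append(torch.cat([h_txt[-1], h_img], dim=-1))
        ctx_input = torch.stack(turn_reprs, dim=1)                 # (batch, n_turns, 2 * hid_dim)
        _, h_ctx = self.ctx_encoder(ctx_input)                     # final context state
        dec_out, _ = self.decoder(self.embed(target_tokens), h_ctx)  # context state initialises the decoder
        return self.out_proj(dec_out)                              # logits for next-word prediction
```

During training, these logits are compared against the shifted target tokens with cross-entropy; at generation time, the previously generated token is fed back into the decoder.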

saha2017multimodal propose a similar baseline model for the MMD dataset, extending HREDs to include the visual modality. However, for simplicity's sake, they 'unroll' multiple images in a single utterance so that each utterance contains only one image. While computationally leaner, this approach loses the ability to capture multimodality over a context of multiple images and text. In contrast, we combine all the image representations in the utterance using a linear layer. We argue that modelling all images is necessary to answer questions that refer back to previous agent responses. For example, in Figure 3, when the user asks "what about the 4th image?", it is impossible to give a correct response without reasoning over all images in the previous response. In the following, we empirically show that our extension leads to better results in terms of text-based similarity measures, as well as the quality of the generated dialogues.

Figure 2: The Multimodal HRED architecture consists of four modules: utterance encoder, image encoder, context encoder and decoder. While saha2017multimodal 'unroll' images so that each context element encodes only one image, we concatenate all the 'local' image representations to form a 'global' image representation per turn. This is then concatenated with the encoded text representation, and the result is fed to the context encoder.
Our version of the dataset
Text Context: "Sorry i don't think i have any 100 % acrylic but i can show you in knit" / "Show me something similar to the 4th image but with the material different"
Image Context: [Img 1, Img 2, Img 3, Img 4, Img 5] / [0, 0, 0, 0, 0]
Target Response: The similar looking ones are

Saha et al. saha2017multimodal
Text Context: (empty)
Image Context: Img 4 / Img 5
Target Response: The similar looking ones are

Figure 3: Example contexts for a given system utterance ('/' separates turns within a context here); note the difference in our approach from saha2017multimodal when extracting the training data from the original chat logs. For simplicity, in this illustration we consider a context size of 2 previous utterances. We concatenate the representation vectors of all images in one turn of a dialogue to form the image context. If there is no image in an utterance, we use a zero vector to form its image context. In this work, we focus only on the textual response of the agent.

3 Experiments and Results

3.1 Dataset

The MMD dataset saha2017multimodal consists of 100k/11k/11k train/validation/test chat sessions, comprising 3.5M context-response pairs for the model. Each session contains an average of 40 dialogue turns (on average 8 words per textual response and 4 images per image response). The data contains complex user queries, which pose new challenges for multimodal, task-based dialogue, such as quantitative inference (sorting, counting and filtering): "Show me more images of the 3rd product in some different directions"; inference using domain knowledge and long-term context: "Will the 5th result go well with a large sized messenger bag?"; inference over aggregates of images: "List more in the upper material of the 5th image and style as the 3rd and the 5th"; and co-reference resolution. Note that we started from the raw transcripts of the dialogue sessions to create our own version of the dataset for the model: the authors originally consider each image as a separate context element, whereas we consider all the images in a single turn as one concatenated context (cf. Figure 3).
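To illustrate how such context-response pairs can be derived from the raw chat logs (in contrast to the per-image 'unrolling' of saha2017multimodal), here is a small sketch; the field names, the fixed maximum number of images per turn and the feature size are assumptions for illustration, not our exact preprocessing script.

```python
import numpy as np

IMG_FEAT_DIM = 4096       # e.g. VGG FC6 feature size (assumed)
MAX_IMGS_PER_TURN = 5     # illustrative upper bound on images shown in one turn

def build_image_context(turn_image_feats):
    """Concatenate all image features of one turn into a single fixed-size vector.

    turn_image_feats: list of np.ndarray of shape (IMG_FEAT_DIM,), possibly empty;
    a turn with no images yields an all-zero image context (cf. Figure 3).
    """
    padded = np.zeros((MAX_IMGS_PER_TURN, IMG_FEAT_DIM), dtype=np.float32)
    for j, feat in enumerate(turn_image_feats[:MAX_IMGS_PER_TURN]):
        padded[j] = feat
    return padded.reshape(-1)   # one 'global' image representation per turn

def build_context_response_pairs(session, context_size=2):
    """Slide a window of `context_size` previous turns over a chat session.

    session: list of dicts like {"speaker": "user"/"agent", "text": str, "images": [np.ndarray, ...]}
    Returns (text_context, image_context, target_text) triples for agent turns.
    """
    pairs = []
    for t in range(context_size, len(session)):
        if session[t]["speaker"] != "agent":
            continue                                    # we only generate the agent's textual response
        context = session[t - context_size:t]
        text_ctx = [turn["text"] for turn in context]
        img_ctx = [build_image_context(turn["images"]) for turn in context]
        pairs.append((text_ctx, img_ctx, session[t]["text"]))
    return pairs
```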

3.2 Implementation

We use the PyTorch framework (https://pytorch.org/) paszke2017automatic for our implementation. Our code is freely available at https://github.com/shubhamagarwal92/mmd. We use a word embedding size and RNN hidden dimension of 512, with GRU cells cho2014learning and tied embeddings for the (bi-directional) encoder and the decoder. The decoder uses a Luong-style attention mechanism luong2015effective with input feeding. We train our model with the Adam optimizer kingma2014adam, with a learning rate of 0.0004 and gradient norm clipping at 5, and perform early stopping by monitoring the validation loss. For image representations, we use the FC6-layer features of VGG-19 simonyan2014very, pre-trained on ImageNet. In future work, we plan to exploit state-of-the-art architectures such as ResNet or DenseNet and to fine-tune the image encoder jointly during model training.
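The following is a minimal sketch of a training loop using these hyperparameters (Adam with learning rate 0.0004, gradient-norm clipping at 5, early stopping on validation loss); the model, data loaders, batch format and padding index are placeholders and assumptions, not our exact training script.

```python
import torch
import torch.nn as nn

def evaluate(model, loader, criterion):
    """Average validation loss over a data loader (batch format assumed as below)."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for turns_tokens, turns_imgs, target in loader:
            logits = model(turns_tokens, turns_imgs, target[:, :-1])
            total += criterion(logits.reshape(-1, logits.size(-1)), target[:, 1:].reshape(-1)).item()
            n += 1
    return total / max(n, 1)

def train(model, train_loader, val_loader, epochs=20, patience=3):
    """Adam (lr=0.0004), gradient-norm clipping at 5, early stopping on validation loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0004)
    criterion = nn.CrossEntropyLoss(ignore_index=0)    # assumes index 0 is the padding token
    best_val, bad_epochs = float("inf"), 0
    for _ in range(epochs):
        model.train()
        for turns_tokens, turns_imgs, target in train_loader:
            optimizer.zero_grad()
            logits = model(turns_tokens, turns_imgs, target[:, :-1])        # teacher forcing
            loss = criterion(logits.reshape(-1, logits.size(-1)), target[:, 1:].reshape(-1))
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
            optimizer.step()
        val_loss = evaluate(model, val_loader, criterion)
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break                                   # early stopping
    return best_val
```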

Figure 4: Examples of predictions using M-HRED–attn (context size 5). Recall that we focus on generating textual responses. Our model's predictions are shown in blue and the gold target responses in red. For brevity, we show only the previous user utterance.

3.3 Analysis and Results

We report sentence-level BLEU-4 papineni2002bleu, METEOR lavie2007meteor and ROUGE-L lin2004automatic using the evaluation scripts provided by sharma2017nlgeval. We compare our results against saha2017multimodal by using their code and data-generation scripts (https://github.com/amritasaha1812/MMD_Code). Note that the results reported in their paper are on a different version of the corpus and hence not directly comparable.
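As a reference point, sentence-level BLEU-4 can be approximated with NLTK as sketched below; this is only a stand-in for the nlg-eval scripts of sharma2017nlgeval that we actually use, and the choice of smoothing method is an assumption.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_bleu4(reference: str, hypothesis: str) -> float:
    """Approximate sentence-level BLEU-4 with smoothing (stand-in for the nlg-eval scripts)."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=smooth)

# Example (hypothetical strings):
# sentence_bleu4("the similar looking ones are", "the similar looking ones are")  # -> 1.0
```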

Model Cxt BLEU-4 METEOR ROUGE-L
Saha et al. M-HRED* 2 0.3767 0.2847 0.6235
T-HRED 2 0.4292 0.3269 0.6692
M-HRED 2 0.4308 0.3288 0.6700
T-HRED–attn 2 0.4331 0.3298 0.6710
M-HRED–attn 2 0.4345 0.3315 0.6712
T-HRED–attn 5 0.4442 0.3374 0.6797
M-HRED–attn 5 0.4451 0.3371 0.6799
Table 1: Sentence-level BLEU-4, METEOR and ROUGE-L results for the response generation task on the MMD corpus. "Cxt" denotes the context size considered by the model. Our best performing model is M-HRED–attn over a context of 5 turns. *The model of Saha et al. was trained on a different version of the dataset.

Table 1 provides results for different configurations of our model ("T" stands for text-only input to the encoder, "M" for multimodal input, and "attn" for using attention in the decoder). We experimented with different context sizes and found that output quality improved with increased context size (models with a 5-turn context perform better than those with a 2-turn context), confirming the observations of serban2016building; serban2017hierarchical. (Using the pairwise bootstrap resampling test koehn2004statistical, we confirmed that the difference between M-HRED–attn (5) and M-HRED–attn (2) is statistically significant at the 95% confidence level.) Using attention clearly helps: even T-HRED–attn outperforms M-HRED (without attention) for the same context size. We also tested whether multimodal input has an impact on the generated outputs; however, there was only a slight increase in BLEU score (M-HRED–attn vs. T-HRED–attn).

To summarize, our best performing model (M-HRED–attn) outperforms the model of Saha et al. by 7 BLEU points (the difference is statistically significant at the 95% confidence level according to the pairwise bootstrap resampling test koehn2004statistical). This can primarily be attributed to the way we create the input for our model from the raw chat logs, as well as to incorporating more information during decoding via attention. Figure 4 provides example output utterances using M-HRED–attn with a context size of 5. Our model is able to accurately map the response to the previous textual context turns, as shown in (a) and (c). In (c), it captures that the user is asking about the style of the 1st and 2nd image. (d) shows an example where our model infers from visual features that the corresponding product is 'jeans', whereas in (b) it fails to model fine-grained details such as the 'casual fit' style and resorts to 'woven'.
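For completeness, the pairwise bootstrap resampling test koehn2004statistical used for the significance claims above can be sketched as follows; the number of resamples and the use of NLTK's corpus BLEU are illustrative assumptions rather than our exact evaluation setup.

```python
import random
from nltk.translate.bleu_score import corpus_bleu

def paired_bootstrap(refs, hyps_a, hyps_b, n_resamples=1000, seed=0):
    """Paired bootstrap resampling koehn2004statistical: how often system A beats
    system B in corpus BLEU over resampled test sets.

    refs:   list of reference token lists (aligned with both systems' outputs)
    hyps_a: outputs of system A as token lists
    hyps_b: outputs of system B as token lists
    """
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]     # resample segments with replacement
        sample_refs = [[refs[i]] for i in idx]         # corpus_bleu expects a list of reference lists
        bleu_a = corpus_bleu(sample_refs, [hyps_a[i] for i in idx])
        bleu_b = corpus_bleu(sample_refs, [hyps_b[i] for i in idx])
        if bleu_a > bleu_b:
            wins_a += 1
    return wins_a / n_resamples                        # > 0.95 corresponds to the 95% confidence level
```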

4 Conclusion and Future Work

In this research, we address the novel task of response generation in search-based multimodal dialogue by learning from the recently released Multimodal Dialogue (MMD) dataset saha2017multimodal. We introduce a novel extension to the Hierarchical Recurrent Encoder-Decoder (HRED) model serban2016building and show that our implementation significantly outperforms the model of saha2017multimodal by modelling the full multimodal context. Contrary to their results, our generation outputs improved by adding attention and increasing the context size. However, we also show that multimodal HRED does not improve significantly over text-only HRED, similar to observations by agrawal2016analyzing and qian2018multimodal. Our model learns to handle the textual correspondence between questions and answers while mostly ignoring the visual context. This indicates that we need better visual models to encode the image representations when we have multiple similar-looking images, e.g., the black hats in Figure 3. We believe that the results should improve with a jointly trained or fine-tuned CNN for generating the image representations, which we plan to implement in future work.

Acknowledgments

This research received funding from Adeptmind Inc., Toronto, Canada and the MaDrIgAL EPSRC project (EP/N017536/1). The Titan Xp used for this work was donated by the NVIDIA Corp.

References