
Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images

07/19/2021
by   Nyoungwoo Lee, et al.

In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method to create such a dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues by using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems which require an understanding of images and text in a context-aware manner. Our dataset and generation code are available at https://github.com/shh1574/multi-modal-dialogue-dataset.
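The three-step pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words similarity below is a toy stand-in for the learned text-to-image relevance model and the contextual-similarity filter the authors actually use, and all function names here are hypothetical.

```python
from collections import Counter
import math

def bow_vector(text):
    # Toy bag-of-words "embedding"; the paper relies on a learned
    # image-text similarity model rather than word overlap.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def replace_with_images(dialogue, image_captions, threshold=0.5):
    """For each utterance, substitute the best-matching image if its
    caption is similar enough; otherwise keep the original text.
    The threshold plays the role of the contextual-similarity filter."""
    mixed = []
    for utterance in dialogue:
        u_vec = bow_vector(utterance)
        best_img, best_sim = None, 0.0
        for img_id, caption in image_captions.items():
            sim = cosine(u_vec, bow_vector(caption))
            if sim > best_sim:
                best_img, best_sim = img_id, sim
        if best_sim >= threshold:          # filtering step (3)
            mixed.append(("image", best_img))
        else:
            mixed.append(("text", utterance))
    return mixed
```

For example, `replace_with_images(["look at my new puppy", "how was your day"], {"img1": "my new puppy"})` replaces the first utterance with `("image", "img1")` while the second, which matches no caption, stays as text.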

