IMAD: IMage-Augmented multi-modal Dialogue

05/17/2023
by Viktor Moskvoretskii, et al.

Dialogue systems currently achieve high performance on text-based communication, but they do not yet effectively incorporate visual information, which remains a significant challenge. Moreover, existing models that incorporate images into dialogue generation focus on discussing the image itself. Our approach offers a novel perspective on multi-modal dialogue systems: it interprets the image in the context of the dialogue. In doing so, we aim to expand the capabilities of current dialogue systems and transition them from a single modality (text) to multi-modality. However, there is a lack of validated English datasets containing both images and dialogue contexts for this task. We therefore propose a two-stage approach to automatically construct a multi-modal dialogue dataset. In the first stage, we use text-to-image similarity and sentence similarity to identify which utterances could be replaced with an image. In the second stage, we replace those utterances by selecting a subset of relevant images and filtering them with a visual question answering model. Using this approach, together with additional labeling, we create the IMage Augmented multi-modal Dialogue dataset (IMAD), which can serve as a validated dataset for this task. We also propose a baseline model trained on this dataset, which outperforms both a model trained on the same data without images and BlenderBot.
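To make the two-stage construction concrete, here is a minimal Python sketch of the pipeline the abstract describes. The model choices ("clip-ViT-B-32" for text-image similarity, "all-MiniLM-L6-v2" for sentence similarity), the thresholds, the way the two similarity signals are combined in stage one, and the `vqa_accepts` callable are all illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the two-stage dataset-construction pipeline.
# Model names, thresholds, and the VQA check are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

clip = SentenceTransformer("clip-ViT-B-32")      # joint text-image embeddings
sent = SentenceTransformer("all-MiniLM-L6-v2")   # sentence embeddings

def stage1_is_replaceable(utterance, context, images,
                          img_thresh=0.25, ctx_thresh=0.9):
    """Stage 1: flag utterances whose content an image could plausibly carry.

    Here an utterance qualifies if some candidate image is close to it in
    CLIP space and the utterance is not a near-duplicate of the dialogue
    context (how the two signals combine is an assumption).
    """
    text_emb = clip.encode(utterance, convert_to_tensor=True)
    img_embs = clip.encode(images, convert_to_tensor=True)  # list of PIL images
    best_img_sim = util.cos_sim(text_emb, img_embs).max().item()

    utt_emb = sent.encode(utterance, convert_to_tensor=True)
    ctx_emb = sent.encode(context, convert_to_tensor=True)
    ctx_sim = util.cos_sim(utt_emb, ctx_emb).item()

    return best_img_sim >= img_thresh and ctx_sim < ctx_thresh

def stage2_replace(utterance, images, vqa_accepts):
    """Stage 2: pick the most relevant image, then keep the replacement
    only if the VQA filter (hypothetical callable) accepts the match."""
    text_emb = clip.encode(utterance, convert_to_tensor=True)
    img_embs = clip.encode(images, convert_to_tensor=True)
    best = int(util.cos_sim(text_emb, img_embs).argmax())
    return images[best] if vqa_accepts(images[best], utterance) else None
```

In use, each utterance of a dialogue would be scanned with `stage1_is_replaceable`, and the qualifying ones passed to `stage2_replace` along with a retrieved set of candidate images.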


Related research

07/19/2021 · Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images
In multi-modal dialogue systems, it is important to allow the use of ima...

12/08/2022 · DialogCC: Large-Scale Multi-Modal Dialogue Dataset
As sharing images in an instant message is a crucial factor, there has b...

09/27/2021 · OpenViDial 2.0: A Larger-Scale, Open-Domain Dialogue Generation Dataset with Visual Contexts
In order to better simulate the real human conversation process, models ...

07/05/2022 · Scene-Aware Prompt for Multi-modal Dialogue Understanding and Generation
This paper introduces the schemes of Team LingJing's experiments in NLPC...

09/14/2023 · VDialogUE: A Unified Evaluation Benchmark for Visually-grounded Dialogue
Visually-grounded dialog systems, which integrate multiple modes of comm...

12/12/2022 · Information-Theoretic Text Hallucination Reduction for Video-grounded Dialogue
Video-grounded Dialogue (VGD) aims to decode an answer sentence to a que...

12/11/2022 · AliCHI: A Large-scale Multi-modal Dataset and Automated Evaluation Tool for Human-like Dialogue Systems
A well-designed interactive human-like dialogue system is expected to ta...
