Modality-Balanced Models for Visual Dialogue

01/17/2020
by Hyounghun Kim, et al.

The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response in the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by looking at the image alone, without any access to the context history, while others still need the conversation context to predict the correct answer. We demonstrate that, for this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns from the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better on the primary normalized discounted cumulative gain (NDCG) task metric, which allows multiple correct answers. This observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities into a more balanced multimodal model. We present multiple methods for integrating the two models, via ensemble and via consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics.
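The fusion idea described above can be sketched as averaging the answer-score distributions of the two models, with dropout applied to the image-history branch during training so the fused model does not over-rely on dialogue history. This is a minimal illustration of the consensus-dropout-fusion concept, not the authors' exact implementation; all function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus_dropout_fusion(img_logits, joint_logits, drop_p=0.3, train=True):
    """Fuse image-only and image-history answer scores by averaging.

    During training, the image-history (joint) branch is zeroed out for a
    random subset of examples, so gradients push the shared parameters to
    rely more on the generalizable image-only branch.
    """
    img = np.asarray(img_logits, dtype=float)
    joint = np.asarray(joint_logits, dtype=float)
    if train:
        # Instance-level dropout: drop the joint branch for whole examples.
        keep = (rng.random(joint.shape[0]) >= drop_p).astype(float)
        joint = joint * keep[:, None]
    return (img + joint) / 2.0

# Toy scores over 5 candidate answers for a batch of 2 questions.
img = np.array([[2.0, 0.1, 0.1, 0.1, 0.1],
                [0.1, 2.0, 0.1, 0.1, 0.1]])
joint = np.array([[0.1, 0.1, 2.0, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 2.0, 0.1]])
fused = consensus_dropout_fusion(img, joint, train=False)  # inference: plain average
```

At inference time no dropout is applied, so the fused score is simply the mean of the two branches' scores for each candidate answer.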

Related research:

- History for Visual Dialog: Do we really need it? (05/08/2020)
- Image-Question-Answer Synergistic Network for Visual Dialog (02/26/2019)
- GoG: Relation-aware Graph-over-Graph Network for Visual Dialog (09/17/2021)
- Ensemble of MRR and NDCG models for Visual Dialog (04/15/2021)
- Neural Conversational QA: Learning to Reason vs. Exploiting Patterns (09/09/2019)
- Maria: A Visual Experience Powered Conversational Agent (05/27/2021)
- From FiLM to Video: Multi-turn Question Answering with Multi-modal Context (12/17/2018)
