Making History Matter: Gold-Critic Sequence Training for Visual Dialog

02/25/2019
by Tianhao Yang, et al.

We study multi-round response generation in visual dialog systems, where a response is generated according to a visually grounded conversational history. Given a triplet of an image, a Q&A history, and the current question, prevailing methods follow a codec (i.e., encoder-decoder) fashion in the supervised learning paradigm: a multimodal encoder encodes the triplet into a feature vector, which is then fed into the decoder to generate the current answer, supervised by the ground-truth answer. However, this conventional supervised learning does not account for the impact of an imperfect history during codec training, violating the conversational nature of visual dialog and thus inclining the codec to learn dataset bias rather than visual reasoning. To this end, inspired by the actor-critic policy gradient in reinforcement learning, we propose a novel training paradigm called Gold-Critic Sequence Training (GCST). Specifically, we intentionally impose wrong answers in the history, obtain an adverse reward, and measure how the historic error affects the codec's future behavior by subtracting the gold-critic baseline (the reward obtained with the ground-truth history) from the adverse reward. Moreover, to make the codec more sensitive to the history, we propose a novel attention network called Recurrent Co-Attention Network (RCAN), which can be effectively trained with GCST. Experimental results on three benchmarks, VisDial v0.9 & v1.0 and GuessWhat?!, show that the proposed GCST strategy consistently outperforms state-of-the-art supervised counterparts on all metrics.
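For concreteness, the sketch below shows how a single GCST update of the kind described above could look in PyTorch. It assumes a generic codec object exposing a sampling method that returns an answer together with its log-probability, a task-level reward function, and a helper that injects wrong answers into the history; `codec`, `reward`, and `corrupt_history` are hypothetical placeholders, not the authors' released implementation.

```python
# Hedged sketch of one Gold-Critic Sequence Training (GCST) step.
# Assumed (hypothetical) interfaces:
#   codec.sample(image, history, question) -> (answer, log_prob)  # log_prob is differentiable
#   reward(answer, gt_answer)              -> per-example scalar tensor
#   corrupt_history(history)               -> history with wrong answers imposed
import torch


def gcst_loss(codec, image, history, question, gt_answer,
              reward, corrupt_history):
    # Roll out the policy on an intentionally corrupted (wrong-answer) history.
    adverse_history = corrupt_history(history)
    sampled_answer, log_prob = codec.sample(image, adverse_history, question)
    adverse_reward = reward(sampled_answer, gt_answer)

    # Gold-critic baseline: the reward the same policy obtains when it is
    # conditioned on the ground-truth history. No gradient flows through it.
    with torch.no_grad():
        gold_answer, _ = codec.sample(image, history, question)
        gold_reward = reward(gold_answer, gt_answer)

    # REINFORCE-style objective weighted by the history-sensitive advantage,
    # i.e. how much the imposed historic error changes the obtained reward.
    advantage = adverse_reward - gold_reward
    return -(advantage * log_prob).mean()
```

In this reading, the gold-critic baseline plays the role of the critic in actor-critic methods: only the change in reward caused by the corrupted history drives the gradient, which is what pushes the codec to stay sensitive to its conversational history.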

