Examining Cooperation in Visual Dialog Models

12/04/2017
by Mircea Mironenco, et al.

In this work we propose a blackbox intervention method for visual dialog models, with the aim of assessing the contribution of individual linguistic or visual components. Concretely, we conduct structured or randomized interventions designed to impair an individual component of the model, and observe the resulting changes in task performance. We reproduce a state-of-the-art visual dialog model and demonstrate that our methodology yields surprising insights, namely that both dialog and image information contribute only minimally to task performance. The intervention method presented here can serve as a sanity check on the strength and robustness of each component in visual dialog systems.
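To make the procedure concrete, the following is a minimal sketch of the randomized variant of such an intervention: the visual component is impaired by swapping image features for statistics-matched noise, and the linguistic component by permuting the dialog history. The `model.predict`, `evaluate`, and `dataset` interfaces here are hypothetical stand-ins for illustration, not the paper's actual API.

```python
# A minimal sketch, assuming hypothetical interfaces: `model.predict(...)`,
# a per-example `evaluate(prediction, answer)` metric, and a `dataset` of
# (image_feats, dialog_history, question, answer) tuples. None of these
# names come from the paper; they only illustrate the intervention loop.
import numpy as np

def intervene_on_image(image_feats, rng):
    # Impair the visual component: replace image features with noise
    # matched to the original features' mean and scale.
    return rng.normal(image_feats.mean(), image_feats.std() + 1e-8,
                      size=image_feats.shape)

def intervene_on_dialog(dialog_history, rng):
    # Impair the linguistic component: permute dialog rounds so the
    # history no longer coheres with the current question.
    permuted = list(dialog_history)
    rng.shuffle(permuted)
    return permuted

def intervention_gaps(model, dataset, evaluate, seed=0):
    # Score the model with intact inputs and with each component
    # impaired; a small gap suggests the component contributes little.
    rng = np.random.default_rng(seed)
    scores = {"baseline": [], "image_ablated": [], "dialog_ablated": []}
    for image_feats, dialog, question, answer in dataset:
        scores["baseline"].append(
            evaluate(model.predict(image_feats, dialog, question), answer))
        scores["image_ablated"].append(
            evaluate(model.predict(intervene_on_image(image_feats, rng),
                                   dialog, question), answer))
        scores["dialog_ablated"].append(
            evaluate(model.predict(image_feats,
                                   intervene_on_dialog(dialog, rng),
                                   question), answer))
    return {name: float(np.mean(vals)) for name, vals in scores.items()}
```

If either ablated score stays close to the baseline, the corresponding component is contributing little to task performance, which is precisely the failure mode the abstract reports.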


Related Research

06/01/2018 · Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7
Scene-aware dialog systems will be able to have conversations with users...

10/13/2019 · Granular Multimodal Attention Networks for Visual Dialog
Vision and language tasks have benefited from attention. There have been...

05/29/2022 · VD-PCR: Improving Visual Dialog with Pronoun Coreference Resolution
The visual dialog task requires an AI agent to interact with humans in m...

11/24/2019 · Two Causal Principles for Improving Visual Dialog
This paper is a winner report from team MReaL-BDAI for Visual Dialog Cha...

01/15/2020 · Ensemble based discriminative models for Visual Dialog Challenge 2018
This manuscript describes our approach for the Visual Dialog Challenge 2...

09/23/2019 · Improving Generative Visual Dialog by Answering Diverse Questions
Prior work on training generative Visual Dialog models with reinforcemen...

07/06/2022 · Adversarial Robustness of Visual Dialog
Adversarial robustness evaluates the worst-case performance scenario of ...
