Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets

04/24/2017
by Wei-Lun Chao, et al.

Visual question answering (QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiple-choice datasets, where the learner has to select the right answer from a set of candidates that includes the target (i.e., the correct answer) and the decoys (i.e., the incorrect ones). Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets. In particular, the resulting learner can ignore the visual information, the question, or both, while still doing well on the task. Inspired by this, we propose automatic procedures to remedy such design deficiencies. We apply the procedures to reconstruct decoy answers for two popular visual QA datasets, as well as to create a new visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task. Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets, and that performance on them is likely a more faithful indicator of the differences among learning models. The datasets are released and publicly available via http://www.teds.usc.edu/website_vqa/.
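The abstract describes the remedy only at a high level. As a rough, hedged illustration of what an automatic decoy-construction step could look like, the sketch below draws candidate decoys from answers given to similar questions and to the same image, so that neither the question alone nor the image alone trivially rules them out. The `build_decoys` helper, its input format, and the first-word question grouping are illustrative assumptions, not the paper's actual procedure.

```python
import random
from collections import defaultdict

def build_decoys(examples, num_decoys=3, seed=0):
    """Illustrative decoy construction for multiple-choice visual QA.

    `examples` is a list of dicts with keys 'image_id', 'question', and
    'answer' (the target). Decoys for each example are sampled from
    answers that appear with similar questions or with the same image,
    so a model cannot eliminate them from one modality alone.
    This is a simplified sketch, not the paper's exact procedure.
    """
    rng = random.Random(seed)

    # Group target answers by a coarse question signature (here, the
    # first word, e.g. "what"/"how"/"is") -- a stand-in for a real
    # question-similarity measure.
    answers_by_qtype = defaultdict(set)
    for ex in examples:
        qtype = ex['question'].split()[0].lower()
        answers_by_qtype[qtype].add(ex['answer'])

    # Group the answers observed for each image, so decoys can also be
    # plausible with respect to the image content.
    answers_by_image = defaultdict(set)
    for ex in examples:
        answers_by_image[ex['image_id']].add(ex['answer'])

    dataset = []
    for ex in examples:
        qtype = ex['question'].split()[0].lower()
        pool = answers_by_qtype[qtype] | answers_by_image[ex['image_id']]
        pool.discard(ex['answer'])  # never duplicate the target
        decoys = rng.sample(sorted(pool), min(num_decoys, len(pool)))
        dataset.append({**ex, 'choices': [ex['answer'], *decoys]})
    return dataset
```

The design intent of such a procedure is that shuffled multiple-choice sets no longer reward image-blind or question-blind shortcuts; the exact similarity measures and sampling rules used in the paper may differ from this sketch.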


Related research

06/10/2018  Learning Answer Embeddings for Visual Question Answering
We propose a novel probabilistic model for visual question answering (Vi...

10/26/2015  Empirical Study on Deep Learning Models for Question Answering
In this paper we explore deep learning models with memory component or a...

05/08/2015  Exploring Models and Data for Image Question Answering
This work aims to address the problem of image-based question-answering ...

06/10/2018  Cross-Dataset Adaptation for Visual Question Answering
We investigate the problem of cross-dataset adaptation for visual questi...

11/11/2015  Visual7W: Grounded Question Answering in Images
We have seen great progress in basic perceptual tasks such as object rec...

01/29/2018  Game of Sketches: Deep Recurrent Models of Pictionary-style Word Guessing
The ability of intelligent agents to play games in human-like fashion is...

10/29/2014  Towards a Visual Turing Challenge
As language and visual understanding by machines progresses rapidly, we ...
