Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering

12/01/2017
by Aishwarya Agrawal et al.

A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage the development of models with genuine image grounding, we propose a new setting for VQA in which, for every question type, the train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2, respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and architectural restrictions specifically designed to prevent the model from 'cheating' by relying primarily on priors in the training data. In particular, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of the plausible answer space for a given question, enabling the model to generalize more robustly across different distributions of answers. GVQA is built on top of an existing VQA model, Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both the VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models, such as Multimodal Compact Bilinear Pooling (MCB), in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.
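
The defining property of the VQA-CP splits is that, for every question type, the answer distribution seen at training time differs from the one seen at test time. The following is a minimal Python sketch of one way such splits can be constructed; it is not the authors' exact redistribution procedure, and the `question_type` and `answer` field names, as well as the alternating group assignment, are illustrative assumptions.

```python
# A minimal sketch (not the paper's exact algorithm) of building
# "changing priors" splits: for each question type, the most frequent
# answers in train should differ from the most frequent answers in test.
from collections import defaultdict

def make_changing_prior_splits(examples):
    """examples: dicts with (assumed) keys 'question_type' and 'answer',
    e.g. {'question_type': 'what color', 'answer': 'white', ...}."""
    # Group examples by (question type, answer) so whole groups can be
    # assigned to one split, keeping each split's prior internally coherent.
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex['question_type'], ex['answer'])].append(ex)

    # Collect the answer groups belonging to each question type.
    by_qtype = defaultdict(list)
    for (qtype, answer), exs in groups.items():
        by_qtype[qtype].append((len(exs), answer, exs))

    train, test = [], []
    for ans_groups in by_qtype.values():
        # Most frequent answers first; alternate them between splits so the
        # dominant answer for every question type differs across train/test.
        ans_groups.sort(key=lambda g: g[0], reverse=True)
        for rank, (_, _, exs) in enumerate(ans_groups):
            (train if rank % 2 == 0 else test).extend(exs)
    return train, test
```

Assigning whole (question type, answer) groups rather than individual examples guarantees that a model which memorizes the most frequent training answer for a question type is penalized at test time, which is exactly the failure mode the splits are meant to expose.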
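
To make the disentanglement concrete, below is a minimal PyTorch-style sketch of the idea: one branch looks only at the image to recognize visual concepts, a second branch looks only at the question to decide which answers are plausible, and the final score gates one by the other. The class name, dimensions, and sigmoid gating are assumptions for illustration; GVQA's actual components (a Visual Concept Classifier with SAN-style attention and an Answer Cluster Predictor, plus separate handling of yes/no questions) are more elaborate.

```python
# Minimal sketch of GVQA's disentanglement idea; module names and the
# gating scheme are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class DisentangledVQA(nn.Module):
    def __init__(self, img_dim, q_dim, n_concepts, n_answers):
        super().__init__()
        # Visual branch: which concepts are present in the image
        # (the role GVQA's Visual Concept Classifier plays).
        self.concept_head = nn.Sequential(
            nn.Linear(img_dim, 512), nn.ReLU(),
            nn.Linear(512, n_concepts))
        # Question-only branch: which answers are plausible for this
        # question (the role GVQA's Answer Cluster Predictor plays).
        self.plausible_head = nn.Sequential(
            nn.Linear(q_dim, 512), nn.ReLU(),
            nn.Linear(512, n_answers))
        # Map recognized visual concepts into the answer space.
        self.concept_to_answer = nn.Linear(n_concepts, n_answers)

    def forward(self, img_feat, q_feat):
        concepts = torch.sigmoid(self.concept_head(img_feat))   # image only
        plausible = torch.sigmoid(self.plausible_head(q_feat))  # question only
        # An answer scores highly only if it is both supported by the image
        # and plausible for the question; a question-type prior alone can
        # narrow the candidate set but cannot pick the answer.
        return self.concept_to_answer(concepts) * plausible
```

Because the plausibility branch never sees the image, shifting the answer priors between train and test changes only which candidates it proposes; selecting among them still requires visual evidence from the concept branch.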

Related research

C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset (04/26/2017)
Visual Question Answering (VQA) has received a lot of attention over the...

MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering (09/18/2020)
While progress has been made on the visual question answering leaderboar...

Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances (09/18/2022)
Despite the great progress of Visual Question Answering (VQA), current V...

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA (10/10/2022)
Visual Question Answering (VQA) models are prone to learn the shortcut s...

Counterfactual Samples Synthesizing for Robust Visual Question Answering (03/14/2020)
Although Visual Question Answering (VQA) has realized impressive progress...

Compact Trilinear Interaction for Visual Question Answering (09/26/2019)
In Visual Question Answering (VQA), answers have a great correlation wit...

Self-Supervised VQA: Answering Visual Questions using Images and Captions (12/04/2020)
Methodologies for training VQA models assume the availability of dataset...
