Overcoming Language Priors in Visual Question Answering with Adversarial Regularization

10/08/2018
by   Sainandan Ramakrishnan, et al.

Modern Visual Question Answering (VQA) models have been shown to rely heavily on superficial correlations between question and answer words learned during training, such as overwhelmingly reporting the type of room as kitchen or the sport being played as tennis, irrespective of the image. Most alarmingly, this shortcoming is often not well reflected during evaluation because the same strong priors exist in test distributions; however, a VQA system that fails to ground questions in image content would likely perform poorly in real-world settings. In this work, we present a novel regularization scheme for VQA that reduces this effect. We introduce a question-only model that takes as input the question encoding from the VQA model and must leverage language biases in order to succeed. We then pose training as an adversarial game between the VQA model and this question-only adversary -- discouraging the VQA model from capturing language biases in its question encoding. Further, we leverage this question-only model to estimate the increase in model confidence after considering the image, which we maximize explicitly to encourage visual grounding. Our approach is a model-agnostic training procedure that is simple to implement. We show empirically that it can improve performance significantly on a bias-sensitive split of the VQA dataset for multiple base models -- achieving state-of-the-art on this task. Further, on standard VQA tasks, our approach suffers a significantly smaller drop in accuracy than existing bias-reducing VQA models.
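The adversarial game described in the abstract can be sketched in PyTorch: a question-only classifier is trained on the VQA model's question encoding, and a gradient-reversal layer between the two makes the encoder actively *hurt* that classifier, pushing language bias out of the encoding. This is an illustrative sketch, not the authors' released code; the module names (`AdversarialVQA`, `grad_reverse`), the single linear layers standing in for real encoders, and all dimensions are assumptions made for compactness. The paper's second term (explicitly maximizing the confidence gained after seeing the image) is noted in a comment rather than implemented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient flowing back into the question encoder is reversed, so
        # whatever helps the question-only adversary hurts the encoding.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class AdversarialVQA(nn.Module):
    """Toy VQA model with a question-only adversary (dimensions illustrative)."""

    def __init__(self, q_dim=300, v_dim=512, hid=256, n_answers=1000, lambd=0.1):
        super().__init__()
        self.q_enc = nn.Linear(q_dim, hid)            # stand-in question encoder
        self.vqa_head = nn.Linear(hid + v_dim, n_answers)
        self.q_only_head = nn.Linear(hid, n_answers)  # question-only adversary
        self.lambd = lambd

    def forward(self, q_feat, v_feat):
        q = torch.relu(self.q_enc(q_feat))
        vqa_logits = self.vqa_head(torch.cat([q, v_feat], dim=-1))
        # The adversary sees the question encoding only through the
        # gradient-reversal layer.
        q_only_logits = self.q_only_head(grad_reverse(q, self.lambd))
        return vqa_logits, q_only_logits


def loss_fn(vqa_logits, q_only_logits, answers):
    vqa_loss = F.cross_entropy(vqa_logits, answers)
    adv_loss = F.cross_entropy(q_only_logits, answers)
    # The paper additionally maximizes the increase in confidence after
    # considering the image (a difference-of-entropies term); that term is
    # omitted from this sketch.
    return vqa_loss + adv_loss
```

Because the reversal layer is an identity in the forward pass, inference is unchanged; only the training gradients differ, which is what makes the scheme model-agnostic.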

Related research

- 07/13/2020: Reducing Language Biases in Visual Question Answering with Visually-Grounded Question Encoder. "Recent studies have shown that current VQA models are heavily biased on ..."
- 06/20/2019: Adversarial Regularization for Visual Question Answering: Strengths, Shortcomings, and Side Effects. "Visual question answering (VQA) models have been shown to over-rely on l..."
- 09/18/2022: Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances. "Despite the great progress of Visual Question Answering (VQA), current V..."
- 04/12/2020: A negative case analysis of visual grounding methods for VQA. "Existing Visual Question Answering (VQA) methods tend to exploit dataset..."
- 08/28/2021: On the Significance of Question Encoder Sequence Model in the Out-of-Distribution Performance in Visual Question Answering. "Generalizing beyond the experiences has a significant role in developing..."
- 12/17/2020: Overcoming Language Priors with Self-supervised Learning for Visual Question Answering. "Most Visual Question Answering (VQA) models suffer from the language pri..."
- 06/01/2023: Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training. "The Visual Question Answering (VQA) system offers a user-friendly interf..."
