Quantifying and Alleviating the Language Prior Problem in Visual Question Answering

05/13/2019
by   Yangyang Guo, et al.

Benefiting from advances in computer vision, natural language processing, and information retrieval, visual question answering (VQA), which aims to answer questions about an image or a video, has received considerable attention over the past few years. Although some progress has been achieved, several studies have pointed out that current VQA models are heavily affected by the language prior problem: they tend to answer questions based on co-occurrence patterns between question keywords (e.g., how many) and answers (e.g., 2) rather than by understanding the image and the question. Existing methods attempt to address this problem either by balancing the biased datasets or by forcing models to better understand the image, yet the former yields only marginal gains and the latter can even degrade performance. Moreover, the field lacks a metric that quantitatively measures the extent of the language prior effect, which severely hinders the development of related techniques. In this paper, we address these problems from two perspectives. First, we design a metric to quantitatively measure the language prior effect of VQA models; our empirical studies demonstrate its effectiveness. Second, we propose a regularization method (i.e., a score regularization module) that enhances current VQA models by alleviating the language prior problem while also boosting the performance of the backbone model. The score regularization module adopts a pair-wise learning strategy, which makes a VQA model answer a question by reasoning over the image rather than by relying on question-answer patterns observed in the biased training set. The module can be flexibly integrated into various VQA models.
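The abstract does not spell out the proposed metric, so the sketch below is only an illustrative proxy for how reliance on language priors could be quantified: it counts how often a model returns the most frequent training answer for a question type even though that answer is wrong. The function name prior_reliance_score and the data formats are assumptions made for this sketch, not the metric proposed in the paper.

    from collections import Counter, defaultdict

    def prior_reliance_score(train_qa, test_predictions):
        """Illustrative proxy for language-prior reliance (not the paper's metric).

        train_qa:         list of (question_type, answer) pairs from the training set
        test_predictions: list of (question_type, predicted_answer, ground_truth) triples

        Returns the fraction of test questions on which the model outputs the
        most frequent training answer for the question type even though that
        answer is wrong, a rough signal of answering from priors alone.
        """
        # Most frequent training answer per question type (the "language prior").
        per_type = defaultdict(Counter)
        for qtype, ans in train_qa:
            per_type[qtype][ans] += 1
        prior_answer = {qtype: counts.most_common(1)[0][0]
                        for qtype, counts in per_type.items()}

        hits = 0
        for qtype, pred, gt in test_predictions:
            prior = prior_answer.get(qtype)
            if prior is not None and pred == prior and pred != gt:
                hits += 1
        return hits / max(len(test_predictions), 1)

    if __name__ == "__main__":
        train = [("how many", "2"), ("how many", "2"), ("how many", "3"),
                 ("what color", "white"), ("what color", "white")]
        test = [("how many", "2", "4"),            # prior answer, wrong  -> counted
                ("how many", "4", "4"),            # correct              -> not counted
                ("what color", "white", "white")]  # prior but also right -> not counted
        print(prior_reliance_score(train, test))   # 1/3 ~= 0.33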
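As a rough illustration of a pair-wise learning strategy in this spirit, the PyTorch sketch below contrasts the score the full model (image plus question) assigns to the ground-truth answer against the score of a question-only branch, and penalizes the model when looking at the image does not raise the correct answer's score by a margin. The function name pairwise_score_regularizer, the question-only branch, and the margin value are assumptions for this sketch; they are not taken from the paper's actual score regularization module.

    import torch
    import torch.nn.functional as F

    def pairwise_score_regularizer(fused_scores, question_only_scores, answer_idx, margin=0.5):
        """Minimal sketch of a pair-wise score regularization term (illustrative only).

        fused_scores:         (batch, num_answers) logits from the full model (image + question)
        question_only_scores: (batch, num_answers) logits from a question-only branch
        answer_idx:           (batch,) index of the ground-truth answer
        margin:               required gap between the fused and question-only scores
        """
        idx = answer_idx.unsqueeze(1)
        s_fused = fused_scores.gather(1, idx).squeeze(1)          # score with visual evidence
        s_prior = question_only_scores.gather(1, idx).squeeze(1)  # score from the question alone
        # Hinge term: push the model to rely on the image, not on question-answer priors.
        return F.relu(margin - (s_fused - s_prior)).mean()

    if __name__ == "__main__":
        torch.manual_seed(0)
        fused = torch.randn(4, 10, requires_grad=True)
        q_only = torch.randn(4, 10)
        gt = torch.tensor([1, 3, 5, 7])
        reg = pairwise_score_regularizer(fused, q_only, gt)
        # total_loss = vqa_loss + lambda_reg * reg   (lambda_reg is a tuning weight)
        print(reg.item())

In a training loop, such a term would simply be added to the standard VQA classification loss with a weighting coefficient, which is what makes it easy to bolt onto different backbone models.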


Related research

10/05/2016  Visual Question Answering: Datasets, Algorithms, and Future Challenges
09/14/2017  Robustness Analysis of Visual QA Models by Basic Questions
06/21/2016  Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions
05/05/2021  AdaVQA: Overcoming Language Priors with Adapted Margin Cosine Loss
10/30/2020  Loss-rescaling VQA: Revisiting Language Prior Problem from a Class-imbalance View
07/24/2022  Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem
10/04/2016  Tutorial on Answering Questions about Images with Deep Learning
