P ≈ NP, at least in Visual Question Answering

03/26/2020
by Shailza Jolly, et al.

In recent years, progress in the Visual Question Answering (VQA) field has largely been driven by public challenges and large datasets. One of the most widely used of these is the VQA 2.0 dataset, consisting of polar ("yes/no") and non-polar questions. Looking at the question distribution over all answers, we find that the answers "yes" and "no" account for 38% of the questions, while the remaining 62% are spread over all other answers. Although several sources of bias have already been investigated in the field, the effects of such an over-representation of polar vs. non-polar questions remain unclear. In this paper, we measure the potential confounding factors when polar and non-polar samples are used jointly to train a baseline VQA classifier, and compare it to an upper bound in which the over-representation of polar questions is excluded from the training. Further, we perform cross-over experiments to analyze how well the feature spaces align. Contrary to expectations, we find no evidence of counterproductive effects from the joint training of unbalanced classes. In fact, by exploring the intermediate feature space of visual-text embeddings, we find that the feature space of polar questions already encodes sufficient structure to answer many non-polar questions. Our results indicate that the polar (P) and the non-polar (NP) feature spaces are strongly aligned, hence the expression P ≈ NP.
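The sketch below is a minimal illustration (not the paper's implementation) of the experimental protocol the abstract describes: train an answer classifier jointly on polar and non-polar samples, compare it against a non-polar-only upper bound, and evaluate both on the non-polar split. The fused visual-text features, answer vocabulary, and classifier choice are all assumptions made for demonstration; the actual model and features used in the paper are not specified here.

```python
# Minimal sketch of the joint-training vs. upper-bound comparison described
# in the abstract. Features and labels are synthetic stand-ins for fused
# visual-text embeddings and VQA answers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_subset(n, answers, dim=256):
    """Stand-in for fused visual-text embeddings paired with answer labels."""
    X = rng.normal(size=(n, dim))
    y = rng.choice(answers, size=n)
    return X, y

# Hypothetical polar ("yes"/"no") and non-polar subsets, mirroring the
# roughly 38% / 62% split reported for VQA 2.0.
X_p, y_p = make_subset(3800, ["yes", "no"])
X_np, y_np = make_subset(6200, [f"answer_{i}" for i in range(20)])

# Joint training over the unbalanced mix of polar and non-polar samples
# (the setting whose confounding effects the paper measures).
clf_joint = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_p, X_np]), np.concatenate([y_p, y_np]))

# Upper bound: non-polar questions trained and evaluated in isolation,
# i.e. with the polar over-representation excluded from training.
clf_np_only = LogisticRegression(max_iter=1000).fit(X_np, y_np)

# Compare non-polar accuracy under the two regimes; the paper reports that
# joint training shows no counterproductive effect relative to this bound.
print("joint training, non-polar accuracy :", clf_joint.score(X_np, y_np))
print("non-polar only, non-polar accuracy :", clf_np_only.score(X_np, y_np))
```

On real data, the cross-over variant of this comparison would train the classifier (or a probe over the intermediate embeddings) on one subset and evaluate it on the other, which is how the alignment of the polar and non-polar feature spaces is assessed.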


