AdaVQA: Overcoming Language Priors with Adapted Margin Cosine Loss

05/05/2021
by Yangyang Guo, et al.

A number of studies have pointed out that current Visual Question Answering (VQA) models are severely affected by the language prior problem, i.e., they blindly make predictions based on language shortcuts rather than the visual content. Some efforts have been devoted to overcoming this issue with carefully designed models. However, no prior work addresses it from the angle of answer feature space learning, despite the fact that existing VQA methods all cast VQA as a classification task. Motivated by this observation, we attempt to tackle the language prior problem from the viewpoint of feature space learning. To this end, we design an adapted margin cosine loss that properly discriminates the feature spaces of frequent and sparse answers under each question type. As a result, the limited patterns within the language modality are largely reduced, and our method therefore introduces fewer language priors. We apply this loss function to several baseline models and evaluate its effectiveness on two VQA-CP benchmarks. Experimental results demonstrate that the adapted margin cosine loss greatly enhances the baselines, with an absolute performance gain of 15% on average, strongly verifying the potential of tackling the language prior problem in VQA from the angle of answer feature space learning.
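To make the idea concrete, below is a minimal PyTorch sketch of what a frequency-adapted margin cosine loss could look like. This is not the authors' code: the class name AdaptedMarginCosineLoss, the scale s, the cap max_margin, and the linear frequency-to-margin schedule are all illustrative assumptions. The core mechanism is a CosFace-style cosine classifier whose per-answer margin grows with how often that answer appears in training, so that frequently seen answers must be separated by a larger margin and shortcut predictions are penalized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptedMarginCosineLoss(nn.Module):
    """Sketch of a margin cosine loss with per-answer adaptive margins.

    Assumption: the margin of each answer is proportional to its training
    frequency (ideally computed per question type), so frequent answers
    are pushed further from the decision boundary. The scale `s` and the
    margin schedule are illustrative choices, not the paper's exact values.
    """

    def __init__(self, feat_dim, num_answers, answer_freq, s=15.0, max_margin=0.5):
        super().__init__()
        # Learnable answer "prototype" vectors, one per answer class.
        self.weight = nn.Parameter(torch.randn(num_answers, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = s
        # Per-answer margin: normalize frequencies into [0, max_margin].
        freq = torch.as_tensor(answer_freq, dtype=torch.float)
        margin = max_margin * freq / freq.max().clamp(min=1.0)
        self.register_buffer("margin", margin)

    def forward(self, features, labels):
        # Cosine similarity between fused features and answer prototypes.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the adaptive margin only at the ground-truth answer.
        one_hot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (cos - one_hot * self.margin[labels].unsqueeze(1))
        return F.cross_entropy(logits, labels)

# Usage example: 3 answers whose training frequencies are [900, 90, 10].
loss_fn = AdaptedMarginCosineLoss(feat_dim=512, num_answers=3,
                                  answer_freq=[900, 90, 10])
feats = torch.randn(4, 512)           # fused question-image features
labels = torch.tensor([0, 1, 2, 0])   # ground-truth answer indices
loss = loss_fn(feats, labels)
```

Because the margin is subtracted only from the ground-truth logit, frequent answers cannot win on prior popularity alone; the fused feature must be genuinely close to the correct answer prototype, which is the feature-space effect the abstract describes.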


Related research

Loss-rescaling VQA: Revisiting Language Prior Problem from a Class-imbalance View (10/30/2020)
Recent studies have pointed out that many well-developed Visual Question...

Overcoming Language Priors in Visual Question Answering via Distinguishing Superficially Similar Instances (09/18/2022)
Despite the great progress of Visual Question Answering (VQA), current V...

Quantifying and Alleviating the Language Prior Problem in Visual Question Answering (05/13/2019)
Benefiting from the advancement of computer vision, natural language pro...

Estimating semantic structure for the VQA answer space (06/10/2020)
Since its appearance, Visual Question Answering (VQA, i.e. answering a q...

Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem (07/24/2022)
Several studies have recently pointed that existing Visual Question Answ...

An Empirical Study on the Language Modal in Visual Question Answering (05/17/2023)
Generalization beyond in-domain experience to out-of-distribution data i...
