Question Relevance in VQA: Identifying Non-Visual and False-Premise Questions

06/21/2016
by   Arijit Ray, et al.

Visual Question Answering (VQA) is the task of answering natural-language questions about images. We introduce the novel problem of determining the relevance of questions to images in VQA. Current VQA models do not reason about whether a question is even related to the given image (e.g., "What is the capital of Argentina?") or whether it requires information from external resources to answer correctly. This can break the continuity of a dialogue in human-machine interaction. Our approaches for determining relevance are composed of two stages. Given an image and a question, (1) we first determine whether the question is visual or not; (2) if visual, we determine whether the question is relevant to the given image or not. Our approaches, based on LSTM-RNNs, VQA model uncertainty, and caption-question similarity, outperform strong baselines on both relevance tasks. We also present human studies showing that VQA models augmented with such question relevance reasoning are perceived as more intelligent, reasonable, and human-like.
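The caption-question similarity idea from the abstract can be illustrated with a minimal sketch. This is not the paper's actual method (which relies on learned models); it uses simple bag-of-words cosine similarity between a generated image caption and the question as a stand-in for learned caption-question similarity, and the `threshold` value is a hypothetical choice for illustration only.

```python
from collections import Counter
import math

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words count vectors of two strings."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_question_relevant(caption: str, question: str, threshold: float = 0.1) -> bool:
    """Flag a (visual) question as relevant to the image if its word overlap
    with the image caption exceeds a threshold (illustrative proxy only)."""
    return cosine_similarity(caption, question) >= threshold

caption = "a man riding a horse on a beach"
print(is_question_relevant(caption, "what color is the horse"))        # shares "horse"
print(is_question_relevant(caption, "what is the capital of argentina"))  # no overlap
```

In the paper's setting this second stage would only run on questions already classified as visual by the first stage; a real system would replace the bag-of-words overlap with similarity computed from learned representations.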


Related research

05/01/2017 - The Promise of Premise: Harnessing Question Premises in Visual Question Answering
In this paper, we make a simple observation that questions about images ...

07/23/2018 - Question Relevance in Visual Question Answering
Free-form and open-ended Visual Question Answering systems solve the pro...

02/18/2023 - Bridge Damage Cause Estimation Using Multiple Images Based on Visual Question Answering
In this paper, a bridge member damage cause estimation framework is prop...

12/01/2020 - Open-Ended Multi-Modal Relational Reason for Video Question Answering
People with visual impairments urgently need helps, not only on the basi...

10/15/2021 - Guiding Visual Question Generation
In traditional Visual Question Generation (VQG), most images have multip...

10/20/2020 - SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency
Recent research in Visual Question Answering (VQA) has revealed state-of...

05/13/2019 - Quantifying and Alleviating the Language Prior Problem in Visual Question Answering
Benefiting from the advancement of computer vision, natural language pro...
