Differential Attention for Visual Question Answering

04/01/2018
by Badri Patro, et al.

In this paper, we aim to answer questions about images when provided with a training set of images and question-answer pairs. A number of methods have addressed this problem with image-based attention, which focuses on a specific part of the image while answering the question, much as humans do. However, the regions that previous systems attend to are poorly correlated with the regions humans attend to, and this drawback limits their accuracy. We propose an exemplar-based method to address it: we obtain one or more supporting and opposing exemplars and use them to compute a differential attention region. This differential attention is closer to human attention than other image-based attention methods, and it also improves answer accuracy. We evaluate the method on challenging benchmark datasets, where it outperforms other image-based attention methods and is competitive with state-of-the-art methods that attend to both the image and the question.
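As a rough illustration of the idea (not the authors' exact architecture), the sketch below shows how a question-conditioned attention map over image regions could be sharpened with a supporting and an opposing exemplar: scores are reinforced where the supporting exemplar attends and suppressed where the opposing exemplar attends. All module names, feature shapes, and the simplifying assumption that the exemplars share the target image's region grid are hypothetical; the paper should be consulted for the actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentialAttention(nn.Module):
    """Minimal sketch of exemplar-based differential attention.

    Assumed (hypothetical) inputs: region features of shape
    (batch, regions, dim) for the target image and both exemplars,
    and a question embedding of shape (batch, dim).
    """

    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.img_proj = nn.Linear(dim, hidden)
        self.ques_proj = nn.Linear(dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def attend(self, regions: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # Unnormalised attention scores over image regions,
        # conditioned on the question embedding.
        joint = torch.tanh(self.img_proj(regions) + self.ques_proj(question).unsqueeze(1))
        return self.score(joint).squeeze(-1)  # (batch, regions)

    def forward(self, target, supporting, opposing, question):
        s_target = self.attend(target, question)
        s_support = self.attend(supporting, question)
        s_oppose = self.attend(opposing, question)
        # Differential attention: boost regions the supporting exemplar
        # attends to, suppress regions the opposing exemplar attends to.
        weights = F.softmax(s_target + s_support - s_oppose, dim=1)
        # Question-aware, exemplar-refined image summary for answering.
        return (weights.unsqueeze(-1) * target).sum(dim=1)


# Toy usage with random features (shapes are illustrative only).
if __name__ == "__main__":
    model = DifferentialAttention(dim=2048)
    imgs = [torch.randn(2, 196, 2048) for _ in range(3)]  # target, supporting, opposing
    question = torch.randn(2, 2048)
    attended = model(*imgs, question)
    print(attended.shape)  # torch.Size([2, 2048])
```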


Related research

06/17/2016 | Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions?
We conduct large-scale studies on `human attention' in Visual Question A...

06/17/2016 | FVQA: Fact-based Visual Question Answering
Visual Question Answering (VQA) has attracted a lot of attention in both...

06/29/2023 | Answer Mining from a Pool of Images: Towards Retrieval-Based Visual Question Answering
We study visual question answering in a setting where the answer has to ...

11/07/2015 | Stacked Attention Networks for Image Question Answering
This paper presents stacked attention networks (SANs) that learn to answ...

12/08/2018 | Semantically-Aware Attentive Neural Embeddings for Image-based Visual Localization
We present a novel method for fusing appearance and semantic information...

08/17/2019 | U-CAM: Visual Explanation using Uncertainty based Class Activation Maps
Understanding and explaining deep learning models is an imperative task....

10/07/2020 | Vision Skills Needed to Answer Visual Questions
The task of answering questions about images has garnered attention as a...
