Structured Attentions for Visual Question Answering

08/07/2017
by Chen Zhu, et al.

Visual attention, which assigns weights to image regions according to their relevance to a question, is considered an indispensable component by most Visual Question Answering models. Although questions may involve complex relations among multiple regions, few attention models can effectively encode such cross-region relations. In this paper, we demonstrate the importance of encoding such relations by showing the limited effective receptive field of ResNet on two datasets, and we propose to model visual attention as a multivariate distribution over a grid-structured Conditional Random Field on image regions. We demonstrate how to convert the iterative inference algorithms, Mean Field and Loopy Belief Propagation, into recurrent layers of an end-to-end neural network. We empirically evaluate our model on three datasets, on which it surpasses the best baseline model of the newly released CLEVR dataset by 9.5%. Our code is available at https://github.com/zhuchen03/vqa-sva.
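
As a rough illustration of what "unrolling iterative inference as recurrent layers" can look like, here is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation (see the linked repository for that): it simplifies the grid CRF to a categorical attention distribution over regions refined by 4-connected neighbor messages, in the spirit of mean-field updates, and names such as GridCRFAttention, n_iters, and the 14x14 grid size are illustrative assumptions.

```python
# Hypothetical sketch: mean-field-style updates on a 4-connected grid,
# unrolled for a fixed number of iterations as recurrent layers.
# Not the authors' code; a simplified illustration of the general idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridCRFAttention(nn.Module):
    def __init__(self, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        # Messages come from the 4-connected neighborhood: a 3x3 kernel
        # with zero center sums the marginals of the four neighbors.
        kernel = torch.tensor([[0., 1., 0.],
                               [1., 0., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)
        self.register_buffer("neighbor_kernel", kernel)
        # Scalar pairwise compatibility weight (learned end-to-end).
        self.pairwise_weight = nn.Parameter(torch.tensor(0.5))

    def forward(self, unary):
        # unary: (B, H, W) question-conditioned relevance scores per region.
        u = unary.unsqueeze(1)                                  # (B, 1, H, W)
        q = F.softmax(u.flatten(2), dim=-1).view_as(u)          # init marginals
        for _ in range(self.n_iters):
            # Aggregate current marginals from the four neighbors.
            msg = F.conv2d(q, self.neighbor_kernel, padding=1)
            # Combine unary scores with weighted neighbor messages,
            # then renormalize over the whole grid.
            logits = u + self.pairwise_weight * msg
            q = F.softmax(logits.flatten(2), dim=-1).view_as(u)
        return q.squeeze(1)   # (B, H, W) attention weights, summing to 1

if __name__ == "__main__":
    unary = torch.randn(2, 14, 14)               # e.g. a 14x14 ResNet feature grid
    attn = GridCRFAttention(n_iters=3)(unary)
    print(attn.shape, attn.flatten(1).sum(-1))   # (2, 14, 14); each map sums to ~1
```

Because the number of iterations is fixed, each update is just another differentiable layer, so the whole refinement can be trained end-to-end with the rest of the network, which is the property the paper exploits for Mean Field and Loopy Belief Propagation.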


Related research

11/18/2017  Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering
Recently, the Visual Question Answering (VQA) task has gained increasing...

02/03/2017  Structured Attention Networks
Attention networks have proven to be an effective approach for embedding...

02/25/2019  MUREL: Multimodal Relational Reasoning for Visual Question Answering
Multimodal attentional networks are currently state-of-the-art models fo...

04/05/2022  SwapMix: Diagnosing and Regularizing the Over-Reliance on Visual Context in Visual Question Answering
While Visual Question Answering (VQA) has progressed rapidly, previous w...

06/25/2019  Deep Modular Co-Attention Networks for Visual Question Answering
Visual Question Answering (VQA) requires a fine-grained and simultaneous...

01/10/2020  In Defense of Grid Features for Visual Question Answering
Popularized as 'bottom-up' attention, bounding box (or region) based vis...

11/27/2020  Point and Ask: Incorporating Pointing into Visual Question Answering
Visual Question Answering (VQA) has become one of the key benchmarks of ...
