Compact Tensor Pooling for Visual Question Answering

06/20/2017
by   Yang Shi, et al.

Performing high-level cognitive tasks requires the integration of feature maps with drastically different structure. In Visual Question Answering (VQA), image descriptors have spatial structure, while lexical inputs inherently follow a temporal sequence. The recently proposed Multimodal Compact Bilinear pooling (MCB) forms the outer product, via count-sketch approximation, of the visual and textual representations at each spatial location. While this procedure preserves spatial information locally, outer products are taken independently for each fiber of the activation tensor and therefore do not capture spatial context. In this work, we introduce the multi-dimensional sketch (MD-sketch), a novel extension of count sketch to tensors. Using this new formulation, we propose Multimodal Compact Tensor Pooling (MCT) to fully exploit global spatial context during bilinear pooling. In contrast to MCB, our approach preserves spatial context by directly convolving the MD-sketch of the visual feature tensor with the textual feature vector using a higher-order FFT. Furthermore, we apply MCT incrementally at each step of the question embedding and accumulate the multimodal vectors with a second LSTM layer before the final answer is chosen.
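The count-sketch trick underlying MCB (and extended to tensors by the MD-sketch) can be illustrated with a small sketch. The following is a minimal, hypothetical NumPy example, not the paper's implementation: it sketches a visual feature fiber and a question embedding separately, then circularly convolves the two sketches in the frequency domain, which yields exactly the count sketch of their outer product without ever forming that outer product. Dimensions, hash functions, and variable names are illustrative assumptions.

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project vector x into a d-dimensional count sketch.

    h maps each input index to a random output bucket; s holds random signs.
    """
    y = np.zeros(d)
    for i, xi in enumerate(x):
        y[h[i]] += s[i] * xi
    return y

rng = np.random.default_rng(0)
n1, n2, d = 64, 32, 1024        # feature dims and sketch size (illustrative)
v = rng.standard_normal(n1)     # e.g. one spatial fiber of the visual tensor
q = rng.standard_normal(n2)     # e.g. the question embedding

# Independent hash buckets and signs for each modality.
h1, s1 = rng.integers(0, d, n1), rng.choice([-1.0, 1.0], n1)
h2, s2 = rng.integers(0, d, n2), rng.choice([-1.0, 1.0], n2)

# Sketch each modality, then convolve in the frequency domain:
# the circular convolution of the two sketches equals the count sketch
# of the outer product v q^T under the combined hash
# (h1[i] + h2[j]) mod d with sign s1[i] * s2[j].
phi_v = count_sketch(v, h1, s1, d)
phi_q = count_sketch(q, h2, s2, d)
pooled = np.real(np.fft.ifft(np.fft.fft(phi_v) * np.fft.fft(phi_q)))
```

MCB applies this per spatial location, whereas the proposed MCT sketches the whole visual tensor at once with the MD-sketch before convolving, so the pooled vector reflects global spatial context rather than one fiber at a time.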

Related research

06/06/2016
Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
Modeling textual or visual information with vector representations train...

01/31/2019
Multi-dimensional Tensor Sketch
Sketching refers to a class of randomized dimensionality reduction metho...

10/14/2016
Hadamard Product for Low-rank Bilinear Pooling
Bilinear models provide rich representations compared with linear models...

09/26/2019
Compact Trilinear Interaction for Visual Question Answering
In Visual Question Answering (VQA), answers have a great correlation wit...

04/06/2018
Question Type Guided Attention in Visual Question Answering
Visual Question Answering (VQA) requires integration of feature maps wit...

03/24/2020
Modeling Cross-view Interaction Consistency for Paired Egocentric Interaction Recognition
With the development of Augmented Reality (AR), egocentric action recogn...

05/18/2017
MUTAN: Multimodal Tucker Fusion for Visual Question Answering
Bilinear models provide an appealing framework for mixing and merging in...
