Focal Visual-Text Attention for Visual Question Answering

06/05/2018
by Junwei Liang, et al.

Recent insights on language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering over multimedia collections such as personal photos, we have to reason over whole collections containing sequences of photos or videos. When answering questions from a large collection, a natural problem is to identify the snippets that support the answer. In this paper, we describe a novel neural network called the Focal Visual-Text Attention network (FVTA) for collective reasoning in visual question answering, where both visual and text sequence information, such as images and text metadata, is present. FVTA introduces an end-to-end approach that uses a hierarchical process to dynamically determine which media and which time steps to focus on in the sequential data when answering the question. FVTA not only answers questions well but also provides the justifications on which its answers are based. FVTA achieves state-of-the-art performance on the MemexQA dataset and competitive results on the MovieQA dataset.
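The abstract describes attention that jointly selects which media sequence and which time step to focus on, conditioned on the question. A minimal sketch of such joint (sequence, time) attention is shown below; the function name, dot-product scoring, and shapes are illustrative assumptions, not the authors' actual FVTA architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a flat array.
    e = np.exp(x - x.max())
    return e / e.sum()

def focal_attention(question, features):
    """Toy joint attention over (sequence, time) -- an illustrative sketch.

    question: (d,) question embedding
    features: (n_seq, n_time, d) visual/text features, one row per
              media sequence, one column per time step
    Returns the attended context vector (d,) and the (n_seq, n_time)
    attention map, which shows where the model "looked".
    """
    # Dot-product relevance score for every (sequence, time) cell.
    scores = features @ question                     # (n_seq, n_time)
    # Normalize jointly over all cells, so sequence and time compete.
    weights = softmax(scores.ravel()).reshape(scores.shape)
    # Weighted sum of features gives the context used to answer.
    context = (weights[..., None] * features).sum(axis=(0, 1))
    return context, weights
```

The attention map returned here plays the role of the justification mentioned in the abstract: the highest-weight cells indicate which media and time steps supported the answer.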

Related research

01/12/2020  Focal Visual-Text Attention for Memex Question Answering
11/14/2022  Multi-VQG: Generating Engaging Questions for Multiple Images
01/07/2016  Learning to Compose Neural Networks for Question Answering
12/27/2021  Multi-Image Visual Question Answering
10/07/2020  Vision Skills Needed to Answer Visual Questions
03/05/2023  VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning
03/23/2017  Recurrent and Contextual Models for Visual Question Answering
