Dual Attention Networks for Multimodal Reasoning and Matching

11/02/2016
by Hyeonseob Nam, et al.

We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching.
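The multi-step, mutually steering attention described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual networks: it assumes plain dot-product attention and an additive memory update, whereas DANs use learned projections and nonlinearities; all variable names and dimensions here are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(features, memory):
    """Soft attention: score each feature vector against the shared
    memory, then return the attention-weighted sum of features."""
    scores = features @ memory             # (N,) relevance scores
    weights = softmax(scores)              # attention distribution
    return weights @ features, weights     # context vector, weights

rng = np.random.default_rng(0)
D = 8
regions = rng.normal(size=(10, D))   # stand-in for image-region features
words = rng.normal(size=(6, D))      # stand-in for word features
memory = rng.normal(size=D)          # shared memory vector

# K reasoning steps: each modality attends given the current memory,
# then the memory absorbs both context vectors, so the visual and
# textual attentions steer each other on the next step.
for _ in range(2):
    v_ctx, v_weights = attend(regions, memory)
    t_ctx, t_weights = attend(words, memory)
    memory = memory + v_ctx + t_ctx
```

For the reasoning model (e.g. VQA), the final memory would feed an answer classifier; for the matching model, per-modality context vectors would instead be compared (e.g. by inner product) to score image-sentence similarity.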


