Vision-Dialog Navigation by Exploring Cross-modal Memory

03/15/2020
by Yi Zhu, et al.

Vision-dialog navigation, posed as a new holy-grail task in the vision-and-language discipline, targets learning an agent endowed with the capability of constantly conversing for help in natural language and navigating according to human responses. Beyond the common challenges faced in vision-language navigation, vision-dialog navigation also requires handling the language intentions of a series of questions about the temporal context of the dialog history, and co-reasoning over both dialogs and visual scenes. In this paper, we propose the Cross-modal Memory Network (CMN) for remembering and understanding the rich information relevant to historical navigation actions. Our CMN consists of two memory modules: the language memory module (L-mem) and the visual memory module (V-mem). Specifically, L-mem learns latent relationships between the current language interaction and the dialog history by employing a multi-head attention mechanism. V-mem learns to associate the current visual views with the cross-modal memory of previous navigation actions. The cross-modal memory is generated via a vision-to-language attention and a language-to-vision attention. Benefiting from the collaborative learning of the L-mem and the V-mem, our CMN is able to exploit the memory of past navigation decisions to inform the decision at the current step. Experiments on the CVDN dataset show that our CMN outperforms the previous state-of-the-art model by a significant margin in both seen and unseen environments.
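To make the attention flow described above concrete, below is a minimal PyTorch sketch of the two memory modules. It is an illustrative reconstruction from the abstract alone, not the authors' released implementation: the class name CrossModalMemorySketch, the tensor shapes, the additive fusion of the two cross-modal attentions, and the dot-product action scoring are all assumptions.

import torch
import torch.nn as nn

class CrossModalMemorySketch(nn.Module):
    # Hypothetical sketch of CMN's L-mem and V-mem; shapes and names assumed.

    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        # L-mem: the current dialog turn attends over the dialog history.
        self.l_mem = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # V-mem: cross-modal memory built from a vision-to-language and a
        # language-to-vision attention (the direction of each is our reading).
        self.v2l = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.l2v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.action_head = nn.Linear(d_model, d_model)

    def forward(self, cur_turn, history, views):
        # cur_turn: (B, 1, D) embedding of the current question/answer pair
        # history:  (B, T, D) embeddings of the T previous dialog turns
        # views:    (B, K, D) features of the K candidate view directions
        lang_mem, _ = self.l_mem(cur_turn, history, history)   # (B, 1, D)
        v_ctx, _ = self.v2l(views, lang_mem, lang_mem)         # (B, K, D)
        l_ctx, _ = self.l2v(lang_mem, views, views)            # (B, 1, D)
        fused = v_ctx + l_ctx                                  # broadcasts to (B, K, D)
        # Score each candidate view against the fused cross-modal memory.
        return (self.action_head(fused) * views).sum(-1)      # (B, K) action logits

For example, with a batch of 2 episodes, 4 past dialog turns, and 36 panoramic view candidates at 512 dimensions, model(torch.randn(2, 1, 512), torch.randn(2, 4, 512), torch.randn(2, 36, 512)) returns a (2, 36) tensor of per-direction action logits.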


Related research

04/15/2022
Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning
Visual Dialog is a challenging vision-language task since the visual dia...

03/10/2022
Cross-modal Map Learning for Vision and Language Navigation
We consider the problem of Vision-and-Language Navigation (VLN). The maj...

07/10/2019
Vision-and-Dialog Navigation
Robots navigating in human environments should use language to ask for a...

05/26/2023
GeoVLN: Learning Geometry-Enhanced Visual Representation with Slot Attention for Vision-and-Language Navigation
Most existing works solving Room-to-Room VLN problem only utilize RGB im...

09/13/2019
Co-Attentive Cross-Modal Deep Learning for Medical Evidence Synthesis and Decision Making
Modern medicine requires generalised approaches to the synthesis and int...

06/17/2022
Local Slot Attention for Vision-and-Language Navigation
Vision-and-language navigation (VLN), a frontier study aiming to pave th...

07/02/2022
Enabling Harmonious Human-Machine Interaction with Visual-Context Augmented Dialogue System: A Review
The intelligent dialogue system, aiming at communicating with humans har...
