History Aware Multimodal Transformer for Vision-and-Language Navigation

10/25/2021
by Shizhe Chen, et al.

Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow instructions and navigate in real scenes. To remember previously visited locations and actions taken, most approaches to VLN implement memory using recurrent states. Instead, we introduce the History Aware Multimodal Transformer (HAMT) to incorporate a long-horizon history into multimodal decision making. HAMT efficiently encodes all past panoramic observations via a hierarchical vision transformer (ViT), which first encodes individual images with ViT, then models spatial relations between images within a panoramic observation, and finally captures temporal relations between panoramas in the history. It then jointly combines text, history, and the current observation to predict the next action. We first train HAMT end-to-end with several proxy tasks, including single-step action prediction and spatial relation prediction, and then use reinforcement learning to further improve the navigation policy. HAMT achieves a new state of the art on a broad range of VLN tasks, including VLN with fine-grained instructions (R2R, RxR), high-level instructions (R2R-Last, REVERIE), dialogs (CVDN), as well as long-horizon VLN (R4R, R2R-Back). We demonstrate that HAMT is particularly effective for navigation tasks with longer trajectories.
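
For concreteness, below is a minimal PyTorch sketch of the hierarchical history encoding and cross-modal action prediction described in the abstract. It is reconstructed only from the description above: the module names, layer counts, feature dimensions, the mean pooling over views, and the use of nn.TransformerEncoder are illustrative assumptions, not the authors' implementation; details such as positional and orientation embeddings, masking, the proxy-task heads, and ViT fine-tuning are omitted.

```python
# Sketch of HAMT-style hierarchical history encoding and action prediction.
# Based only on the abstract above; names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class HierarchicalHistoryEncoder(nn.Module):
    """Encodes a history of panoramic observations in three stages:
    1) per-image features (stand-in for a ViT backbone),
    2) spatial relations among the views inside each panorama,
    3) temporal relations among panoramas along the trajectory."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        # Stage 1: placeholder for a pretrained ViT; here we just project
        # precomputed per-view features.
        self.view_proj = nn.Linear(d_model, d_model)
        # Stage 2: transformer over the views of one panorama.
        spatial_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spatial_enc = nn.TransformerEncoder(spatial_layer, num_layers=2)
        # Stage 3: transformer over the sequence of panoramas (the history).
        temporal_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal_enc = nn.TransformerEncoder(temporal_layer, num_layers=2)

    def forward(self, view_feats):
        # view_feats: (batch, history_len, n_views, d_model) per-view ViT features
        b, t, v, d = view_feats.shape
        x = self.view_proj(view_feats)                    # stage 1: per-image encoding
        x = self.spatial_enc(x.reshape(b * t, v, d))      # stage 2: spatial relations
        pano = x.mean(dim=1).reshape(b, t, d)             # pool views to one token per panorama
        hist = self.temporal_enc(pano)                    # stage 3: temporal relations
        return hist                                       # (batch, history_len, d_model)


class HAMTPolicySketch(nn.Module):
    """Fuses instruction tokens, encoded history, and the current observation,
    then scores candidate actions (navigable views)."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.history = HierarchicalHistoryEncoder(d_model, n_heads)
        fusion_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=4)
        self.action_head = nn.Linear(d_model, 1)

    def forward(self, text_emb, hist_view_feats, cand_feats):
        # text_emb:        (batch, n_words, d_model) instruction embeddings
        # hist_view_feats: (batch, history_len, n_views, d_model) past panoramas
        # cand_feats:      (batch, n_cands, d_model) current candidate views
        hist = self.history(hist_view_feats)
        tokens = torch.cat([text_emb, hist, cand_feats], dim=1)
        fused = self.fusion(tokens)
        # Score only the candidate-action positions.
        n_cands = cand_feats.size(1)
        logits = self.action_head(fused[:, -n_cands:]).squeeze(-1)
        return logits  # (batch, n_cands); softmax gives the next-action distribution
```

In use, per-view features of each past panorama would be fed as hist_view_feats, and the logits over candidate views would be turned into an action distribution with a softmax; per the abstract, such a policy would first be trained end-to-end with proxy tasks and then refined with reinforcement learning.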


Related research

Episodic Transformer for Vision-and-Language Navigation (05/13/2021)
Interaction and navigation defined by natural language instructions in d...

Scene Memory Transformer for Embodied Agents in Long-Horizon Tasks (03/09/2019)
Many robotic applications require the agent to perform long-horizon task...

Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation (11/10/2021)
Vision-and-Language Navigation (VLN) is a task that an agent is required...

Learning to Act with Affordance-Aware Multimodal Neural SLAM (01/24/2022)
Recent years have witnessed an emerging paradigm shift toward embodied a...

HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation (03/22/2022)
Pre-training has been adopted in a few of recent works for Vision-and-La...

Breaking Down the Task: A Unit-Grained Hybrid Training Framework for Vision and Language Decision Making (07/16/2023)
Vision language decision making (VLDM) is a challenging multimodal task....

End-to-End Partially Observable Visual Navigation in a Diverse Environment (09/16/2021)
How can a robot navigate successfully in a rich and diverse environment,...
