Sub-Instruction Aware Vision-and-Language Navigation

04/06/2020
by Yicong Hong, et al.

Vision-and-language navigation requires an agent to navigate through a real 3D environment following a given natural language instruction. Despite significant advances, few previous works are able to fully exploit the strong correspondence between the visual and textual sequences. Meanwhile, due to the lack of intermediate supervision, the agent's progress at following each part of the instruction cannot be tracked during navigation. In this work, we focus on the granularity of the visual and language sequences as well as the trackability of agents through the completion of an instruction. We provide agents with fine-grained annotations during training and find that they follow the instruction better and have a higher chance of reaching the target at test time. We enrich a previous dataset with sub-instructions and their corresponding paths. To make use of this data, we propose effective sub-instruction attention and shifting modules that attend to and select a single sub-instruction at each time step. We implement our sub-instruction modules in four state-of-the-art agents, compare against their baseline models, and show that our proposed method improves the performance of all four agents.
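The abstract only describes the attention-and-shifting mechanism at a high level, so a minimal sketch may help. The PyTorch module below is an illustrative assumption, not the authors' implementation: names such as SubInstructionModule and shift_gate are invented, sub-instructions are assumed to be pre-encoded into per-word features, and the hard 0.5 threshold merely stands in for whatever shifting criterion an agent learns.

```python
# Minimal sketch of a sub-instruction attention and shifting step,
# assuming sub-instructions are already encoded into word features.
# All names here are illustrative, not the paper's code.
import torch
import torch.nn as nn

class SubInstructionModule(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Scores how well the agent state matches each word of the
        # currently selected sub-instruction.
        self.attn = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Predicts whether to shift attention to the next sub-instruction.
        self.shift_gate = nn.Linear(2 * hidden_dim, 1)

    def forward(self, state, sub_instr_words, index):
        # state:           (batch, hidden)                  agent hidden state
        # sub_instr_words: (batch, n_subs, n_words, hidden)  encoded sub-instructions
        # index:           (batch,)                          active sub-instruction id
        batch, n_subs = sub_instr_words.shape[:2]
        # Keep only the word features of the active sub-instruction.
        words = sub_instr_words[torch.arange(batch), index]   # (batch, n_words, hidden)
        # Soft attention over the words of that single sub-instruction.
        scores = torch.bmm(words, self.attn(state).unsqueeze(2)).squeeze(2)
        alpha = torch.softmax(scores, dim=1)                  # (batch, n_words)
        context = torch.bmm(alpha.unsqueeze(1), words).squeeze(1)
        # Shifting module: gate on (state, context) to decide whether the
        # current sub-instruction is complete.
        shift = torch.sigmoid(self.shift_gate(torch.cat([state, context], dim=1)))
        next_index = torch.where(shift.squeeze(1) > 0.5, index + 1, index)
        next_index = next_index.clamp(max=n_subs - 1)
        return context, next_index, shift
```

In this sketch the returned context vector would feed the agent's action predictor at each time step, and the shift probability could be supervised with the sub-instruction path annotations the paper provides; how each of the four agents integrates these modules is a detail of the full paper, not shown here.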

Related research

12/08/2021
Contrastive Instruction-Trajectory Learning for Vision-Language Navigation
The vision-language navigation (VLN) task requires an agent to reach a t...

09/30/2021
Language-Aligned Waypoint (LAW) Supervision for Vision-and-Language Navigation in Continuous Environments
In the Vision-and-Language Navigation (VLN) task an embodied agent navig...

03/02/2023
MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) aims to develop intelligent agents ...

04/30/2020
Improving Vision-and-Language Navigation with Image-Text Pairs from the Web
Following a navigation instruction such as 'Walk down the stairs and sto...

03/28/2023
KERM: Knowledge Enhanced Reasoning for Vision-and-Language Navigation
Vision-and-language navigation (VLN) is the task to enable an embodied a...

11/07/2022
Prompter: Utilizing Large Language Model Prompting for a Data Efficient Embodied Instruction Following
Embodied Instruction Following (EIF) studies how mobile manipulator robo...

05/19/2023
Multimodal Web Navigation with Instruction-Finetuned Foundation Models
The progress of autonomous web navigation has been hindered by the depen...
