MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation

03/02/2023
by Zongtao He, et al.

Vision-and-Language Navigation (VLN) aims to develop intelligent agents that navigate unseen environments using only language and vision supervision. In the recently proposed continuous setting (continuous VLN), the agent acts in a free 3D space and faces tougher challenges such as real-time execution, complex instruction understanding, and long action-sequence prediction. For better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, the Multi-Level Attention Network (MLANet). The first step of MLANet is to generate sub-instructions efficiently: we design a Fast Sub-instruction Algorithm (FSA) that segments the raw instruction into sub-instructions, and we use it to build a new sub-instruction dataset named "FSASub". FSA is annotation-free and 70 times faster than the existing method, so it meets the real-time requirement of continuous VLN. To handle complex instruction understanding, MLANet needs a global perception of the instruction and observations. We propose a Multi-Level Attention (MLA) module that fuses vision, low-level semantics, and high-level semantics, producing features that carry a dynamic and global comprehension of the task. MLA also mitigates the adverse effects of noise words, ensuring a robust understanding of the instruction. To predict actions correctly over long trajectories, MLANet needs to focus, at every step, on the sub-instruction currently being executed. We propose a Peak Attention Loss (PAL) that improves the flexible and adaptive selection of the current sub-instruction. PAL benefits the navigation agent by concentrating its attention on local information, helping it predict the most appropriate actions. We train and test MLANet on the standard benchmark, where it outperforms the baselines by a significant margin.
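The abstract names three mechanisms: annotation-free sub-instruction segmentation (FSA), attention-based fusion of vision with word-level and sub-instruction-level semantics (MLA), and a loss that peaks attention on the currently executed sub-instruction (PAL). The paper's exact formulations are not given here, so the following PyTorch sketch is only a minimal illustration under stated assumptions: every name (fast_subinstruction_split, MultiLevelAttention, peak_attention_loss), the splitting rules, the dot-product attention form, and all tensor shapes are hypothetical, not the authors' implementation.

```python
# Minimal sketch of the three ideas named in the abstract. All names and
# design choices here are hypothetical illustrations, not the MLANet code.
import re
import torch
import torch.nn as nn
import torch.nn.functional as F


def fast_subinstruction_split(instruction: str) -> list[str]:
    """Annotation-free segmentation in the spirit of FSA (assumed rules:
    split on punctuation and common sequencing cue words)."""
    parts = re.split(r"[.;,]|\band\b|\bthen\b", instruction.lower())
    return [p.strip() for p in parts if p.strip()]


class MultiLevelAttention(nn.Module):
    """Fuses a visual feature with low-level (word) and high-level
    (sub-instruction) text features via soft dot-product attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_word = nn.Linear(dim, dim)  # query for word-level attention
        self.q_sub = nn.Linear(dim, dim)   # query for sub-instruction attention
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, vis, words, subs):
        # vis:   (B, D)      pooled visual feature at the current step
        # words: (B, Lw, D)  word-level (low-level) instruction features
        # subs:  (B, Ls, D)  sub-instruction (high-level) features
        a_word = F.softmax(
            torch.einsum("bd,bld->bl", self.q_word(vis), words), dim=-1)
        word_ctx = torch.einsum("bl,bld->bd", a_word, words)
        a_sub = F.softmax(
            torch.einsum("bd,bld->bl", self.q_sub(vis), subs), dim=-1)
        sub_ctx = torch.einsum("bl,bld->bd", a_sub, subs)
        fused = self.fuse(torch.cat([vis, word_ctx, sub_ctx], dim=-1))
        return fused, a_sub  # a_sub is reused by the peak-attention loss


def peak_attention_loss(a_sub, target_idx):
    # a_sub:      (B, Ls) attention over sub-instructions at this step
    # target_idx: (B,)    index of the sub-instruction being executed
    # A cross-entropy-style term that pushes the attention distribution
    # to peak on the currently executed sub-instruction.
    return F.nll_loss(torch.log(a_sub + 1e-8), target_idx)
```

In this sketch, supervising a_sub with the index of the active sub-instruction is what would give the agent the local focus the abstract attributes to PAL, while the fused feature would feed the action predictor.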

Related research

04/06/2020 · Sub-Instruction Aware Vision-and-Language Navigation
Vision-and-language navigation requires an agent to navigate through a r...

02/18/2023 · VLN-Trans: Translator for the Vision and Language Navigation Agent
Language understanding is essential for the navigation agent to follow i...

08/18/2023 · Multi-Level Compositional Reasoning for Interactive Instruction Following
Robotic agents performing domestic chores by natural language directives...

10/05/2021 · Waypoint Models for Instruction-guided Navigation in Continuous Environments
Little inquiry has explicitly addressed the role of action spaces in lan...

10/18/2022 · ULN: Towards Underspecified Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) is a task to guide an embodied agen...

08/06/2023 · Language-based Photo Color Adjustment for Graphic Designs
Adjusting the photo color to associate with some design elements is an e...

08/09/2023 · Bird's-Eye-View Scene Graph for Vision-Language Navigation
Vision-language navigation (VLN), which entails an agent to navigate 3D ...
