Object-and-Action Aware Model for Visual Language Navigation

07/29/2020
by   Yuankai Qi, et al.

Vision-and-Language Navigation (VLN) is unique in that it requires turning relatively general natural-language instructions into robot agent actions on the basis of the visible environment. This requires extracting value from two very different types of natural-language information. The first is object descriptions (e.g., 'table', 'door'), each of which serves as a cue for the agent to determine the next action by locating the mentioned item in the environment; the second is action specifications (e.g., 'go straight', 'turn left'), which allow the robot to predict the next movement directly, without relying on visual perception. However, most existing methods pay little attention to distinguishing these two kinds of information during instruction encoding, and mix together the matching between textual object/action encodings and the visual perception/orientation features of candidate viewpoints. In this paper, we propose an Object-and-Action Aware Model (OAAM) that processes these two forms of natural-language instruction separately. This enables each process to flexibly match object-centered or action-centered instructions to its own counterpart: visual perception or action orientation. One side-issue caused by this solution, however, is that an object mentioned in an instruction may be observed in the direction of two or more candidate viewpoints, so the OAAM may not predict the viewpoint on the shortest path as the next action. To handle this problem, we design a simple but effective path loss that penalizes trajectories deviating from the ground-truth path. Experimental results demonstrate the effectiveness of the proposed model and path loss, and the superiority of their combination, achieving a 50% SPL score on the R2R dataset and a 40% CLS score on the R4R dataset in unseen environments, outperforming the previous state-of-the-art.
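
To make the dual-branch idea concrete, here is a minimal sketch of an object/action-aware scorer in PyTorch. All names (OAAMScorer, obj_query, branch_weight, and so on) are hypothetical illustrations of the mechanism described above, not the authors' released code: two separate attention modules summarize the instruction, the object summary is scored against candidate visual features, the action summary against candidate orientation features, and a learned per-step weight combines the two branch scores.

```python
# Hypothetical sketch of the dual-branch scoring behind OAAM,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OAAMScorer(nn.Module):
    def __init__(self, instr_dim, vis_dim, ori_dim, hidden_dim):
        super().__init__()
        # Separate attention queries, so object words and action words
        # are weighted differently when summarizing the instruction.
        self.obj_query = nn.Linear(hidden_dim, instr_dim)
        self.act_query = nn.Linear(hidden_dim, instr_dim)
        # Project each instruction summary into its matching feature space.
        self.obj_proj = nn.Linear(instr_dim, vis_dim)
        self.act_proj = nn.Linear(instr_dim, ori_dim)
        # Adaptive weight balancing the two branches at each step.
        self.branch_weight = nn.Linear(hidden_dim, 2)

    def attend(self, query, instr_feats):
        # instr_feats: (B, L, instr_dim); query: (B, instr_dim)
        scores = torch.bmm(instr_feats, query.unsqueeze(2)).squeeze(2)  # (B, L)
        alpha = F.softmax(scores, dim=1)
        return torch.bmm(alpha.unsqueeze(1), instr_feats).squeeze(1)    # (B, instr_dim)

    def forward(self, h_t, instr_feats, cand_vis, cand_ori):
        # h_t: agent state (B, hidden_dim)
        # cand_vis: (B, K, vis_dim) visual features of K candidate viewpoints
        # cand_ori: (B, K, ori_dim) orientation features of the same candidates
        obj_ctx = self.attend(self.obj_query(h_t), instr_feats)
        act_ctx = self.attend(self.act_query(h_t), instr_feats)
        obj_logit = torch.bmm(cand_vis, self.obj_proj(obj_ctx).unsqueeze(2)).squeeze(2)
        act_logit = torch.bmm(cand_ori, self.act_proj(act_ctx).unsqueeze(2)).squeeze(2)
        w = F.softmax(self.branch_weight(h_t), dim=1)  # (B, 2)
        # Combined per-candidate navigation logits.
        return w[:, 0:1] * obj_logit + w[:, 1:2] * act_logit  # (B, K)
```

The learned branch weight lets the agent lean on the action branch when the instruction says 'turn left' and on the object branch when it says 'walk to the table'.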
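
The path loss can be sketched in the same spirit. The version below is one plausible reading of the idea, assuming deviation is measured as graph distance from each visited viewpoint to the nearest viewpoint on the ground-truth path; the function names are hypothetical.

```python
# Hypothetical sketch of a path loss penalizing trajectories that deviate
# from the ground-truth path; an illustration of the idea, not the paper's
# exact formulation.
import torch

def path_loss(visited, gt_path, dist_fn):
    """visited: viewpoint ids along the agent's trajectory.
    gt_path: viewpoint ids on the ground-truth path.
    dist_fn(a, b): shortest-path distance between two viewpoints in the
    navigation graph (e.g., precomputed over the connectivity graph).
    """
    penalties = []
    for v in visited:
        # Deviation of this step from the ground-truth path.
        d = min(dist_fn(v, g) for g in gt_path)
        penalties.append(d)
    # Average deviation over the trajectory; zero iff every visited
    # viewpoint lies on the ground-truth path.
    return torch.tensor(penalties).float().mean()
```

In training, such a term would typically be added to the standard imitation or reinforcement objective with a weighting coefficient, steering the agent toward the shortest-path viewpoint when the mentioned object is visible from several candidates.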

