Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

11/20/2017
by Peter Anderson et al.

A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances in vision and language methods have yielded incredible progress in closely related areas. This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process similar to Visual Question Answering. Both tasks can be interpreted as visually-grounded sequence-to-sequence translation problems, and many of the same methods are applicable. To enable and encourage the application of vision and language methods to the problem of interpreting visually-grounded navigation instructions, we present the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery. Using this simulator, which can in future support a range of embodied vision and language tasks, we provide the first benchmark dataset for visually-grounded natural language navigation in real buildings: the Room-to-Room (R2R) dataset.
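The sequence-to-sequence framing implies a concrete episode structure: the environment issues a natural-language instruction together with a first-person visual observation, and the agent emits discrete navigation actions until it decides to stop. The sketch below illustrates that loop. Note that the `ToyNavEnv` class, its observation fields, and the action names are hypothetical stand-ins chosen for illustration; they are not the Matterport3D Simulator's actual API.

```python
# Minimal sketch of the instruction-conditioned navigation loop described
# in the abstract. Everything here (class names, observation fields,
# action set) is a hypothetical stand-in, not the real MatterSim interface.

import random

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]


class ToyNavEnv:
    """Hypothetical stand-in for one simulator episode over a viewpoint graph."""

    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        """Start an episode: return the instruction and the first observation."""
        self.steps = 0
        instruction = "Walk past the sofa and stop at the bedroom door."
        observation = {"viewpoint": "v0", "rgb": None, "heading": 0.0}
        return instruction, observation

    def step(self, action):
        """Apply one navigation action; return (observation, done)."""
        self.steps += 1
        done = action == "stop" or self.steps >= self.max_steps
        observation = {"viewpoint": f"v{self.steps}", "rgb": None, "heading": 0.0}
        return observation, done


def random_agent(instruction, observation, rng):
    """Placeholder policy: picks actions at random. A real agent would
    ground the instruction in the visual observation."""
    return rng.choice(ACTIONS)


if __name__ == "__main__":
    env = ToyNavEnv()
    rng = random.Random(1)
    instruction, obs = env.reset()
    trajectory = [obs["viewpoint"]]
    done = False
    while not done:
        action = random_agent(instruction, obs, rng)
        obs, done = env.step(action)
        trajectory.append(obs["viewpoint"])
    print("instruction:", instruction)
    print("trajectory:", " -> ".join(trajectory))
```

In a real system, `random_agent` would be replaced by a visually-grounded sequence-to-sequence model that encodes the instruction (for example, with a recurrent or attention-based encoder) and decodes one action per step conditioned on the current visual observation.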


Related research

12/03/2019
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
We present ALFRED (Action Learning From Realistic Environments and Direc...

10/28/2020
The Amazing Race TM: Robot Edition
State-of-the-art natural-language-driven autonomous-navigation systems g...

09/16/2020
Generative Language-Grounded Policy in Vision-and-Language Navigation with Bayes' Rule
Vision-and-language navigation (VLN) is a task in which an agent is embo...

10/07/2022
Learning a Visually Grounded Memory Assistant
We introduce a novel interface for large scale collection of human memor...

01/19/2021
A modular vision language navigation and manipulation framework for long horizon compositional tasks in indoor environment
In this paper we propose a new framework - MoViLan (Modular Vision and L...

08/09/2019
Transferable Representation Learning in Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) re...

05/29/2019
Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation
Advances in learning and representations have reinvigorated work that co...
