Touchdown: Natural Language Navigation and Spatial Reasoning in Visual Street Environments

11/29/2018
by Howard Chen, et al.

We study the problem of jointly reasoning about language and vision through a navigation and spatial reasoning task. We introduce the Touchdown task and dataset, where an agent must first follow navigation instructions in a real-life visual urban environment to a goal position, and then identify in the observed image a location described in natural language to find a hidden object. The data contains 9,326 examples of English instructions and spatial descriptions paired with demonstrations. We perform qualitative linguistic analysis, and show that the data displays richer use of spatial reasoning compared to related resources. Empirical analysis shows the data presents an open challenge to existing methods.
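The task has a natural two-stage structure: instruction following to reach the goal position, then spatial description resolution to locate the hidden object in the observed image. The sketch below is a minimal illustration of that pipeline, not the authors' code: the JSON-lines layout, the field names (navigation_text, sd_text), and the navigator and locator objects are hypothetical placeholders introduced only for illustration.

import json

def load_examples(path):
    # One example per line. The JSON-lines format and the field names used
    # below are hypothetical placeholders, not the official dataset schema.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def evaluate(example, navigator, locator):
    # Stage 1: follow the written navigation instructions through the
    # street-view environment to a final position.
    final_pano = navigator.follow(example["navigation_text"])

    # Stage 2: resolve the spatial description against the image observed
    # at that position to predict where the hidden object is.
    predicted_location = locator.resolve(example["sd_text"], final_pano)
    return final_pano, predicted_location

Keeping the two stages behind separate interfaces mirrors the decomposition in the abstract, so navigation-only or description-only models can be evaluated independently.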

Related Research

05/14/2021
Towards Navigation by Reasoning over Spatial Configurations
We deal with the navigation problem where the agent follows natural lang...

07/12/2023
VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View
Incremental decision making in real-world environments is one of the mos...

11/01/2018
A Corpus for Reasoning About Natural Language Grounded in Photographs
We introduce a new dataset for joint reasoning about language and vision...

12/10/2017
Learning Interpretable Spatial Operations in a Rich 3D Blocks World
In this paper, we study the problem of mapping natural language instruct...

07/02/2023
HeGeL: A Novel Dataset for Geo-Location from Hebrew Text
The task of textual geolocation - retrieving the coordinates of a place ...

09/19/2019
RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation
Following navigation instructions in natural language requires a composi...

07/01/2020
Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation
In the vision-and-language navigation (VLN) task, an agent follows natur...

Code Repositories

touchdown
Cornell Touchdown natural language navigation and spatial reasoning dataset.

ciff
Cornell Instruction Following Framework

VLN-Transformer
Implementation of "Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation"

touchdown
Cloned from https://github.com/lil-lab/touchdown
