Talk2Nav: Long-Range Vision-and-Language Navigation in Cities

10/04/2019
by Arun Balajee Vasudevan, et al.

Autonomous driving models often treat the goal as fixed at the start of the ride. In practice, however, passengers still want to influence the route, e.g. to pick something up along the way. To keep such inputs intuitive, we provide automatic wayfinding in cities based on verbal navigation instructions and street-view images. Our first contribution is the creation of a large-scale dataset of verbal navigation instructions. To this end, we developed an interactive visual navigation environment based on Google Street View; we further designed an annotation method that highlights mined anchor landmarks and the local directions between them, helping annotators formulate typical, human references to those. The annotation task was crowdsourced on the AMT platform to construct the new Talk2Nav dataset with 10,714 routes. Our second contribution is a new learning method. Inspired by spatial cognition research on the mental conceptualization of navigational instructions, we introduce a soft attention mechanism defined over the segmented language instructions to jointly extract two partial instructions -- one for matching the next upcoming visual landmark and the other for matching the local directions to that landmark. Along similar lines, we also introduce a memory scheme to encode the local directional transitions. Our work takes advantage of advances in two lines of research: the mental formalization of verbal navigational instructions and the training of neural network agents for automatic wayfinding. Extensive experiments show that our method significantly outperforms previous navigation methods. For the demo video, dataset and code, please refer to our project page: https://www.trace.ethz.ch/publications/2019/talk2nav/index.html
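The core idea of attending over segmented instructions with two queries, one to pick out the segment describing the next landmark and one for the local directions leading to it, can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's actual architecture: the segment embeddings, the two query vectors (`q_landmark`, `q_direction`), and the dimensions are all assumptions made up for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_head_soft_attention(segments, q_landmark, q_direction):
    """Soft attention over instruction-segment embeddings with two queries.

    segments: (n_segments, d) array of segment embeddings
    q_landmark, q_direction: (d,) query vectors for the two sub-tasks
    Returns two (d,) context vectors: one weighted toward the segment
    describing the next landmark, one toward the local directions.
    """
    alpha_l = softmax(segments @ q_landmark)    # (n_segments,) weights
    alpha_d = softmax(segments @ q_direction)
    return alpha_l @ segments, alpha_d @ segments

# Toy example: 4 instruction segments, 8-dim embeddings.
rng = np.random.default_rng(0)
segs = rng.normal(size=(4, 8))
ctx_landmark, ctx_direction = two_head_soft_attention(
    segs, rng.normal(size=8), rng.normal(size=8))
print(ctx_landmark.shape, ctx_direction.shape)  # (8,) (8,)
```

In a full agent these two context vectors would be matched against visual features of the upcoming landmark and against the encoded local route, respectively; here they simply demonstrate how one attention distribution per sub-task carves two partial instructions out of the same segmented sentence.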


Related research

- Generating Landmark Navigation Instructions from Maps as a Graph-to-Text Problem (12/30/2020): Car-focused navigation services are based on turns and distances of name...
- Pharos: improving navigation instructions on smartwatches by including global landmarks (04/02/2019): Landmark-based navigation systems have proven benefits relative to tradi...
- From Route Instructions to Landmark Graphs (02/05/2020): Landmarks are central to how people navigate, but most navigation techno...
- VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View (07/12/2023): Incremental decision making in real-world environments is one of the mos...
- Learning To Follow Directions in Street View (03/01/2019): Navigating and understanding the real world remains a key challenge in m...
- Less is More: Generating Grounded Navigation Instructions from Landmarks (11/25/2021): We study the automatic generation of navigation instructions from 360-de...
- Can a Robot Trust You? A DRL-Based Approach to Trust-Driven Human-Guided Navigation (11/01/2020): Humans are known to construct cognitive maps of their everyday surroundi...
