
Topological Planning with Transformers for Vision-and-Language Navigation

by Kevin Chen, et al.

Conventional approaches to vision-and-language navigation (VLN) are trained end-to-end but struggle to perform well in freely traversable environments. Inspired by the robotics community, we propose a modular approach to VLN using topological maps. Given a natural language instruction and topological map, our approach leverages attention mechanisms to predict a navigation plan in the map. The plan is then executed with low-level actions (e.g. forward, rotate) using a robust controller. Experiments show that our method outperforms previous end-to-end approaches, generates interpretable navigation plans, and exhibits intelligent behaviors such as backtracking.
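The modular pipeline described above (topological map in, node-level plan out, low-level actions to execute it) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's method: the names (`predict_plan`, `plan_to_actions`) are hypothetical, and a shortest-path search stands in for the paper's attention-based transformer planner.

```python
import heapq

def predict_plan(topo_map, start, goal):
    """Return a node sequence from start to goal in a topological map.

    topo_map: dict mapping node -> list of (neighbor, edge_cost).
    The paper predicts this plan with a transformer attending over the
    instruction and map; here Dijkstra's algorithm is a placeholder.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in topo_map.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None  # goal unreachable in the map

def plan_to_actions(plan):
    """Expand a node-level plan into low-level actions, one rotate+forward
    pair per map edge -- a crude stand-in for the robust controller."""
    actions = []
    for _ in range(len(plan) - 1):
        actions.extend(["rotate", "forward"])
    return actions

# Toy topological map of a house (hypothetical example data).
topo_map = {
    "kitchen": [("hallway", 1.0)],
    "hallway": [("kitchen", 1.0), ("bedroom", 1.0), ("living_room", 2.0)],
    "bedroom": [("hallway", 1.0)],
    "living_room": [("hallway", 2.0)],
}
plan = predict_plan(topo_map, "kitchen", "bedroom")
actions = plan_to_actions(plan)
```

The separation mirrors the paper's modularity argument: because planning happens in the discrete map and execution is delegated to a controller, the plan itself stays interpretable and can be revised (e.g., backtracking) without retraining the low-level policy.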


Related research:

Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation

Lifelong Topological Visual Navigation

Cross-modal Map Learning for Vision and Language Navigation

Learning Composable Behavior Embeddings for Long-horizon Visual Navigation

Find a Way Forward: a Language-Guided Semantic Map Navigator

Hierarchical Robot Navigation in Novel Environments using Rough 2-D Maps

ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints