From Visual Place Recognition to Navigation: Learning Sample-Efficient Control Policies across Diverse Real World Environments

10/10/2019
by Marvin Chancán, et al.

Visual navigation tasks in real-world environments often require both self-motion and place recognition feedback. While deep reinforcement learning has shown success in solving these perception and decision-making problems end-to-end, these algorithms require large amounts of experience to learn navigation policies from high-dimensional inputs, which is generally impractical for real robots due to sample complexity. In this paper, we address these problems with two main contributions. First, we leverage place recognition and deep learning techniques, combined with goal destination feedback, to generate compact, bimodal image representations that can then be used to effectively learn control policies at kilometer scale from a small amount of experience. Second, we present an interactive and realistic framework, called CityLearn, that enables for the first time the training of navigation algorithms across city-sized, real-world environments with extreme environmental changes. CityLearn features more than 10 benchmark real-world datasets commonly used in place recognition research, comprising over 100 recorded traversals across 60 cities around the world. We evaluate our approach in two CityLearn environments, where our navigation policy is trained using a single traversal. Results show that our method can be over two orders of magnitude faster than training on raw images, and that it generalizes across extreme visual changes, including day-to-night and summer-to-winter transitions.
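The compact bimodal representation described above can be illustrated with a minimal sketch: a low-dimensional place-recognition embedding of the current camera view is concatenated with an encoding of the goal destination to form the observation fed to the control policy. The function name, embedding size, and one-hot goal encoding here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def make_observation(place_embedding, goal_index, num_places):
    """Form a compact, bimodal policy input (hypothetical sketch):
    concatenate a place-recognition embedding of the current view
    with a one-hot encoding of the goal place along the traversal."""
    goal_onehot = np.zeros(num_places, dtype=np.float32)
    goal_onehot[goal_index] = 1.0  # mark the target place
    return np.concatenate([place_embedding.astype(np.float32), goal_onehot])

# Example: a 64-D place descriptor and a goal among 100 places
# yields a 164-D observation instead of a raw high-dimensional image.
obs = make_observation(np.random.rand(64), goal_index=42, num_places=100)
```

Because the policy observes this low-dimensional vector rather than raw pixels, far less experience is needed to learn a navigation policy, which is consistent with the sample-efficiency gains the abstract reports.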


