MVP: Unified Motion and Visual Self-Supervised Learning for Large-Scale Robotic Navigation

03/02/2020
by Marvin Chancán, et al.

Autonomous navigation emerges from both motion and local visual perception in real-world environments. However, most successful robotic motion estimation methods (e.g. VO, SLAM, SfM) and vision systems (e.g. CNN, visual place recognition-VPR) are often used separately for mapping and localization tasks. Conversely, recent reinforcement learning (RL) based methods for visual navigation rely on the quality of GPS data reception, which may not be reliable when used directly as ground truth across multiple, month-spaced traversals in large environments. In this paper, we propose a novel motion and visual perception approach, dubbed MVP, that unifies these two sensor modalities for large-scale, target-driven navigation tasks. Our MVP-based method learns faster, and is more accurate and robust to both extreme environmental changes and poor GPS data than corresponding vision-only navigation methods. MVP temporally incorporates compact image representations, obtained using VPR, with optimized motion estimation data, including but not limited to those from VO or optimized radar odometry (RO), to efficiently learn self-supervised navigation policies via RL. We evaluate our method on two large real-world datasets, Oxford RobotCar and Nordland Railway, over a range of weather (e.g. overcast, night, snow, sun, rain, clouds) and seasonal (e.g. winter, spring, fall, summer) conditions using the new CityLearn framework, an interactive environment for efficiently training navigation agents. Our experimental results, on traversals of the Oxford RobotCar dataset with no GPS data, show that MVP can achieve 53% and 93% navigation success rates using VO and RO, respectively, compared to 7% for a vision-only method. We also consider the trade-off between the RL success rate and the motion estimation precision.
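To make the fusion step concrete, below is a minimal PyTorch sketch of how a compact VPR descriptor and per-frame motion estimates could be combined into a single recurrent actor-critic policy for target-driven navigation. This is not the paper's implementation: the module names, feature dimensions, GRU core, and action space are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the fusion idea behind MVP:
# combine a compact VPR image descriptor with motion estimates (VO/RO)
# into a single temporal observation for an RL navigation policy.
import torch
import torch.nn as nn

class FusedNavigationPolicy(nn.Module):
    """Hypothetical actor-critic policy over fused motion + visual inputs."""

    def __init__(self, vpr_dim=256, motion_dim=3, hidden_dim=512, num_actions=4):
        super().__init__()
        # Encode each sensor modality separately before fusing.
        self.vpr_encoder = nn.Sequential(nn.Linear(vpr_dim, hidden_dim), nn.ReLU())
        self.motion_encoder = nn.Sequential(nn.Linear(motion_dim, hidden_dim), nn.ReLU())
        # A recurrent core captures the temporal structure of a traversal.
        self.rnn = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # action logits
        self.value_head = nn.Linear(hidden_dim, 1)             # state-value estimate

    def forward(self, vpr_embedding, motion_delta, hidden=None):
        # vpr_embedding: (batch, seq, vpr_dim) compact place descriptor per frame
        # motion_delta:  (batch, seq, motion_dim), e.g. (dx, dy, dtheta) from VO/RO
        fused = torch.cat([self.vpr_encoder(vpr_embedding),
                           self.motion_encoder(motion_delta)], dim=-1)
        out, hidden = self.rnn(fused, hidden)
        return self.policy_head(out), self.value_head(out), hidden

# Dummy usage: one 10-step traversal segment with random features.
policy = FusedNavigationPolicy()
logits, value, _ = policy(torch.randn(1, 10, 256), torch.randn(1, 10, 3))
print(logits.shape, value.shape)  # torch.Size([1, 10, 4]) torch.Size([1, 10, 1])
```

Encoding each modality separately before concatenation lets the motion stream compensate when the visual stream degrades (e.g. at night or under snow), which is the robustness argument the abstract makes for unifying the two sensor modalities.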



Related research

06/16/2020 · Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning
Learning visuomotor control policies in robotic systems is a fundamental...

10/10/2019 · From Visual Place Recognition to Navigation: Learning Sample-Efficient Control Policies across Diverse Real World Environments
Visual navigation tasks in real world environments often require both se...

03/08/2022 · Tune your Place Recognition: Self-Supervised Domain Calibration via Robust SLAM
Visual place recognition techniques based on deep learning, which have i...

10/02/2022 · Unsupervised Vision and Vision-motion Calibration Strategies for PointGoal Navigation in Indoor Environment
PointGoal navigation in indoor environment is a fundamental task for per...

04/29/2023 · Modality-invariant Visual Odometry for Embodied Vision
Effectively localizing an agent in a realistic, noisy setting is crucial...

07/15/2021 · CMU-GPR Dataset: Ground Penetrating Radar Dataset for Robot Localization and Mapping
There has been exciting recent progress in using radar as a sensor for r...

06/02/2022 · Is Mapping Necessary for Realistic PointGoal Navigation?
Can an autonomous agent navigate in a new environment without building a...
