Augmenting Imitation Experience via Equivariant Representations

10/14/2021
by   Dhruv Sharma, et al.

The robustness of visual navigation policies trained through imitation often hinges on augmenting the training image-action pairs. Traditionally, this has been done by collecting data from multiple cameras, by applying standard data augmentations from computer vision, such as adding random noise to each image, or by synthesizing training images. In this paper we show that there is another practical alternative for data augmentation in visual navigation: extrapolating viewpoint embeddings and actions near those observed in the training data. Our method exploits the geometry of the visual navigation problem in 2D and 3D and relies on policies that are functions of equivariant embeddings rather than raw images. Given an image-action pair from a training navigation dataset, our neural network model predicts the latent representations of images at nearby viewpoints, using the equivariance property, and augments the dataset. We then train a policy on the augmented dataset. Our simulation results indicate that policies trained in this way exhibit reduced cross-track error and require fewer interventions than policies trained using standard augmentation methods. We also show similar results in autonomous visual navigation by a real ground robot along a path of over 500 m.
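The core idea above, augmenting a dataset by applying a known group action to latent embeddings and adjusting the paired actions accordingly, can be sketched minimally. The block-diagonal rotation of the latent vector, the corrective-steering convention (a viewpoint rotation by dtheta shifts the steering action by -dtheta), and all function names below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_embedding(z, dtheta):
    """Apply a planar rotation to an equivariant embedding.

    Assumes (hypothetically) that the latent space is organized as
    2D blocks on which a viewpoint rotation acts as a standard
    rotation matrix -- one simple way to realize equivariance.
    """
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    # Rotate each consecutive pair of latent coordinates.
    blocks = z.reshape(-1, 2) @ R.T
    return blocks.reshape(z.shape)

def augment_pair(z, steering, n_samples=4, max_dtheta=0.2):
    """Extrapolate (embedding, action) pairs at nearby viewpoints.

    For each sampled viewpoint perturbation dtheta, the embedding is
    transformed via the group action and the steering command receives
    a corrective offset -dtheta (an assumed convention).
    """
    augmented = []
    for _ in range(n_samples):
        dtheta = rng.uniform(-max_dtheta, max_dtheta)
        augmented.append((rotate_embedding(z, dtheta), steering - dtheta))
    return augmented
```

A policy would then be trained on the union of the original and augmented pairs; because the transform acts on embeddings rather than pixels, no image synthesis is required.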

Related research

- 07/11/2018 — Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal
- 05/19/2021 — VOILA: Visual-Observation-Only Imitation Learning for Autonomous Navigation
- 05/15/2018 — Visual Representations for Semantic Target Driven Navigation
- 10/26/2020 — On Embodied Visual Navigation in Real Environments Through Habitat
- 10/07/2022 — GNM: A General Navigation Model to Drive Any Robot
- 09/30/2020 — Towards Target-Driven Visual Navigation in Indoor Scenes via Generative Imitation Learning
- 11/18/2021 — Complex Terrain Navigation via Model Error Prediction
