Zero-shot Imitation Learning from Demonstrations for Legged Robot Visual Navigation

09/27/2019
by Xinlei Pan, et al.

Imitation learning is a popular approach for training effective visual navigation policies. However, for legged robots, collecting expert demonstrations is challenging: these systems are hard to control, move slowly, and cannot operate continuously for long periods of time. In this work, we propose a zero-shot imitation learning framework for training a visual navigation policy on a legged robot from human demonstrations alone (captured from a third-person perspective), allowing for high-quality navigation and cost-effective data collection. Imitation learning from third-person demonstrations, though, raises unique challenges. First, these human demonstrations are captured from different camera perspectives, which we address via a feature disentanglement network (FDN) that extracts perspective-agnostic state features. Second, because the available actions differ between the human and the robot, we reconstruct the missing action labels either by building an inverse model of the robot's dynamics in the feature space and applying it to the demonstrations, or by developing an efficient Graphical User Interface (GUI) for labeling human demonstrations. To train a visual navigation policy, we use a model-based imitation learning approach with the perspective-agnostic FDN and the action-labeled demonstrations. We show that our framework can learn an effective policy for a legged robot, Laikago, from expert demonstrations in both simulated and real-world environments. Our approach is zero-shot in that the robot never attempts a given navigation path in the testing environment before the testing phase. We further validate our framework through an ablation study and comparisons with baselines.
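The pipeline described above combines three pieces: an FDN that maps images to perspective-agnostic state features, an inverse dynamics model that recovers action labels for the action-free human demonstrations, and a policy cloned from the relabeled data. The PyTorch sketch below illustrates how these pieces could fit together; the architectures, dimensions, and the discrete action space are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class FeatureDisentanglementNetwork(nn.Module):
    # Encodes an image into a state code (intended to be perspective-
    # agnostic) and a perspective code; only the state code is used
    # downstream. Architecture and dimensions are assumptions.
    def __init__(self, state_dim=64, perspective_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, state_dim + perspective_dim),
        )
        self.state_dim = state_dim

    def forward(self, image):
        code = self.encoder(image)
        return code[:, :self.state_dim], code[:, self.state_dim:]

class InverseModel(nn.Module):
    # Predicts the action connecting two consecutive state features.
    # Trained on the robot's own experience, then applied to the human
    # demonstrations to reconstruct the missing action labels.
    def __init__(self, state_dim=64, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, s_t, s_next):
        return self.net(torch.cat([s_t, s_next], dim=-1))

fdn = FeatureDisentanglementNetwork()
inverse_model = InverseModel()
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))

demo_frames = torch.randn(10, 3, 64, 64)    # stand-in for one human demo
with torch.no_grad():
    states, _ = fdn(demo_frames)            # perspective-agnostic features
    pseudo_actions = inverse_model(states[:-1], states[1:]).argmax(dim=-1)

# Behavioral cloning on the relabeled demonstration.
logits = policy(states[:-1])
loss = nn.functional.cross_entropy(logits, pseudo_actions)
loss.backward()

In the paper, the FDN is trained so that the state code is invariant to the camera viewpoint; that training objective is omitted here for brevity.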

