Imitation from Observation With Bootstrapped Contrastive Learning

02/13/2023
by Medric Sonwa, et al.

Imitation from observation (IfO) is a learning paradigm in which autonomous agents are trained in a Markov Decision Process (MDP) by observing expert demonstrations without access to the expert's actions. These demonstrations may be sequences of environment states or raw visual observations of the environment. Recent work in IfO has focused on observations of low-dimensional environment states; however, access to such highly specific observations is unlikely in practice. In this paper, we adopt a challenging but more realistic problem formulation: learning control policies that operate on a learned latent space, with access only to visual demonstrations of an expert completing a task. We present BootIfOL, an IfO algorithm that learns a reward function by taking an agent trajectory and comparing it to expert demonstrations, providing rewards based on the agent's similarity to expert behavior and the task's implicit goal. We treat this reward function as a distance metric between trajectories and learn it via contrastive learning: the contrastive objective represents expert trajectories closely while distancing them from non-expert trajectories. The set of non-expert trajectories used in contrastive learning is made progressively more complex by bootstrapping from roll-outs of the agent trained through RL with the current reward function. We evaluate our approach on a variety of control tasks, showing that effective policies can be trained from a limited number of demonstration trajectories, greatly improving on prior approaches that use raw observations.
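A minimal sketch of the core idea may help: treat the reward as a similarity between a learned embedding of the agent's trajectory and expert embeddings, and train that embedding with an InfoNCE-style contrastive objective that pulls expert trajectories together and pushes agent roll-outs away. The linear frame encoder, mean-pooled trajectory embedding, and loss shape below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def encode_trajectory(frames, W):
    """Embed a trajectory (T x D observations) by averaging linear
    per-frame features and L2-normalizing. Illustrative stand-in
    for the paper's learned trajectory encoder."""
    z = (frames @ W).mean(axis=0)          # temporal average -> (K,)
    return z / (np.linalg.norm(z) + 1e-8)

def reward(agent_traj, expert_traj, W):
    """Similarity-based reward: cosine similarity between the agent
    trajectory embedding and an expert trajectory embedding."""
    return float(encode_trajectory(agent_traj, W)
                 @ encode_trajectory(expert_traj, W))

def info_nce_loss(expert_embs, nonexpert_embs, tau=0.1):
    """InfoNCE-style contrastive objective: each expert embedding
    should score higher against the mean expert embedding than any
    non-expert (agent roll-out) embedding does."""
    anchor = expert_embs.mean(axis=0)
    anchor /= np.linalg.norm(anchor) + 1e-8
    pos = expert_embs @ anchor / tau        # positive logits
    neg = nonexpert_embs @ anchor / tau     # negative logits
    losses = []
    for p in pos:
        logits = np.concatenate([[p], neg])
        # -log softmax of the positive among all candidates
        losses.append(-(p - np.log(np.exp(logits).sum())))
    return float(np.mean(losses))
```

In the bootstrapped loop the paper describes, `nonexpert_embs` would come from roll-outs of the current RL policy, so the negative set grows harder as the agent improves.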


