Data augmentation for efficient learning from parametric experts

05/23/2022
by   Alexandre Galashov, et al.

We present a simple yet powerful data-augmentation technique that enables data-efficient learning from parametric experts for reinforcement and imitation learning. We focus on what we call the policy cloning setting, in which we use online or offline queries of an expert policy to inform the behavior of a student policy. This setting arises naturally in a number of problems, for instance as a variant of behavior cloning, or as a component of other algorithms such as DAgger, policy distillation, or KL-regularized RL. Our approach, augmented policy cloning (APC), uses synthetic states to induce feedback-sensitivity in a region around sampled trajectories, thus dramatically reducing the environment interactions required for successful cloning of the expert. We achieve highly data-efficient transfer of behavior from an expert to a student policy for high-degree-of-freedom control problems. We demonstrate the benefit of our method in the context of several existing and widely used algorithms that include policy cloning as a constituent part. Moreover, we highlight the benefits of our approach in two practically relevant settings: (a) expert compression, i.e. transfer to a student with fewer parameters; and (b) transfer from privileged experts, i.e. where the expert has a different observation space than the student, usually including access to privileged information.
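The core idea described above can be illustrated with a minimal sketch: sample synthetic states in a small neighbourhood of each visited trajectory state, label them by querying the expert, and fit the student to the augmented dataset. All names here (the linear expert, `augment`, the noise scale `sigma`) are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "expert" policy: action = W_expert @ state.
state_dim, action_dim = 4, 2
W_expert = rng.normal(size=(action_dim, state_dim))
expert = lambda s: W_expert @ s

# States visited along sampled trajectories (stand-in for env rollouts).
trajectory_states = rng.normal(size=(64, state_dim))

def augment(states, expert_fn, n_copies=8, sigma=0.1):
    """Create synthetic states near each visited state and label them by
    querying the expert, so the student sees how expert actions vary in
    a neighbourhood of the trajectory (feedback-sensitivity)."""
    noise = rng.normal(scale=sigma, size=(n_copies,) + states.shape)
    synthetic = (states[None] + noise).reshape(-1, states.shape[1])
    actions = np.array([expert_fn(s) for s in synthetic])
    return synthetic, actions

X, Y = augment(trajectory_states, expert)

# Fit a linear student by least squares on the augmented dataset.
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)
W_student = W_student.T
print(np.max(np.abs(W_student - W_expert)))  # small cloning error
```

With a linear expert and noiseless labels the student recovers the expert exactly; the point of the augmentation is that the synthetic neighbourhood states supply local supervision that plain behavior cloning on the 64 visited states alone would lack, which is what reduces the environment interactions needed.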


Related research

- 05/30/2022: TaSIL: Taylor Series Imitation Learning
  We propose Taylor Series Imitation Learning (TaSIL), a simple augmentati...

- 01/21/2020: Loss-annealed GAIL for sample efficient and stable Imitation Learning
  Imitation learning is the problem of learning a policy from an expert po...

- 03/25/2021: Adversarial Imitation Learning with Trajectorial Augmentation and Correction
  Deep Imitation Learning requires a large number of expert demonstrations...

- 02/28/2022: LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation
  We consider the problem of imitation from observation (IfO), in which th...

- 11/08/2021: Off-policy Imitation Learning from Visual Inputs
  Recently, various successful applications utilizing expert states in imi...

- 03/09/2023: An Improved Data Augmentation Scheme for Model Predictive Control Policy Approximation
  This paper considers the problem of data generation for MPC policy appro...

- 06/18/2019: RadGrad: Active learning with loss gradients
  Solving sequential decision prediction problems, including those in imit...
