
Bridging the Imitation Gap by Adaptive Insubordination

by Luca Weihs et al.

Why do agents often obtain better reinforcement learning policies when imitating a worse expert? We show that privileged information used by the expert is marginalized in the learned agent policy, resulting in an "imitation gap." Prior work bridges this gap via a progression from imitation learning to reinforcement learning. While often successful, gradual progression fails for tasks that require frequent switches between exploration and memorization skills. To better address these tasks and alleviate the imitation gap, we propose 'Adaptive Insubordination' (ADVISOR), which dynamically reweights imitation and reward-based reinforcement learning losses during training, enabling switching between imitation and exploration. On a suite of challenging tasks, we show that ADVISOR outperforms pure imitation, pure reinforcement learning, as well as sequential combinations of these approaches.
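The core idea of the dynamic reweighting can be sketched as a per-sample convex combination of an imitation loss and a policy-gradient loss. The sketch below is a minimal illustration, not the paper's implementation: the function name `advisor_style_loss` is hypothetical, and the per-sample `weight` is assumed to be supplied externally (ADVISOR derives it from how well an auxiliary imitation-learned policy matches the expert on each state, a detail not shown here).

```python
import torch
import torch.nn.functional as F


def advisor_style_loss(student_logits, expert_actions, advantages, log_probs, weight):
    """Blend imitation and RL losses with a per-sample weight in [0, 1].

    weight -> 1: rely on the expert (pure imitation on that sample).
    weight -> 0: rely on reward (pure policy-gradient on that sample).
    The weighting scheme itself is a stand-in for ADVISOR's adaptive rule.
    """
    # Per-sample imitation loss: cross-entropy against the expert's action.
    imitation_loss = F.cross_entropy(student_logits, expert_actions, reduction="none")
    # Per-sample REINFORCE-style policy-gradient loss.
    rl_loss = -(advantages * log_probs)
    # Convex combination, averaged over the batch.
    return (weight * imitation_loss + (1.0 - weight) * rl_loss).mean()
```

With `weight` fixed at 1 this reduces to standard behavioral cloning, and at 0 to a plain policy-gradient objective; ADVISOR's contribution is choosing the weight adaptively per state during training.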


Error Bounds of Imitating Policies and Environments

Imitation learning trains a policy by mimicking expert demonstrations. V...

Efficient Exploration with Self-Imitation Learning via Trajectory-Conditioned Policy

This paper proposes a method for learning a trajectory-conditioned polic...

Hybrid Reinforcement Learning with Expert State Sequences

Existing imitation learning approaches often require that the complete d...

Reinforcement and Imitation Learning for Diverse Visuomotor Skills

We propose a model-free deep reinforcement learning method that leverage...

On Covariate Shift of Latent Confounders in Imitation and Reinforcement Learning

We consider the problem of using expert data with unobserved confounders...

Accelerating Training in Pommerman with Imitation and Reinforcement Learning

The Pommerman simulation was recently developed to mimic the classic Jap...

Accelerating Reinforcement Learning through Implicit Imitation

Imitation can be viewed as a means of enhancing learning in multiagent e...