Energy-Based Imitation Learning
We tackle a common scenario in imitation learning (IL), where agents aim to recover the optimal policy from expert demonstrations without further access to the expert or to environment reward signals. The classical inverse reinforcement learning (IRL) solution involves bi-level optimization and incurs high computational cost. Recent generative adversarial methods formulate the IL problem as occupancy measure matching but suffer from notorious training instability and mode-dropping problems. Inspired by recent progress in energy-based models (EBMs), in this paper we propose a novel IL framework named Energy-Based Imitation Learning (EBIL), which solves the IL problem by estimating the expert's energy as a surrogate reward function through score matching. EBIL combines the ideas of EBMs and occupancy measure matching, and enjoys (1) high model flexibility in estimating the expert policy distribution and (2) efficient computation that avoids the alternating training of previous methods. Although EBIL is motivated by matching the agent's policy to the expert's, we find a non-trivial connection between EBIL and the classic Maximum-Entropy IRL (MaxEnt IRL) approach, and further show that EBIL can be viewed as a simpler and more efficient solution to MaxEnt IRL. Extensive experiments show that EBIL consistently achieves performance comparable to state-of-the-art methods at lower computational cost.
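For concreteness, a minimal sketch of the core relation in notation that is ours rather than the abstract's: let $\rho_E$ denote the expert's occupancy measure and $E_\theta$ a parametric energy fitted to expert demonstrations by score matching; the learned energy then serves as a fixed surrogate reward for a standard entropy-regularized RL step.

\[
\rho_E(s,a) \;\approx\; \frac{\exp\!\bigl(-E_\theta(s,a)\bigr)}{Z_\theta},
\qquad
\hat r(s,a) \;=\; -E_\theta(s,a),
\qquad
\pi^\star \;=\; \arg\max_\pi \; \mathbb{E}_{(s,a)\sim\rho_\pi}\!\bigl[\hat r(s,a)\bigr] + \mathcal{H}(\pi).
\]

Under this reading, $E_\theta$ is trained once and then held fixed, so imitation reduces to a single RL run with the surrogate reward rather than the alternating reward/policy updates of adversarial or bi-level IRL methods.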