LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning

03/01/2023
by Firas Al-Hafez et al.

Recent methods for imitation learning directly learn a Q-function using an implicit reward formulation rather than an explicit reward function. However, these methods generally require implicit reward regularization for stability and often mishandle absorbing states. Previous works show that a squared-norm regularizer on the implicit reward function is effective, but do not provide a theoretical analysis of the resulting properties of the algorithms. In this work, we show that applying this regularizer under a mixture distribution of the policy and the expert provides a particularly illuminating perspective: the original objective can be understood as squared Bellman error minimization, and the corresponding optimization problem minimizes a bounded χ²-divergence between the expert and the mixture distribution. This perspective allows us to address instabilities and to treat absorbing states properly. We show that our method, Least Squares Inverse Q-Learning (LS-IQ), outperforms state-of-the-art algorithms, particularly in environments with absorbing states. Finally, we propose to use an inverse dynamics model to learn from observations only. Using this approach, we retain performance in settings where no expert actions are available.
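The core idea can be illustrated with a toy objective: recover the implicit reward r(s, a) = Q(s, a) − γV(s′) from the Q-function, maximize it on expert transitions, and penalize its squared norm under a 50/50 mixture of expert and policy samples. The following NumPy sketch is illustrative only, not the paper's actual algorithm; the function names, the array-based Q-values, and the `alpha` weighting are assumptions for demonstration.

```python
import numpy as np

def implicit_reward(q, v_next, gamma=0.99):
    # Implicit reward recovered from the Q-function:
    # r(s, a) = Q(s, a) - gamma * V(s')
    return q - gamma * v_next

def lsiq_style_loss(q_expert, v_next_expert, q_policy, v_next_policy,
                    gamma=0.99, alpha=0.5):
    """Toy objective: maximize the implicit reward on expert transitions
    while penalizing the squared implicit reward under a 50/50 mixture of
    expert and policy samples (the squared-norm regularizer from the text)."""
    r_exp = implicit_reward(q_expert, v_next_expert, gamma)
    r_pol = implicit_reward(q_policy, v_next_policy, gamma)
    r_mix = np.concatenate([r_exp, r_pol])  # samples from the mixture
    # Negated because we return a loss to be minimized.
    return -(r_exp.mean() - alpha * np.mean(r_mix ** 2))
```

The quadratic penalty on the mixture is what links the objective to a squared Bellman error and keeps the divergence between expert and mixture bounded; without it, the implicit reward on expert samples could be driven arbitrarily high.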

Related research

04/06/2019 · Reinforced Imitation in Heterogeneous Action Space
Imitation learning is an effective alternative approach to learn a polic...

11/09/2020 · f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Imitation learning is well-suited for robotic tasks where it is difficul...

06/19/2019 · Wasserstein Adversarial Imitation Learning
Imitation Learning describes the problem of recovering an expert policy ...

06/01/2022 · Transferable Reward Learning by Dynamics-Agnostic Discriminator Ensemble
Inverse reinforcement learning (IRL) recovers the underlying reward func...

02/09/2023 · CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning
This work aims to tackle a major challenge in offline Inverse Reinforcem...

09/02/2022 · TarGF: Learning Target Gradient Field for Object Rearrangement
Object Rearrangement is to move objects from an initial state to a goal ...

11/06/2019 · A Divergence Minimization Perspective on Imitation Learning Methods
In many settings, it is desirable to learn decision-making and control p...
