
Robust Imitation via Mirror Descent Inverse Reinforcement Learning

by   Dong-Sig Han, et al.

Adversarial imitation learning has recently emerged as a scalable method of reward acquisition for inverse reinforcement learning (IRL). However, the estimated reward signals often become uncertain and fail to train a reliable statistical model, since existing methods tend to solve hard optimization problems directly. Inspired by the first-order optimization method known as mirror descent, this paper proposes predicting a sequence of reward functions, which are iterative solutions to a constrained convex problem. IRL solutions derived by mirror descent are tolerant to the uncertainty incurred by target density estimation, since the amount of reward learning is regulated with respect to local geometric constraints. We prove that the proposed mirror descent update rule ensures robust minimization of a Bregman divergence, with a rigorous regret bound of 𝒪(1/T) for step sizes {η_t}_{t=1}^T. Our IRL method was applied on top of an adversarial framework, and it outperformed existing adversarial methods in an extensive suite of benchmarks.
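To illustrate the kind of update the paper builds on, here is a minimal sketch of entropic mirror descent on the probability simplex. The objective, step size, and function names are illustrative assumptions, not the paper's actual reward-learning objective; the point is only the multiplicative, Bregman-regularized update that keeps each iterate close to the previous one in KL geometry.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, steps, eta):
    """Entropic mirror descent: x_{t+1} ∝ x_t * exp(-eta * grad(x_t)).

    With the negative-entropy mirror map, the Bregman divergence is the
    KL divergence, so each step is a small, geometry-aware move rather
    than an unconstrained jump. (Illustrative sketch, not the paper's code.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = x * np.exp(-eta * grad(x))  # multiplicative update
        x = g / g.sum()                 # normalize back onto the simplex
    return x

# Example: minimize the linear objective <c, x> over the simplex.
# The minimizer concentrates all mass on the smallest entry of c.
c = np.array([0.9, 0.1, 0.5])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3, steps=200, eta=0.5)
```

In the paper's setting the iterates are reward functions rather than simplex points, but the same principle applies: each solution is constrained to stay near its predecessor, which is what yields the 𝒪(1/T) regret bound and the tolerance to noisy density estimates.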
