Inverse Reinforcement Learning in the Continuous Setting with Formal Guarantees

02/16/2021
by Gregory Dexter, et al.

Inverse Reinforcement Learning (IRL) is the problem of finding a reward function that explains observed expert behavior. IRL is useful for automated control in situations where the reward function is difficult to specify manually, a difficulty that impedes direct reinforcement learning. We provide a new IRL algorithm for the continuous-state setting with unknown transition dynamics, modeling the system with a basis of orthonormal functions. We prove the algorithm's correctness and give formal guarantees on its sample and time complexity.
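
The abstract names the modeling choice but not the construction. The sketch below is a hypothetical illustration of that choice only, not the paper's algorithm: it represents a scalar reward over a one-dimensional continuous state space in an orthonormal Legendre basis and picks weights with a simple feature-matching heuristic. The basis, the objective, and every function name here are assumptions made for illustration.

```python
# A minimal sketch, assuming a 1-D state space on [-1, 1], a Legendre basis,
# and a feature-matching heuristic. This is NOT the paper's algorithm; it only
# illustrates representing an unknown reward in an orthonormal function basis.
import numpy as np
from numpy.polynomial import legendre


def legendre_features(states, degree):
    """Evaluate orthonormal Legendre polynomials P_0..P_degree at each state.

    states: array of shape (n,) with values in [-1, 1].
    Returns an (n, degree + 1) feature matrix. The factor sqrt((2k + 1) / 2)
    normalizes each P_k to unit norm in L2([-1, 1]).
    """
    feats = np.empty((len(states), degree + 1))
    for k in range(degree + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0  # select the k-th Legendre polynomial
        feats[:, k] = np.sqrt((2 * k + 1) / 2) * legendre.legval(states, coeffs)
    return feats


def fit_reward_weights(expert_states, baseline_states, degree=5, ridge=1e-3):
    """Fit w so that r(s) = w @ phi(s) scores expert states above baseline.

    A crude stand-in for a principled estimator: point w along the gap
    between expert and baseline mean feature vectors, with a small ridge
    term to keep the normalization stable.
    """
    phi_e = legendre_features(expert_states, degree).mean(axis=0)
    phi_b = legendre_features(baseline_states, degree).mean(axis=0)
    gap = phi_e - phi_b
    return gap / (np.linalg.norm(gap) + ridge)


def reward(states, w, degree=5):
    """Evaluate the learned reward r(s) = w @ phi(s) at the given states."""
    return legendre_features(states, degree) @ w


# Toy usage: expert states cluster near 0.5; the baseline covers [-1, 1].
rng = np.random.default_rng(0)
expert = np.clip(rng.normal(0.5, 0.1, size=500), -1, 1)
baseline = rng.uniform(-1, 1, size=500)
w = fit_reward_weights(expert, baseline)
print(reward(np.array([-0.5, 0.0, 0.5]), w))  # reward should peak near 0.5
```

In the paper itself, the choice of basis, the estimator, and the treatment of unknown transition dynamics are what carry the formal guarantees; the sketch above conveys only the representational idea.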

Related research

On the Correctness and Sample Complexity of Inverse Reinforcement Learning (06/02/2019)
Human Apprenticeship Learning via Kernel-based Inverse Reinforcement Learning (02/25/2020)
Inverse reinforcement learning for video games (10/24/2018)
A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress (06/18/2018)
Towards Resolving Unidentifiability in Inverse Reinforcement Learning (01/25/2016)
Plug and Play, Model-Based Reinforcement Learning (08/20/2021)
Exploring compact reinforcement-learning representations with linear regression (05/09/2012)
