
Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods

06/20/2012
by Gergely Neu, et al.

In this paper we propose a novel gradient algorithm to learn a policy from an expert's observed behavior, assuming that the expert behaves optimally with respect to some unknown reward function of a Markovian Decision Problem. The algorithm aims to find a reward function such that the resulting optimal policy matches the expert's observed behavior well. The main difficulty is that the mapping from the parameters to policies is both nonsmooth and highly redundant. Resorting to subdifferentials solves the first difficulty, while the second is overcome by computing natural gradients. We tested the proposed method in two artificial domains and found it to be more reliable and efficient than some previous methods.
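The core idea of the abstract — parameterize a reward, map it to an (approximately) optimal policy, and adjust the reward parameters so that policy matches the expert — can be sketched in a few lines. The sketch below is illustrative only: the MDP, features, and numbers are invented, a Boltzmann (soft value iteration) policy is used as a smooth surrogate for the nonsmooth optimal-policy map the paper handles via subdifferentials, and plain finite-difference descent stands in for the paper's natural-gradient updates.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP (all names and numbers are illustrative).
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
# P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
phi = np.eye(n_states)   # one feature per state; reward r(s) = theta @ phi[s]
gamma = 0.9

def soft_policy(theta, n_iter=200, temp=1.0):
    """Boltzmann policy from soft value iteration: a smooth surrogate
    for the nonsmooth reward-to-optimal-policy mapping."""
    r = phi @ theta
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = r[:, None] + gamma * (P @ V)          # Q[s, a]
        V = temp * np.log(np.exp(Q / temp).sum(axis=1))
    Q = r[:, None] + gamma * (P @ V)
    logits = (Q - Q.max(axis=1, keepdims=True)) / temp
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

# "Expert" behavior generated from a hidden reward; we try to recover a
# reward whose induced policy matches it.
theta_true = np.array([1.0, -1.0, 0.5])
pi_expert = soft_policy(theta_true)

def loss(theta):
    """Cross-entropy between the expert policy and the induced policy."""
    pi = soft_policy(theta)
    return -np.sum(pi_expert * np.log(pi + 1e-12))

# Finite-difference gradient descent on the reward parameters (a crude
# stand-in for the subdifferential / natural-gradient machinery).
theta, lr, eps = np.zeros(3), 0.2, 1e-5
for _ in range(300):
    g = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
    theta -= lr * g
```

After the loop, the policy induced by the learned `theta` matches the expert's policy much more closely than the one induced by the initial zero reward, which is exactly the matching criterion the abstract describes.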


Related research

07/15/2020 · Inverse Reinforcement Learning from a Gradient-based Learner
Inverse Reinforcement Learning addresses the problem of inferring an exp...

06/18/2012 · Continuous Inverse Optimal Control with Locally Optimal Examples
Inverse optimal control, also known as inverse reinforcement learning, i...

11/09/2020 · f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Imitation learning is well-suited for robotic tasks where it is difficul...

11/28/2017 · Hierarchical Policy Search via Return-Weighted Density Estimation
Learning an optimal policy from a multi-modal reward function is a chall...

06/01/2018 · Learning convex bounds for linear quadratic control policy synthesis
Learning to make decisions from observed data in dynamic environments re...

05/21/2018 · Learning Safe Policies with Expert Guidance
We propose a framework for ensuring safe behavior of a reinforcement lea...

01/30/2018 · Learning to Emulate an Expert Projective Cone Scheduler
Projective cone scheduling defines a large class of rate-stabilizing pol...