Continuous Inverse Optimal Control with Locally Optimal Examples

06/18/2012
by Sergey Levine, et al.

Inverse optimal control, also known as inverse reinforcement learning, is the problem of recovering an unknown reward function in a Markov decision process from expert demonstrations of the optimal policy. We introduce a probabilistic inverse optimal control algorithm that scales gracefully with task dimensionality, and is suitable for large, continuous domains where even computing a full policy is impractical. By using a local approximation of the reward function, our method can also drop the assumption that the demonstrations are globally optimal, requiring only local optimality. This allows it to learn from examples that are unsuitable for prior methods.


