
On the Correctness and Sample Complexity of Inverse Reinforcement Learning
Inverse reinforcement learning (IRL) is the problem of finding a reward ...

Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning
In the field of reinforcement learning there has been recent progress to...

Active Task-Inference-Guided Deep Inverse Reinforcement Learning
In inverse reinforcement learning (IRL), given a Markov decision process...

Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization
Reinforcement learning can acquire complex behaviors from high-level spe...

Deception in Optimal Control
In this paper, we consider an adversarial scenario where one agent seeks...

Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods
In this paper we propose a novel gradient algorithm to learn a policy fr...

CWAE-IRL: Formulating a supervised approach to Inverse Reinforcement Learning problem
Inverse reinforcement learning (IRL) is used to infer the reward functio...
Continuous Inverse Optimal Control with Locally Optimal Examples
Inverse optimal control, also known as inverse reinforcement learning, is the problem of recovering an unknown reward function in a Markov decision process from expert demonstrations of the optimal policy. We introduce a probabilistic inverse optimal control algorithm that scales gracefully with task dimensionality, and is suitable for large, continuous domains where even computing a full policy is impractical. By using a local approximation of the reward function, our method can also drop the assumption that the demonstrations are globally optimal, requiring only local optimality. This allows it to learn from examples that are unsuitable for prior methods.
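The local-optimality idea can be made concrete with a Laplace (second-order Taylor) approximation: expanding the reward around the demonstrated actions yields a Gaussian approximation to the maximum-entropy likelihood of the demonstration, which depends only on the reward's gradient and Hessian at the demonstration, not on a globally optimal policy. The sketch below is illustrative rather than the paper's implementation; the quadratic reward, the target `u_star`, and the slightly suboptimal demonstration are hypothetical choices made for the example.

```python
import numpy as np

def laplace_log_likelihood(u, reward_grad, reward_hess):
    """Laplace (second-order) approximation to the MaxEnt IOC
    log-likelihood of a demonstrated action vector u:
        log P(u) ~= 1/2 g^T H^{-1} g + 1/2 log det(-H) - d/2 log(2 pi),
    where g and H are the gradient and Hessian of the reward at u.
    Valid only when H is negative definite, i.e. u lies near a
    local optimum of the reward -- global optimality is not needed."""
    g = reward_grad(u)
    H = reward_hess(u)
    d = u.size
    sign, logdet = np.linalg.slogdet(-H)
    if sign <= 0:
        return -np.inf  # Hessian not negative definite: approximation invalid
    return 0.5 * g @ np.linalg.solve(H, g) + 0.5 * logdet - 0.5 * d * np.log(2 * np.pi)

# Hypothetical quadratic reward r(u) = -theta * ||u - u_star||^2.
u_star = np.array([1.0, -0.5])
u_demo = u_star + np.array([0.1, -0.05])  # demonstration is only approximately optimal

def log_likelihoods(thetas):
    """Approximate log-likelihood of the demonstration for each reward weight."""
    lls = []
    for theta in thetas:
        grad = lambda u: -2.0 * theta * (u - u_star)
        hess = lambda u: -2.0 * theta * np.eye(u.size)
        lls.append(laplace_log_likelihood(u_demo, grad, hess))
    return np.array(lls)

thetas = np.linspace(0.1, 200.0, 2000)
lls = log_likelihoods(thetas)
best = thetas[np.argmax(lls)]
```

Because the demonstration is slightly off the optimum, the gradient penalty term balances the log-determinant term and the approximate likelihood peaks at a finite reward weight (analytically at theta = 1 / ||u_demo - u_star||^2 = 80 here), whereas an exactly optimal demonstration would push the weight to infinity under this quadratic model.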