Apprenticeship Learning using Inverse Reinforcement Learning and Gradient Methods
In this paper we propose a novel gradient algorithm for learning a policy from an expert's observed behavior, assuming that the expert behaves optimally with respect to some unknown reward function of a Markovian Decision Problem. The algorithm aims to find a reward function such that the resulting optimal policy closely matches the expert's observed behavior. The main difficulty is that the mapping from reward parameters to policies is both nonsmooth and highly redundant. Resorting to subdifferentials resolves the first difficulty, while the second is overcome by computing natural gradients. We tested the proposed method in two artificial domains and found it to be more reliable and efficient than some previous methods.
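To make the gradient idea concrete, here is a minimal Python sketch of matching the expert's discounted feature expectations on a toy chain MDP. It assumes one-hot state features, a linear reward r(s) = theta . phi(s), and a plain subgradient step rather than the natural gradient used in the paper; the chain dynamics, the hidden expert reward, the step size, and the helper names greedy_policy and feature_expectations are illustrative assumptions, not details from the paper.

import numpy as np

# Toy chain MDP (hypothetical, for illustration): 5 states, actions
# 0 = left / 1 = right, deterministic transitions, discount gamma.
n_states, n_actions, gamma = 5, 2, 0.95
phi = np.eye(n_states)                        # one-hot state features phi(s)

P = np.zeros((n_actions, n_states, n_states))
for s in range(n_states):
    P[0, s, max(s - 1, 0)] = 1.0              # "left" moves toward state 0
    P[1, s, min(s + 1, n_states - 1)] = 1.0   # "right" moves toward state 4

def greedy_policy(r, iters=500):
    """Value iteration for a per-state reward vector r; returns the greedy policy."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = r[:, None] + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def feature_expectations(pi):
    """Discounted feature expectations mu_pi = E[sum_t gamma^t phi(s_t)] under policy pi."""
    P_pi = np.array([P[pi[s], s] for s in range(n_states)])   # P_pi[s, s']
    d0 = np.full(n_states, 1.0 / n_states)                    # uniform start distribution
    occupancy = np.linalg.solve(np.eye(n_states) - gamma * P_pi.T, d0)
    return phi.T @ occupancy

# The "expert" acts optimally for a hidden reward that pays 1 in the last state;
# the learner only observes its behavior via the expert's feature expectations.
expert_pi = greedy_policy(np.array([0.0, 0.0, 0.0, 0.0, 1.0]))
mu_expert = feature_expectations(expert_pi)

theta = np.zeros(n_states)                    # reward parameters: r(s) = theta . phi(s)
for step in range(100):
    pi = greedy_policy(phi @ theta)           # optimal policy for the current reward guess
    mu = feature_expectations(pi)
    # mu_expert - mu is a subgradient of theta^T mu_expert - max_pi theta^T mu_pi;
    # the objective is nonsmooth because the maximizing policy can switch with theta.
    theta += 0.1 * (mu_expert - mu)

print("recovered reward weights:", np.round(theta, 2))
print("learned policy matches expert:", bool(np.all(greedy_policy(phi @ theta) == expert_pi)))

The plain step above ignores the redundancy of the parameter-to-policy mapping; in the paper that issue is addressed by computing natural gradients, which rescale the update by the geometry of the policy parameterization.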