
Related research:

Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning
Impossibility of deducing preferences and rationality from human policy
A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress
Reinforcement Learning with a Corrupted Reward Channel
CWAE-IRL: Formulating a supervised approach to Inverse Reinforcement Learning problem
Identifying Reward Functions using Anchor Actions
Langevin Dynamics for Inverse Reinforcement Learning of Stochastic Gradient Algorithms

Inverse Reinforcement Learning with Simultaneous Estimation of Rewards and Dynamics
Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent's behavior originates in its policy, and MDP policies depend on both the stochastic system dynamics and the reward function, the solution of the inverse problem is significantly influenced by both. Current IRL approaches assume that, if the transition model is unknown, either additional samples from the system's dynamics are accessible or the observed behavior provides enough samples of the system's dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system's dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.
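The idea of jointly estimating rewards and dynamics from demonstrations can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact formulation: it assumes a tabular MDP, a soft (maximum-entropy) policy model, and a joint log-likelihood in which each observed transition (s, a, s') contributes both a policy term log π(a|s) and a dynamics term log T(s'|s, a). All sizes, names, and the finite-difference optimizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9  # small tabular MDP, sizes chosen for illustration

# Hypothetical ground-truth MDP, used only to generate demonstrations.
true_r = np.array([0.0, 0.0, 1.0])
true_T = rng.dirichlet(np.ones(S), size=(S, A))   # shape (S, A, S)

def soft_q(r, T, iters=60):
    # Soft value iteration: Q(s,a) = r(s) + gamma * E_{s'}[logsumexp_a Q(s',a)]
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = np.log(np.exp(Q).sum(axis=1))         # soft state values, shape (S,)
        Q = r[:, None] + gamma * T @ V            # (S, A, S) @ (S,) -> (S, A)
    return Q

def policy(Q):
    # Boltzmann policy over actions.
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Roll out one demonstration trajectory from the true MDP and count
# observed (s, a, s') transitions.
pi_true, s = policy(soft_q(true_r, true_T)), 0
counts = np.zeros((S, A, S))
for _ in range(300):
    a = rng.choice(A, p=pi_true[s])
    s2 = rng.choice(S, p=true_T[s, a])
    counts[s, a, s2] += 1
    s = s2

def unpack(theta):
    # Parameters: reward per state plus unnormalized transition logits.
    r = theta[:S]
    T = np.exp(theta[S:].reshape(S, A, S))
    return r, T / T.sum(axis=2, keepdims=True)

def nll(theta):
    # Negative joint log-likelihood: policy term + dynamics term.
    r, T = unpack(theta)
    Q = soft_q(r, T)
    logpi = Q - np.log(np.exp(Q).sum(axis=1, keepdims=True))
    return -(counts.sum(axis=2) * logpi).sum() - (counts * np.log(T)).sum()

def fd_grad(f, x, eps=1e-5):
    # Finite-difference gradient; a real implementation would use autodiff.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Gradient descent on the combined objective estimates r and T together.
theta = np.zeros(S + S * A * S)
losses = [nll(theta)]
for _ in range(100):
    theta -= 0.01 * fd_grad(nll, theta)
    losses.append(nll(theta))

r_hat, T_hat = unpack(theta)
print(f"NLL: {losses[0]:.1f} -> {losses[-1]:.1f}")
```

Because the dynamics enter the likelihood both directly (through observed transitions) and indirectly (through the policy computed by soft value iteration), the joint objective couples the two estimates, which is what lets the approach account for the bias the generating policy imposes on the observed transition samples.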