
Langevin Dynamics for Inverse Reinforcement Learning of Stochastic Gradient Algorithms
Inverse reinforcement learning (IRL) aims to estimate the reward function of optimizing agents by observing their response (estimates or actions). This paper considers IRL when noisy estimates of the gradient of a reward function generated by multiple stochastic gradient agents are observed. We present a generalized Langevin dynamics algorithm to estimate the reward function R(θ); specifically, the resulting Langevin algorithm asymptotically generates samples from the distribution proportional to exp(R(θ)). The proposed IRL algorithms use kernel-based passive learning schemes. We also construct multi-kernel passive Langevin algorithms for IRL which are suitable for high-dimensional data. The performance of the proposed IRL algorithms is illustrated on examples in adaptive Bayesian learning, logistic regression (a high-dimensional problem) and constrained Markov decision processes. We prove weak convergence of the proposed IRL algorithms using martingale averaging methods. We also analyze the tracking performance of the IRL algorithms in non-stationary environments where the utility function R(θ) jump changes over time as a slow Markov chain.
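To make the kernel-based passive Langevin idea concrete, the sketch below is a minimal toy illustration, not the paper's exact algorithm. A learner observes only (θ_k, noisy ∇R(θ_k)) pairs at evaluation points it does not choose, reweights each observed gradient with a Gaussian kernel centered at its own iterate, and adds Langevin diffusion noise. The reward R(θ) = -½‖θ‖², the step size `eps`, bandwidth `delta`, and inverse temperature `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2

# Toy reward the observed agents optimize (unknown to the IRL learner):
# R(theta) = -0.5 * ||theta||^2, so grad R(theta) = -theta.
def noisy_grad(theta):
    return -theta + 0.1 * rng.standard_normal(dim)

eps = 0.01     # Langevin step size (illustrative choice)
delta = 0.5    # Gaussian kernel bandwidth (illustrative choice)
beta = 40.0    # inverse temperature: iterates target roughly exp(beta * R)

alpha = rng.standard_normal(dim)   # the IRL learner's Langevin iterate
samples = []
for k in range(30000):
    # Passive setting: the evaluation point theta_k is not chosen by the
    # learner; here it is drawn at random over the region of interest.
    theta = 2.0 * rng.standard_normal(dim)
    g = noisy_grad(theta)
    # Kernel weight: discount gradients observed far from alpha_k.
    w = np.exp(-np.sum((theta - alpha) ** 2) / (2 * delta ** 2))
    # Passive Langevin step: kernel-weighted drift plus diffusion noise.
    alpha = (alpha + eps * beta * w * g
             + np.sqrt(2.0 * eps) * rng.standard_normal(dim))
    if k >= 5000:                  # discard burn-in
        samples.append(alpha.copy())

samples = np.asarray(samples)
# The iterates concentrate near the maximizer of R (the origin).
print(np.round(samples.mean(axis=0), 2))
```

Because the drift uses gradients evaluated at θ_k rather than at the iterate α_k, the kernel weight is what makes the passive scheme consistent: it keeps only observations informative about the gradient near α_k.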