KL-learning: Online solution of Kullback-Leibler control problems

12/09/2011
by Joris Bierkens et al.

We introduce a stochastic approximation method for the solution of an ergodic Kullback-Leibler (KL) control problem. A KL control problem is a Markov decision process on a finite state space in which the control cost is proportional to the Kullback-Leibler divergence of the controlled transition probabilities with respect to the uncontrolled transition probabilities. The algorithm discussed in this work admits a sound theoretical analysis via the ODE method. In a numerical experiment its convergence speed is shown to be comparable to that of the power method and the related Z-learning algorithm. It may serve as the basis of a reinforcement-learning-style algorithm for Markov decision problems.
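To make the setting concrete: in the ergodic case the KL control problem reduces to a principal eigenvalue problem lambda * z = G P z, where P is the uncontrolled transition matrix, q is the per-state cost, G = diag(exp(-q)), z is the desirability vector, and -log(lambda) is the optimal average cost. The power method iterates G P directly, while Z-learning-style schemes estimate z online from samples of the uncontrolled chain. The following Python sketch illustrates this sampled-update structure only; it is not the authors' exact KL-learning algorithm, and the toy chain, cost function, step-size schedule, and normalization step are all placeholder assumptions.

import numpy as np

# Sketch of a Z-learning-style stochastic approximation for an ergodic
# KL control problem. NOT the paper's exact KL-learning update; the toy
# chain, costs, step sizes, and normalization are assumptions.

rng = np.random.default_rng(0)
n = 5                                                       # number of states
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # uncontrolled transitions
q = rng.random(n)                                           # per-state cost
G = np.diag(np.exp(-q))

# Reference solution via the power method: G @ P has a positive principal
# eigenpair (lambda, z) by Perron-Frobenius.
z = np.ones(n)
for _ in range(1000):
    z = G @ P @ z
    z /= np.linalg.norm(z)
lam = z @ (G @ P @ z)                                       # principal eigenvalue

# Online estimate from a single trajectory of the *uncontrolled* chain:
#   z(x_t) <- (1 - a) z(x_t) + a exp(-q(x_t)) z(x_{t+1}),
# whose expected update direction at state x is (G P z)(x) - z(x); a
# normalization step keeps the iterate from decaying in the ergodic setting.
z_hat = np.ones(n)
x = 0
for t in range(1, 100_000):
    x_next = rng.choice(n, p=P[x])
    a = 1.0 / (1.0 + 0.001 * t)                             # diminishing step size (assumption)
    z_hat[x] += a * (np.exp(-q[x]) * z_hat[x_next] - z_hat[x])
    z_hat /= np.linalg.norm(z_hat)                          # stay on the unit sphere
    x = x_next

print("optimal average cost -log(lambda):", -np.log(lam))
print("power method z:  ", np.round(z, 3))
print("online estimate: ", np.round(z_hat, 3))

The normalization after each update plays the role that the eigenvalue estimate plays in the paper's analysis: without it, the ergodic iterate would shrink or grow geometrically at rate lambda instead of settling on the eigenvector.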
