Logistic Q-Learning

10/21/2020
by Joan Bas-Serrano, et al.

We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs. The method is closely related to the classic Relative Entropy Policy Search (REPS) algorithm of Peters et al. (2010), with the key difference that our method introduces a Q-function that enables an efficient, exact, model-free implementation. The main feature of our algorithm (called QREPS) is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error. We provide a practical saddle-point optimization method for minimizing this loss function, along with an error-propagation analysis that relates the quality of the individual updates to the performance of the output policy. Finally, we demonstrate the effectiveness of our method on a range of benchmark problems.
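The abstract's key technical ingredient is a convex policy-evaluation loss that replaces the squared Bellman error. As a rough, hedged illustration of what such a loss can look like, the sketch below contrasts the usual squared TD error with a log-sum-exp ("logistic") surrogate of the TD errors in a tabular setting; the temperature parameter eta, the function names, and the omission of QREPS's initial-state and saddle-point (dual) terms are all assumptions made for illustration, not the paper's exact objective.

```python
import numpy as np


def td_errors(q, transitions, gamma=0.99):
    """TD errors delta = r + gamma * max_a' Q(s', a') - Q(s, a) for a batch."""
    return np.array([
        r + gamma * np.max(q[s_next]) - q[s, a]
        for (s, a, r, s_next) in transitions
    ])


def squared_bellman_error(q, transitions, gamma=0.99):
    """Mean squared TD error (the baseline loss the abstract argues against)."""
    return np.mean(td_errors(q, transitions, gamma) ** 2)


def logistic_bellman_loss(q, transitions, eta=1.0, gamma=0.99):
    """Log-sum-exp ("logistic") surrogate of the TD errors.

    Hedged sketch only: the actual QREPS objective also involves the
    initial-state distribution and a saddle-point formulation, both
    omitted here.
    """
    deltas = eta * td_errors(q, transitions, gamma)
    m = np.max(deltas)  # shift for numerical stability of the log-sum-exp
    return (m + np.log(np.mean(np.exp(deltas - m)))) / eta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=(5, 3))  # hypothetical tabular MDP: 5 states, 3 actions
    transitions = [(0, 1, 1.0, 2), (2, 0, 0.0, 4), (4, 2, -1.0, 1)]  # (s, a, r, s')
    print("squared Bellman error:", squared_bellman_error(q, transitions))
    print("logistic Bellman loss:", logistic_bellman_loss(q, transitions, eta=2.0))
```

Note the structural difference this sketch is meant to highlight: each TD error is convex in Q (a max over next actions minus a linear term), and a log-sum-exp of convex functions remains convex, whereas squaring a sign-changing TD error generally does not, which is the sense in which a loss of this flavor can be a theoretically sounder objective than the squared Bellman error.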


Related research

- A unified view of entropy-regularized Markov decision processes (05/22/2017)
- Model-Free Robust Reinforcement Learning with Linear Function Approximation (06/20/2020)
- Relative Entropy Regularized Policy Iteration (12/05/2018)
- Smoothed Dual Embedding Control (12/29/2017)
- KL-Entropy-Regularized RL with a Generative Model is Minimax Optimal (05/27/2022)
- Gaussian Process Policy Optimization (03/02/2020)
- Convex Q Learning in a Stochastic Environment: Extended Version (09/10/2023)
