A Relation Analysis of Markov Decision Process Frameworks

08/18/2020
by Tien Mai, et al.

We study the relations among different Markov Decision Process (MDP) frameworks in the machine learning and econometrics literatures: the standard MDP, the entropy-regularized and general regularized MDP, and the stochastic MDP, where the latter assumes that the reward function is stochastic and follows a given distribution. We show that the entropy-regularized MDP is equivalent to a stochastic MDP model and is strictly subsumed by the general regularized MDP. Moreover, we propose a distributional stochastic MDP framework in which the distribution of the reward function is ambiguous. We further show that the distributional stochastic MDP is equivalent to the regularized MDP, in the sense that the two always yield the same optimal policies. We also establish a connection between stochastic/regularized MDP and constrained MDP. Our work gives a unified view of several important MDP frameworks, which leads to new ways of interpreting the (entropy/general) regularized MDP through the lens of stochastic rewards, and vice versa. Given the recent popularity of regularized MDP in (deep) reinforcement learning, our work brings new understanding of how such algorithmic schemes work and suggests ideas for developing new ones.
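The claimed equivalence between the entropy-regularized MDP and a stochastic-reward MDP can be illustrated with the classical Gumbel-max identity: if each action value is perturbed by i.i.d. standard Gumbel noise, the expected maximum equals the log-sum-exp ("soft max") of the values plus the Euler-Mascheroni constant. The sketch below is a Monte Carlo check of that identity, not code from the paper; the action values `q` are hypothetical.

```python
import math
import random

# Illustrative sketch (assumed, not from the paper): the Gumbel-max link
# between a stochastic-reward choice and an entropy-regularized value.
# For i.i.d. Gumbel(0,1) noise eps_a:
#   E[max_a (q_a + eps_a)] = log(sum_a exp(q_a)) + gamma,
# where gamma is the Euler-Mascheroni constant (the mean of Gumbel(0,1)).

random.seed(0)
EULER_GAMMA = 0.5772156649015329

q = [1.0, 0.5, -0.2]  # hypothetical action values

def gumbel() -> float:
    """Draw one standard Gumbel(0,1) sample via inverse transform."""
    u = random.random()
    return -math.log(-math.log(u))

# Monte Carlo estimate of the expected maximum of perturbed values.
n = 200_000
mc_estimate = sum(max(qa + gumbel() for qa in q) for _ in range(n)) / n

# Closed-form entropy-regularized ("soft") value plus the Gumbel mean.
soft_value = math.log(sum(math.exp(qa) for qa in q)) + EULER_GAMMA

print(f"Monte Carlo: {mc_estimate:.3f}, soft value: {soft_value:.3f}")
```

The two quantities agree up to Monte Carlo error, which is the sense in which an agent maximizing Gumbel-perturbed rewards behaves like an agent solving an entropy-regularized problem.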


