A Notation for Markov Decision Processes

12/30/2015 · by Philip S. Thomas, et al.

This paper specifies a notation for Markov decision processes.



1 Introduction

Many reinforcement learning (RL) research papers contain paragraphs that define Markov decision processes (MDPs). These paragraphs take up space that could otherwise be used to present more useful content. In this paper we specify a notation for MDPs that can be used by other papers. Declaring the use of this notation with a single sentence can replace several paragraphs of notational specification in other papers. Importantly, the notation that we define is a common foundation that appears in many RL papers; it is not meant to be a complete notation for an entire paper.

We refer to our notation as the Markov Decision Process Notation, version 1 or MDPNv1. It can be invoked in research papers with the sentence:

“We use the notational standard MDPNv1.”

This sentence denotes that the notation specified in this document should be inserted at the current location. One challenge with this system is that any reasonably complete notation will define a large subset of the commonly used mathematical symbols, some of which an author may wish to use with a meaning other than that specified in MDPNv1. To overcome this problem, definitions that occur after the sentence invoking MDPNv1 can modify or overwrite the definitions in MDPNv1.

For example, an author may write "We assume that the state and action sets are finite," which overrules MDPNv1's more general definition of the state and action sets, or "Let $\mathcal{A}$ denote the set of all possible advantage functions," which overwrites the definition of $\mathcal{A}$ in MDPNv1 (where it is the set of possible actions). In general, MDPNv1 should serve as a notational foundation, which the author is free to build upon or remove from to best suit the needs of the paper.

This paper is not an introduction to RL. It assumes that the reader is already familiar with the basic concepts of RL, as covered by Sutton and Barto (1998). Also, we try to minimize the number of assumptions that we make. This means that authors using our notation will have to specify their own assumptions, rather than specify which of our assumptions must be removed.

Billy Okal has provided a style file for MDPNv1 at https://github.com/makokal/MDPN. Not only does this style file allow you to easily switch between the different notational variants defined below, but using it allows you to change the notation used in your paper by modifying the style file rather than by editing every equation individually.

2 Discrete and Continuous Random Variables

In general, the state, action, and reward at time $t$ can be discrete or continuous random variables, or even a mixture of both. A discrete random variable, $X$, that takes values in a set, $\mathcal{X}$, has a probability mass function (PMF), $f$, such that $f(x) = \Pr(X = x)$ for all $x \in \mathcal{X}$. However, continuous random variables (and random variables that are a mixture of discrete and continuous) are not characterized by a PMF. Although measure-theoretic probability offers a unified notation for discussing arbitrary random variables, its use is not commonplace in the reinforcement learning literature, and so it may dilute the message of a paper and shrink a paper's audience.

We therefore introduce an abuse of notation into MDPNv1: notationally, we treat the state, action, and reward as though they are discrete random variables, even though they may not be. That is, our expressions are written using PMFs for distributions over states, actions, and rewards, even if they should technically be written using probability measures. The author of a paper using MDPNv1 should ensure that all claims carry over to states, actions, and rewards that are arbitrary random variables, or should explicitly restrict the states, actions, and rewards to be discrete random variables or continuous random variables that have density functions.
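For example, under this abuse of notation the expected reward at time $t$ can be written with a sum over the reward set, even when $R_t$ is not discrete (in which case the sum should be read as the appropriate integral):

```latex
\mathbf{E}[R_t \mid S_t = s, A_t = a]
  = \sum_{r \in \mathcal{R}} r \, \Pr(R_t = r \mid S_t = s, A_t = a).
```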

3 Markov Decision Process Notation, Version 1 (MDPNv1)

Let a Markov decision process (MDP) be a tuple, $(\mathcal{S}, \mathcal{A}, \mathcal{R}, P, R, d_0, \gamma)$, where

  1. We use $t \in \mathbb{N}_{\geq 0}$ to denote the time step, where $\mathbb{N}_{\geq 0}$ denotes the natural numbers including zero.

  2. $\mathcal{S}$ is the set of possible states that the agent can be in, and is called the state set. The state of the environment at time $t$ is a random variable that we denote by $S_t$. We will typically use $s$ to denote an element of the state set.

  3. $\mathcal{A}$ is the set of possible actions that the agent can select between, and is called the action set. The action chosen by the agent at time $t$ is a random variable that we denote by $A_t$. We will typically use $a$ to denote a specific element of the action set.

  4. $\mathcal{R}$ is the set of possible rewards that the agent can receive, and is called the reward set. The reward provided to the agent at time $t$ is a random variable that we denote by $R_t$. We will typically use $r$ to denote an element of the reward set. Let $r_{\min}$ and $r_{\max}$ be the infimum and supremum of $\mathcal{R}$, respectively.

  5. $P$ is called the transition function. For all $(s, a, s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}$, let $P(s, a, s') := \Pr(S_{t+1} = s' \mid S_t = s, A_t = a)$ (notice that we use $:=$ to denote "is defined to be"). That is, $P$ characterizes the distribution over states at time $t+1$ given the state and action at time $t$. We introduce a Markov assumption: the distribution over $S_{t+1}$ is independent of all prior events given $S_t$ and $A_t$. That is, the distribution over states at time $t+1$ is fully determined by the state and action at time $t$, and this distribution is characterized by $P$.

    We allow three alternate notations for $P$. First, let $P(s' \mid s, a) := P(s, a, s')$. This form takes approximately the same amount of space, but makes it more clear that $P$ is a conditional distribution over the next state given the current state and action. Second, let $P_s^a(s') := P(s, a, s')$. This notation moves terms into subscripts and superscripts in order to save some space. Third, let $P_{s,a}^{s'} := P(s, a, s')$. This final form is particularly useful when space is limited. Although the author is allowed to select between the four notations for $P$, the use of $P$ should be consistent within each paper.

  6. $R$ is called the reward function. For all $(s, a, r) \in \mathcal{S} \times \mathcal{A} \times \mathcal{R}$, let $R(s, a, r) := \Pr(R_t = r \mid S_t = s, A_t = a)$. That is, $R$ characterizes the distribution over rewards at time $t$ given $S_t$ and $A_t$. We introduce another Markov assumption: the distribution of $R_t$ is independent of all prior events given $S_t$ and $A_t$. Also notice that the reward function, $R$, has no subscripts or superscripts, unlike the visually similar reward at time $t$, $R_t$.

    As with $P$, we allow for several alternate notations for $R$ that the author is free to select from. Let $R(r \mid s, a) := R(s, a, r)$, $R_s^a(r) := R(s, a, r)$, and $R_{s,a}^{r} := R(s, a, r)$.

  7. We call $d_0$ the initial state distribution, since $d_0(s) := \Pr(S_0 = s)$ for all $s \in \mathcal{S}$.

  8. Let $\gamma \in [0, 1]$ be the reward discount parameter, which may be used to discount rewards based on how far in the future they occur.
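The two Markov assumptions in items 5 and 6 can be stated explicitly as conditional-independence equations (a restatement of the definitions above, for all $s' \in \mathcal{S}$ and $r \in \mathcal{R}$):

```latex
\Pr(S_{t+1} = s' \mid S_t, A_t, R_{t-1}, S_{t-1}, A_{t-1}, \dots, S_0, A_0)
  = \Pr(S_{t+1} = s' \mid S_t, A_t), \\
\Pr(R_t = r \mid S_t, A_t, R_{t-1}, S_{t-1}, A_{t-1}, \dots, S_0, A_0)
  = \Pr(R_t = r \mid S_t, A_t).
```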

Let $\pi: \mathcal{S} \times \mathcal{A} \to [0, 1]$ be called a policy. A policy specifies the distribution over $A_t$ given $S_t$, i.e., $\pi(s, a) := \Pr(A_t = a \mid S_t = s)$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. All policies are assumed to be Markovian: the distribution of $A_t$ is independent of prior events given $S_t$. Let $\Pi$ be the set of all possible policies. If there exists a state, $s$, and two unique actions, $a_1 \neq a_2$, where both $a_1$ and $a_2$ have non-zero probability in $s$, i.e., $\pi(s, a_1) > 0$ and $\pi(s, a_2) > 0$, then we refer to $\pi$ as a stochastic policy, and we refer to it as a deterministic policy otherwise. Let $\pi: \mathcal{S} \to \mathcal{A}$ be an alternate definition of a deterministic policy. We allow several additional shorthands: $\pi(a \mid s) := \pi(s, a)$ and $\pi_s(a) := \pi(s, a)$.

We abuse notation and give $\pi$ a second definition. It should be clear from context which definition is intended. Let $\pi: \mathcal{S} \times \mathcal{A} \times \mathbb{R}^n \to [0, 1]$, where $n \in \mathbb{N}_{>0}$. Let $\boldsymbol{\theta} \in \mathbb{R}^n$ denote an $n$-dimensional vector called the policy parameters, and let $\pi(s, a, \boldsymbol{\theta}) := \Pr(A_t = a \mid S_t = s, \boldsymbol{\theta})$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. We call this definition of $\pi$ a parameterized policy. We allow several shorthands: $\pi(a \mid s, \boldsymbol{\theta}) := \pi(s, a, \boldsymbol{\theta})$ and $\pi_{\boldsymbol{\theta}}(a \mid s) := \pi(s, a, \boldsymbol{\theta})$. Similarly, $\pi: \mathcal{S} \times \mathbb{R}^n \to \mathcal{A}$ is a parameterized deterministic policy, and $\pi_{\boldsymbol{\theta}}(s) := \pi(s, \boldsymbol{\theta})$.
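As a concrete illustration (not part of MDPNv1, and assuming a feature vector $\phi(s, a)$ that the author would need to define), a common parameterized policy for finite action sets is the linear softmax policy:

```latex
\pi(s, a, \boldsymbol{\theta})
  = \frac{\exp\big(\boldsymbol{\theta}^{\top} \phi(s, a)\big)}
         {\sum_{a' \in \mathcal{A}} \exp\big(\boldsymbol{\theta}^{\top} \phi(s, a')\big)}.
```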

An episode is one sequence of states, actions, and rewards, starting from $t = 0$ and continuing indefinitely. An MDP may have a state, $s_\infty$, called the terminal absorbing state. In the state $s_\infty$ only one action can be taken. Taking this action causes a transition back to $s_\infty$ and results in a reward of zero. Once the agent reaches $s_\infty$ the system has effectively terminated, since there are no more decisions to be made or rewards to collect. If a state, $s$, always causes a transition to $s_\infty$ with a reward of zero, then we call $s$ a terminal state. Let $T$ be the horizon of the MDP, i.e., the smallest time step such that $\Pr(S_t = s_\infty) = 1$ for all $t \geq T$, and $T = \infty$ if no such time step exists.
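The definitions above can be exercised with a short simulation. This is a minimal sketch, not part of the standard: it assumes finite state, action, and reward sets, represents each distribution ($P$, $R$, $d_0$, and $\pi$) as a Python dictionary mapping outcomes to probabilities, and samples one episode until the terminal absorbing state is reached.

```python
import random

def sample(dist):
    """Draw one outcome from a finite distribution {outcome: probability}."""
    u, cumulative = random.random(), 0.0
    for outcome, p in dist.items():
        cumulative += p
        if u < cumulative:
            return outcome
    return outcome  # guard against floating-point round-off

def run_episode(P, R, d0, pi, s_inf, max_steps=100):
    """Sample (s, a, r) triples until the terminal absorbing state s_inf."""
    s = sample(d0)
    history = []
    for _ in range(max_steps):
        if s == s_inf:
            break
        a = sample(pi[s])           # action from the policy pi(s, a)
        r = sample(R[(s, a)])       # reward from the reward function R(s, a, r)
        s_next = sample(P[(s, a)])  # next state from the transition function P(s, a, s')
        history.append((s, a, r))
        s = s_next
    return history

# A two-state example: from s0 the single action "go" yields reward 1
# and moves to the terminal absorbing state "end".
P = {("s0", "go"): {"end": 1.0}}   # transition function
R = {("s0", "go"): {1.0: 1.0}}     # reward function
d0 = {"s0": 1.0}                   # initial state distribution
pi = {"s0": {"go": 1.0}}           # policy
episode = run_episode(P, R, d0, pi, s_inf="end")
```

In this example the episode contains a single triple before the agent reaches the terminal absorbing state, after which no further decisions are made.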

4 Discussion

In this section we discuss some of the decisions that we made regarding notation. In general, we use calligraphic capital letters for sets, e.g., $\mathcal{A}$. Elements of sets are lowercase letters that are typically similar to the set they belong to, e.g., $a \in \mathcal{A}$. Random variables are denoted by capital letters, e.g., $A_t$, and their instantiations by lowercase letters, e.g., $a$. Vectors are bold lowercase letters, like $\boldsymbol{\theta}$.

Although we would have liked to use lowercase letters for real-valued functions, we use $P$ and $R$ to denote real-valued functions. This is for two different reasons. First, we use $P$ rather than $p$ because $p$ is a commonly used symbol that we would like to avoid defining (notice that we have not defined $f$, $g$, or $h$, all of which are commonly used symbols). Second, we use $R$ because $r$ is already used to denote an element of $\mathcal{R}$, and to preserve alliteration we do not want to use a different letter. Although $R$ is visually similar to $R_t$, it is typically clear from context which is intended, even if the reader does not notice the subscript or lack thereof.

Sometimes the set of actions that can be selected by the agent changes depending on the state of the environment. We do not include this in our notation because it is rarely used in the literature. If the author wishes to include this additional structure in an MDP, then we recommend using $\mathcal{A}(s)$ to denote the set of actions that can be chosen in the state $s$. However, this is not part of MDPNv1, and must be specified by the author.

Often MDPs are defined without explicitly defining the set of possible rewards, $\mathcal{R}$. We include $\mathcal{R}$ so that the author can write expressions like $\sum_{r \in \mathcal{R}} f(r)$ for some function $f$. This is useful because the two obvious choices for implicit definitions of $\mathcal{R}$ both have problems: $\mathcal{R} = \mathbb{R}$, while technically valid, may be confusing since the reals are typically integrated over, and $\mathcal{R} = \mathbb{Z}$ does not allow for rewards that are not integers.

Although there are many other terms that we could include in MDPNv1, we have decided to only define the terms necessary to define an MDP. This both makes it easier for the reader to remember which terms are defined by MDPNv1 and avoids including controversial definitions. Furthermore, it avoids limiting the setting to only the discounted or average-reward setting (we could define symbols for both settings, but this would be unnecessarily complex).

5 LaTeX Style File Usage

In this section we demonstrate how to use the style file accompanying this text.

  1. The package can be included using any of three options: alpha, beta, kappa.

    % ...
    \usepackage[alpha]{mdpn}  % Most verbose
    %\usepackage[beta]{mdpn}  % Compressed
    %\usepackage[kappa]{mdpn} % Most compressed
    % ...
  2. You can use any of the defined commands in text as:

    % ...
    Some text $\command$, for example $\sset$ for state set
    % ...

    Some of the commands require a specific number of arguments that should be provided in the order indicated. For example, \T requires three arguments: the current state $s$, the current action $a$, and the next state $s'$. So, \T{s}{a}{s'} will produce $P(s, a, s')$.

  3. Most of the commands allow the usual modifications, such as subscripts and superscripts. For example, \pp (which denotes a parameterized policy) can be modified to \pp_{sub} to attach the subscript sub to the parameterized policy symbol.
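Putting the pieces together, a paper using the style file might contain a snippet like the following. This is a sketch: only \sset, \T, and \pp are taken from the examples above, and the surrounding prose is illustrative.

```latex
\documentclass{article}
\usepackage[alpha]{mdpn}  % most verbose variant

\begin{document}
We use the notational standard MDPNv1.
The agent's state set is $\sset$, and the probability of transitioning
from state $s$ to state $s'$ after taking action $a$ is $\T{s}{a}{s'}$.
We optimize a parameterized policy, $\pp$.
\end{document}
```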


  • Sutton and Barto (1998) R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.