
Concentration bounds for SSP Q-learning for average cost MDPs

06/07/2022
by   Shaan ul Haque, et al.

We derive a concentration bound for a Q-learning algorithm for average cost Markov decision processes based on an equivalent shortest path problem, and compare it numerically with the alternative scheme based on relative value iteration.



I Introduction

Q-learning, introduced originally for discounted cost Markov decision processes in [6], is a data-driven reinforcement learning algorithm for learning the ‘Q-factor’ function arising from the dynamic programming equation for the infinite horizon discounted cost problem. It can be viewed as a stochastic approximation counterpart of the classical value iteration for computing the value function arising as the solution of the corresponding dynamic programming equation. Going over from the value function to the so-called Q-factors facilitates an interchange of the conditional expectation and the nonlinearity (the minimization, to be precise) in the recursion, making it amenable to stochastic approximation. These ideas, however, do not extend automatically to the average cost problem, which is harder to analyze even when the model (i.e., the controlled transition probabilities) is readily available. The reason for this is the non-contractive nature of the associated Bellman operator. This extension was achieved in [1] in two different ways. The first, called RVI Q-learning, is a stochastic approximation counterpart of the ‘relative value iteration’ (or RVI) algorithm for average cost [3] and is close in spirit to the original. However, there is another algorithm, dubbed SSP Q-learning, based on an alternative scheme due to Bertsekas [2], which does involve a contraction under a weighted max-norm. Motivated by a recent paper on concentration for stochastic approximation [4], we present here a similar concentration bound for SSP Q-learning, exploiting its explicitly contractive nature. This contractivity is missing in RVI Q-learning, which leads to non-trivial technical issues in providing finite time guarantees for it (see, e.g., [7]). We also provide an empirical comparison between the two schemes with suggestive outcomes.
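To fix ideas, the two tabular updates just mentioned can be sketched as follows (an illustrative sketch of ours; the interface and the particular offset function are assumptions rather than prescriptions of [1] or [6]). The SSP variant analysed in this paper is written out in Section II-B.

```python
import numpy as np

def q_step_discounted(Q, i, u, cost, j, alpha, a):
    """One tabular Q-learning step for the discounted cost problem (cf. [6]).

    Q      : |S| x |A| array of Q-factors (updated in place)
    (i, u) : current state-action pair
    cost   : observed running cost c(i, u)
    j      : next state sampled from p(.|i, u)
    alpha  : discount factor in (0, 1)
    a      : stepsize
    """
    target = cost + alpha * Q[j].min()     # sampled one-step Bellman target
    Q[i, u] += a * (target - Q[i, u])      # stochastic approximation correction


def q_step_rvi(Q, i, u, cost, j, a, ref=(0, 0)):
    """One RVI Q-learning step for the average cost problem (cf. [1]).

    The scalar offset plays the role of the average cost; here it is taken to
    be Q at a fixed reference pair `ref`, one of the admissible choices in [1].
    """
    target = cost - Q[ref] + Q[j].min()    # offset-corrected (relative) target
    Q[i, u] += a * (target - Q[i, u])
```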

Section II builds up the background and Section III states the key assumptions and the main result. Its proof follows in Section IV. Section V describes the numerical experiments.

II Background

II-A Preliminaries

We consider a controlled Markov chain $\{X_n\}$ on a finite state space $S$ with a finite action space $A$ and transition probabilities $p(j|i,u) :=$ the probability of transition from $i$ to $j$ under action $u$, for $i, j \in S$, $u \in A$. Associated with this transition is a “running cost” $c(i,u)$ and the aim is to choose actions $\{U_n\}$ non-anticipatively (i.e., conditionally independent of the future state trajectory given past states and actions) so as to minimize the “average cost”

(1)   $\limsup_{N \uparrow \infty} \frac{1}{N} \sum_{n=0}^{N-1} E\big[c(X_n, U_n)\big].$

We shall be interested in “stationary policies” wherein $U_n = v(X_n)$ for a map $v : S \to A$. It is known that an optimal stationary policy exists under the following “unichain” condition, which we assume throughout: under any stationary policy the chain has a single communicating class containing a common state (say, $i_0$). The dynamic programming equation for the above problem is [3]

(2)   $V(i) = \min_{u \in A}\Big[c(i,u) - \beta + \sum_{j \in S} p(j|i,u)\, V(j)\Big], \qquad i \in S.$

The unknowns are $(V, \beta)$, where $\beta$ is uniquely characterized as the optimal average cost and $V$ is unique only up to an additive constant. The associated “Q-factor” is

(3)   $Q(i,u) = c(i,u) - \beta + \sum_{j \in S} p(j|i,u)\, \min_{v \in A} Q(j,v), \qquad i \in S,\ u \in A,$

so that $V(i) = \min_{u} Q(i,u)$ solves (2).

The aim is to get these Q-factors even when we do not know the transition probabilities, but have access to a black box which can generate random variables according to the above transition probabilities.
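Concretely, the black box can be thought of as a sampler that, given a state-action pair, returns the running cost and a next state drawn from the unknown transition law. A minimal illustrative sketch (the class and its interface are ours, not from the paper):

```python
import numpy as np

class BlackBoxMDP:
    """Hides the transition probabilities; exposes only sampled transitions."""

    def __init__(self, P, c, seed=0):
        self.P = np.asarray(P)    # shape (|S|, |A|, |S|); P[i, u, j] = p(j|i, u)
        self.c = np.asarray(c)    # shape (|S|, |A|); running costs c(i, u)
        self.rng = np.random.default_rng(seed)

    def sample(self, i, u):
        """Return (c(i, u), next state j ~ p(.|i, u))."""
        j = self.rng.choice(self.P.shape[2], p=self.P[i, u])
        return self.c[i, u], int(j)
```

The learning schemes discussed below need only this kind of sampled access to the model.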

II-B SSP Q-learning

Recall the stochastic shortest path problem. Let $\tau := \min\{n > 0 : X_n = i_0\}$, with a running cost $k(i,u)$ and $i_0$ as above. The objective is to minimize

$E\Big[\sum_{n=0}^{\tau - 1} k(X_n, U_n) + h(X_\tau)\Big],$

where $h(\cdot)$ is the terminal cost and $X_\tau = i_0$ by definition of $\tau$. Under our assumptions, $\tau < \infty$ a.s.; in fact, $E[\tau] < \infty$. The dynamic programming equation to solve this problem is given by

$V(i) = \min_{u \in A}\Big[k(i,u) + p(i_0|i,u)\, h(i_0) + \sum_{j \neq i_0} p(j|i,u)\, V(j)\Big], \qquad i \in S.$

Coming back to the average cost problem, SSP Q-learning is based on the observation that the average cost under any stationary policy is simply the ratio of the expected total cost to the expected time between two successive visits to the reference state $i_0$. This connection was exploited by [2] to convert the average cost problem into a stochastic shortest path (SSP) problem. Consider a family of SSP problems parameterized by $\lambda$, with the cost given by $k(i,u) := c(i,u) - \lambda$ for $c$ as above, some scalar $\lambda$, and zero terminal cost. Then the dynamic programming equation for the above SSP problem is

(4a)   $V_\lambda(i) = \min_{u \in A}\Big[c(i,u) - \lambda + \sum_{j \neq i_0} p(j|i,u)\, V_\lambda(j)\Big], \qquad i \in S,$
(4b)   $Q_\lambda(i,u) = c(i,u) - \lambda + \sum_{j \neq i_0} p(j|i,u)\, \min_{v \in A} Q_\lambda(j,v), \qquad i \in S,\ u \in A,$

with $V_\lambda(i) = \min_{u} Q_\lambda(i,u)$.
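To make the ratio observation above explicit (a standard renewal-reward identity, stated in the notation introduced here rather than quoted verbatim from the paper): for a stationary policy $v$, let $\tau$ be the first return time to $i_0$ as above; then

$\beta(v) = \frac{E\big[\sum_{n=0}^{\tau-1} c(X_n, v(X_n))\big]}{E[\tau]}, \qquad \text{equivalently} \qquad E\Big[\sum_{n=0}^{\tau-1}\big(c(X_n, v(X_n)) - \beta(v)\big)\Big] = 0,$

so the expected cost of one cycle under the running cost $c - \lambda$ vanishes precisely when $\lambda$ equals the average cost of that policy.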

For each fixed policy, the cost is linear in $\lambda$ with negative slope. Thus $V_\lambda$, being the lower envelope thereof, is piecewise linear with finitely many linear pieces and concave decreasing in $\lambda$ for each component. When we replace $\lambda$ by $\beta$ and force $V_\beta(i_0) = 0$, we recover (2). This suggests the coupled iterations

(5a)   $Q_{n+1}(i,u) = c(i,u) - \lambda_n + \sum_{j \neq i_0} p(j|i,u)\, \min_{v \in A} Q_n(j,v), \qquad i \in S,\ u \in A,$
(5b)   $\lambda_{n+1} = \lambda_n + a(n)\, \min_{v \in A} Q_n(i_0, v),$

where $\{a(n)\}$ are stepsizes.

The SSP Q-learning scheme for the above problem is [1]

(6a)   $Q_{n+1}(i,u) = Q_n(i,u) + a(n)\Big[c(i,u) - \lambda_n + \mathbb{I}\{X_{n+1}^{i,u} \neq i_0\}\, \min_{v \in A} Q_n\big(X_{n+1}^{i,u}, v\big) - Q_n(i,u)\Big],$
(6b)   $\lambda_{n+1} = \Gamma\Big(\lambda_n + a(n)\, \min_{v \in A} Q_n(i_0, v)\Big),$

where $X_{n+1}^{i,u}$ denotes a sample with law $p(\cdot|i,u)$ generated by the black box.

Here $\Gamma$ is a projection operator onto an interval $[-C, C]$ with $C$ chosen so as to satisfy $C > |\beta|$. Although this assumes some prior knowledge of $\beta$, such a $C$ can be obtained from a bound on the running cost, since $|\beta| \leq \max_{i,u}|c(i,u)|$. This also ensures that (14) below holds. We rewrite the above equations as follows

(7a)
(7b)

and

As observed in [5], the map on Q-factors defined by the right-hand side of (4b) is, for a fixed $\lambda$, a contraction under a certain weighted max-norm

$\|x\|_w := \max_{i} \frac{|x(i)|}{w(i)}$

for an appropriate weight vector $w$ with $w(i) > 0$ for all $i$.
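Putting (6a)-(6b) together in simulation form, a minimal single-trajectory (asynchronous) sketch might look as follows. This is ours and purely illustrative: the exploration rule, stepsize schedule and projection bound are assumptions, not choices mandated by the paper.

```python
import numpy as np

def ssp_q_learning(P, c, i0=0, n_iter=200_000, seed=0):
    """Asynchronous SSP Q-learning along a single trajectory, cf. (6a)-(6b).

    P  : array (|S|, |A|, |S|) with P[i, u, j] = p(j|i, u)
    c  : array (|S|, |A|) of running costs
    i0 : reference state from the unichain condition
    """
    rng = np.random.default_rng(seed)
    S, A = c.shape
    Q = np.zeros((S, A))
    lam = 0.0
    C = np.abs(c).max()                        # projection bound, so that |beta| <= C
    i = i0
    for n in range(n_iter):
        a_n = 1.0 / (1.0 + n / 100.0)          # illustrative stepsize choice
        u = int(rng.integers(A))               # uniform exploration, for illustration
        j = int(rng.choice(S, p=P[i, u]))      # black-box transition sample
        cont = 0.0 if j == i0 else Q[j].min()  # reaching i0 ends the SSP "cycle"
        Q[i, u] += a_n * (c[i, u] - lam + cont - Q[i, u])      # cf. (6a)
        lam = float(np.clip(lam + a_n * Q[i0].min(), -C, C))   # cf. (6b)
        i = j
    return Q, lam
```

Under the assumptions of [1], $\lambda_n$ converges to the optimal average cost $\beta$ and $Q_n$ to the associated SSP Q-factors; the result below quantifies how concentrated the iterates are after finitely many steps.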

III Main Result

We state our main theorem in this section, after setting up the notation and assumptions. The assumptions are specifically geared for the SSP Q-learning applications in Section II-B, as will become apparent.

Consider the coupled iteration

(8)
(9)

for $n \geq 0$. Here:

  • is the ‘Markov noise’ taking values in a finite state space , i.e.,

    where for each , is the transition probability of an irreducible Markov chain on with unique stationary distribution . We assume that the map is Lipschitz, i.e., for some ,

    By Cramer’s rule, is a rational function of with a non-vanishing denominator, so the map is similarly Lipschitz, i.e., for some ,

    See Appendix B, [4] for some bounds on .

  • is, for each , an -valued martingale difference sequence parametrized by , with respect to the increasing family of -fields , . That is,

    (10)

    where is the zero vector. We also assume the componentwise bound: for some ,

    (11)
  • satisfies

    (12)

    for some . By the contraction mapping theorem, this implies that has a unique fixed point (i.e., ). We assume that is independent of , i.e., there exists a such that

    (13)

    We also assume that the map is Lipschitz (w.l.o.g., uniformly in and ). Let the common Lipschitz constant be , i.e.,

    We assume that is concave piecewise linear and decreasing in . Furthermore, is assumed to satisfy

    (14)
  • Moreover, we assume that is Lipschitz with Lipschitz constant : for all

    (15)
  • is a sequence of stepsizes satisfying

    (16)

    and is assumed to be eventually non-increasing, i.e., there exists such that . Since , there exists such that for all . (Observe that we do not require the classical square-summability condition in stochastic approximation, viz., $\sum_n a(n)^2 < \infty$. This is because the contractive nature of our iterates gives us an additional handle on errors by putting less weight on past errors; a similar effect was observed in [4].) We further assume that . So, for all for some and . We also assume that there exists such that , i.e., for all for some and . Larger values of and and smaller values of improve the main result presented below. The role this assumption plays in our bounds will become clear later. Define , i.e., , is non-increasing after and . Also, it is assumed that the sequence , i.e., . An illustrative example of an admissible stepsize schedule follows this list.
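As an illustration only (this example is ours, not the paper's): polynomial stepsizes

$a(n) = \frac{a_0}{(n+1)^q}, \qquad a_0 > 0,\ q \in (0, 1],$

decrease to zero, sum to infinity, are non-increasing from the start, and satisfy $\sup_n a(n)/a(n+1) = 2^q < \infty$. Square-summability would additionally require $q > 1/2$, which, as noted above, is not needed here.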

For , we further define:

Our main result is as follows:

Theorem 1

(a) Let . Then there exist finite positive constants , , and , depending on , such that for , and , the inequality

(17)

holds with probability exceeding

(18)
(19)

(b) There exist finite constants , and an large enough such that for , the inequality

(20)

holds with probability exceeding

(21)
(22)

IV Proof

We begin with a lemma adapted from [4].

Lemma 1

a.s.

Using (14), we have

For , define if and otherwise. Note that, since , for all . Then

Now . Suppose

(23)

for some . Then,

By induction, (23) holds for all , which completes the proof of Lemma 1.

IV-A Concentration bound for the first iteration

Define and for :

(24)
(25)

We use the following theorem adapted from [4], which gives a concentration inequality for the stochastic approximation algorithm with Markov noise.

Theorem 2

Let . Then there exist finite constants , depending on , such that for , and , the inequality

(26)

holds with probability exceeding

(27)
(28)

Since , we have

(29)

The map is piecewise linear and concave decreasing, and therefore so is the map . By (III) and (13), we have the following lemma.

Lemma 2

From the definition of , we have

We have suppressed the subscript of , which is irrelevant by virtue of (13). Let denote the standard basis vectors. Then the r.h.s. in the above can be written as

Thus we finally have

which leads us to the claim that

where .

To get a bound on we use the nonexpansive property of the projection operator as follows

where we use the fact that . Combining the above inequalities, we get

(30)

Thus,

(31)

where . Since is bounded by Lemma 1, . Iterating (31) for , we get,

(32)
(33)

where and . The summation in the last term can be bounded as

(34)

where . Note that for any ,

and hence

This implies that

(35)

Hence

(36)

Combining the above,

(37)

Combining (37) with Theorem 2 yields Theorem 1(a).

IV-B Concentration bound for the second iteration

The second iteration is given by

(38)

Let . Subtracting from both sides, we get:

(39)

Since the map is concave decreasing and piecewise linear, we have for some finite constant such that

Replace by and by . Since :

Thus,