On the Use of Non-Stationary Policies for Infinite-Horizon Discounted Markov Decision Processes

03/25/2012
by   Bruno Scherrer, et al.

We consider infinite-horizon γ-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. We consider the Value Iteration algorithm and the sequence of policies π_1,...,π_k it implicitly generates until some iteration k. We provide performance bounds for non-stationary policies involving the last m generated policies that reduce the state-of-the-art bound for the last stationary policy π_k by a factor (1-γ)/(1-γ^m). In particular, the use of non-stationary policies makes it possible to reduce the usual asymptotic performance bound of Value Iteration with errors bounded by ϵ at each iteration from γ/(1-γ)^2 · ϵ to γ/(1-γ) · ϵ, which is significant in the usual situation where γ is close to 1. Given Bellman operators that can only be computed with some error ϵ, a surprising consequence of this result is that the problem of "computing an approximately optimal non-stationary policy" is much simpler than that of "computing an approximately optimal stationary policy", and even slightly simpler than that of "approximately computing the value of some fixed policy", since this last problem only has a guarantee of 1/(1-γ) · ϵ.
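The construction behind these bounds is easy to state in code. Below is a minimal sketch, not taken from the paper: it assumes a hypothetical random toy MDP (the arrays P and r, and the helper names bellman_optimality_backup, policy_backup, and evaluate_periodic are illustrative, not the paper's). We run k steps of Value Iteration, record the greedy policies it implicitly generates, then evaluate the periodic non-stationary policy that loops over the last m of them.

```python
import numpy as np

# Minimal sketch (not the paper's code): tabular Value Iteration on a
# hypothetical toy MDP, keeping the greedy policies pi_1, ..., pi_k that the
# algorithm implicitly generates, then evaluating the periodic non-stationary
# policy that cycles through the last m of them.

gamma = 0.9
n_states, n_actions = 4, 2
rng = np.random.default_rng(0)
# P[a, s] is the next-state distribution after taking action a in state s.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
r = rng.random((n_states, n_actions))  # r[s, a]: immediate reward

def bellman_optimality_backup(V):
    """One application of the Bellman optimality operator T; returns (TV, greedy policy)."""
    Q = r + gamma * np.einsum("ast,t->sa", P, V)
    return Q.max(axis=1), Q.argmax(axis=1)

def policy_backup(V, pi):
    """One application of the Bellman operator T_pi of a deterministic policy pi."""
    s = np.arange(n_states)
    return r[s, pi] + gamma * P[pi, s] @ V

# Run k iterations of Value Iteration, recording each greedy policy pi_i.
k, m = 30, 5
V = np.zeros(n_states)
policies = []
for _ in range(k):
    V, pi = bellman_optimality_backup(V)
    policies.append(pi)

# The non-stationary policy loops over (pi_k, pi_{k-1}, ..., pi_{k-m+1}); its
# value is the fixed point of the composed operator T_{pi_k} ... T_{pi_{k-m+1}},
# approximated here by iterating that composition until convergence.
def evaluate_periodic(last_m, n_sweeps=500):
    V = np.zeros(n_states)
    for _ in range(n_sweeps):
        for pi in last_m:  # innermost operator (pi_{k-m+1}) is applied first
            V = policy_backup(V, pi)
    return V

V_nonstat = evaluate_periodic(policies[-m:])
V_stat = evaluate_periodic([policies[-1]])  # value of the last stationary policy pi_k
print("value of periodic non-stationary policy:", V_nonstat)
print("value of last stationary policy pi_k:  ", V_stat)
```

With exact backups, as here, the late greedy policies coincide and both values agree; the (1-γ)/(1-γ^m) improvement in the bound only matters when each backup is computed with some error ϵ, which one could simulate by perturbing V inside the Value Iteration loop.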


