Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes

02/06/2013
by   Thomas L. Dean, et al.

We present a method for solving implicit (factored) Markov decision processes (MDPs) with very large state spaces. We introduce a property of state space partitions which we call epsilon-homogeneity. Intuitively, an epsilon-homogeneous partition groups together states that behave approximately the same under all, or some subset of, policies. Borrowing from recent work on model minimization in computer-aided software verification, we present an algorithm that takes a factored representation of an MDP and a value epsilon, 0 <= epsilon <= 1, and computes a factored epsilon-homogeneous partition of the state space. This partition defines a family of related MDPs: those MDPs whose state space equals the blocks of the partition, and whose transition probabilities are "approximately" like those of any (original MDP) state in the source block. To formally study such families of MDPs, we introduce the new notion of a "bounded-parameter MDP" (BMDP), which is a family of (traditional) MDPs defined by specifying upper and lower bounds on the transition probabilities and rewards. We describe algorithms that operate on BMDPs to find policies that are approximately optimal with respect to the original MDP. In combination, our method for reducing a large implicit MDP to a possibly much smaller BMDP using an epsilon-homogeneous partition, and our methods for selecting actions in BMDPs, constitute a new approach for analyzing large implicit MDPs. Among its advantages, this new approach provides insight into existing algorithms for solving implicit MDPs, provides useful connections to work in automata theory and model minimization, and suggests methods for trading time and space (specifically, the size of the corresponding state space) against solution quality by varying epsilon.

Related research

07/04/2012 — Metrics for Markov Decision Processes with Infinite State Spaces
We present metrics for measuring state similarity in Markov decision pro...

06/24/2011 — On Polynomial Sized MDP Succinct Policies
Policies of Markov Decision Processes (MDPs) determine the next action t...

01/30/2013 — Hierarchical Solution of Markov Decision Processes using Macro-actions
We investigate the use of temporally abstract actions, or macro-actions,...

04/24/2018 — Computational Approaches for Stochastic Shortest Path on Succinct MDPs
We consider the stochastic shortest path (SSP) problem for succinct Mark...

01/16/2014 — Topological Value Iteration Algorithms
Value iteration is a powerful yet inefficient algorithm for Markov decis...

07/18/2021 — A note on the article "On Exploiting Spectral Properties for Solving MDP with Large State Space"
We improve a theoretical result of the article "On Exploiting Spectral P...

01/03/2023 — Faster Approximate Dynamic Programming by Freezing Slow States
We consider infinite horizon Markov decision processes (MDPs) with fast-...
