Structured Reachability Analysis for Markov Decision Processes

01/30/2013
by Craig Boutilier, et al.

Recent research in decision-theoretic planning has focused on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs.
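For intuition, the simplest member of such a family can be approximated by a GRAPHPLAN-style forward pass that tracks which values of each state variable are reachable, treating variables independently and so ignoring correlated effects. The sketch below is an illustrative Python over-approximation only; the Action structure, function names, and the toy domain are assumptions of this sketch, not the paper's implementation, and the paper's methods further address correlated effects and k-ary constraints. Variable values that never appear in the result can be pruned from the MDP's domains before running a (structured) solution algorithm.

# Minimal sketch (not the authors' implementation): GRAPHPLAN-style
# forward reachability over the values of state variables in a
# factored MDP, ignoring correlations between variables.

from dataclasses import dataclass

@dataclass
class Action:
    # Preconditions: variable -> set of values under which the action applies.
    preconds: dict
    # Effects: variable -> set of values the action may produce for that variable.
    effects: dict

def reachable_values(initial, actions, max_iters=100):
    """Estimate, per variable, the set of values reachable from `initial`.

    `initial` maps each variable to the set of values it may take in the
    initial state(s). The estimate is an over-approximation because each
    variable's values are tracked independently.
    """
    reached = {var: set(vals) for var, vals in initial.items()}
    for _ in range(max_iters):
        changed = False
        for act in actions:
            # An action is applicable if, for every precondition variable,
            # some already-reachable value satisfies the precondition.
            applicable = all(reached[var] & vals
                             for var, vals in act.preconds.items())
            if not applicable:
                continue
            for var, vals in act.effects.items():
                new = vals - reached[var]
                if new:
                    reached[var] |= new
                    changed = True
        if not changed:  # fixed point: no new variable values were added
            break
    return reached

# Tiny usage example with hypothetical variables and actions.
init = {"loc": {"office"}, "has_coffee": {False}}
acts = [
    Action(preconds={"loc": {"office"}}, effects={"loc": {"cafe"}}),
    Action(preconds={"loc": {"cafe"}}, effects={"has_coffee": {True}}),
]
print(reachable_values(init, acts))
# -> both locations and both has_coffee values are reachable here;
#    any value absent from the result could be eliminated from the MDP.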

