Faster Algorithms for Quantitative Analysis of Markov Chains and Markov Decision Processes with Small Treewidth

04/19/2020
by Ali Asadi, et al.

Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs, and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with n states and m transitions, we show that each of the classical quantitative objectives can be computed in O((n+m)·t^2) time, given a tree decomposition of the MC of width t. Our results also imply a bound of O(κ·(n+m)·t^2) for each objective on MDPs, where κ is the number of strategy-iteration refinements required for the given input and objective. Finally, we experimentally evaluate our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experimental results show that on MCs and MDPs with small treewidth, our algorithms outperform existing well-established methods by one or more orders of magnitude.
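As background, the hitting-probability objective for an MC reduces to solving a linear system over the transient states. The sketch below shows this standard textbook formulation on a hypothetical 4-state chain (the chain and its numbers are illustrative, not from the paper); this baseline is what the paper's treewidth-based O((n+m)·t^2) algorithm speeds up for low-treewidth inputs.

```python
import numpy as np

# Illustrative 4-state MC: states 0 and 1 are transient, state 2 is the
# goal (absorbing), state 3 is a failure sink (absorbing).
# Row i of P is the transition distribution out of state i.
P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

transient = [0, 1]
goal = 2

# Q: transitions among transient states; b: one-step probability of
# moving from each transient state directly into the goal.
Q = P[np.ix_(transient, transient)]
b = P[transient, goal]

# Hitting probabilities x satisfy x = Qx + b, i.e. (I - Q) x = b.
x = np.linalg.solve(np.eye(len(transient)) - Q, b)

print(x)  # hitting probability of the goal from states 0 and 1
```

Generic solvers for this system take cubic time in the worst case; the paper's contribution is that, given a width-t tree decomposition, the elimination order induced by the decomposition brings the cost down to O((n+m)·t^2).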

