Sound Value Iteration

by Tim Quatmann et al.

Computing reachability probabilities is at the heart of probabilistic model checking. All model checkers compute these probabilities in an iterative fashion using value iteration. This technique approximates a fixed point from below by determining reachability probabilities for an increasing number of steps. To avoid results that are significantly off, variants have recently been proposed that converge from both below and above. These procedures require starting values for both sides. We present an alternative that does not require the a priori computation of starting vectors and that converges faster on many benchmarks. The crux of our technique is to give tight and safe bounds, whose computation is cheap, on the reachability probabilities. Lifting this technique to expected rewards is trivial for both Markov chains and MDPs. Experimental results on a large set of benchmarks show its scalability and efficiency.
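To make the "converging from below and above" idea concrete, here is a minimal illustrative sketch (not the paper's algorithm) of interval-style value iteration on a tiny Markov chain. The chain, the state names, and the stopping threshold `eps` are all assumptions chosen for the example; the lower vector starts at 0 (except the target) and the upper vector at 1 (except the sink), and both are iterated until their gap is small.

```python
# Tiny Markov chain (hypothetical example):
# state 0 = initial, 1 = intermediate, 2 = sink (never reaches target), 3 = target.
# P maps each non-absorbing state to its (successor, probability) list.
P = {
    0: [(1, 0.5), (2, 0.5)],
    1: [(3, 0.7), (2, 0.3)],
}
TARGET, SINK = 3, 2

def interval_iteration(eps=1e-8):
    # Lower bound: probability of reaching the target within k steps (grows with k).
    lower = {0: 0.0, 1: 0.0, SINK: 0.0, TARGET: 1.0}
    # Upper bound: starts at 1 everywhere except the sink (shrinks with k).
    upper = {0: 1.0, 1: 1.0, SINK: 0.0, TARGET: 1.0}
    while max(upper[s] - lower[s] for s in P) > eps:
        # Synchronous Bellman update of both vectors.
        lower = {**lower, **{s: sum(p * lower[t] for t, p in P[s]) for s in P}}
        upper = {**upper, **{s: sum(p * upper[t] for t, p in P[s]) for s in P}}
    return lower[0], upper[0]

lo, hi = interval_iteration()
# lo and hi bracket the true reachability probability 0.5 * 0.7 = 0.35.
```

At every iteration the true probability lies between `lo` and `hi`, so the stopping criterion `hi - lo <= eps` gives a sound error guarantee, unlike plain value iteration, which only ever sees the lower sequence. The paper's contribution is to derive such safe upper bounds cheaply during the iteration itself, rather than requiring a precomputed starting vector as above.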







Optimistic Value Iteration

Markov decision processes are widely used for planning and verification ...

Bayesian Inference by Symbolic Model Checking

This paper applies probabilistic model checking techniques for discrete ...

Model Checking Finite-Horizon Markov Chains with Probabilistic Inference

We revisit the symbolic verification of Markov chains with respect to fi...

Symblicit Exploration and Elimination for Probabilistic Model Checking

Binary decision diagrams can compactly represent vast sets of states, mi...

Algorithms for reachability problems on stochastic Markov reward models

Probabilistic model-checking is a field which seeks to automate the form...

Ranking and Repulsing Supermartingales for Reachability in Probabilistic Programs

Computing reachability probabilities is a fundamental problem in the ana...