Continuous-Time Markov Decisions based on Partial Exploration

07/25/2018
by Pranav Ashok et al.

We provide a framework for speeding up algorithms for time-bounded reachability analysis of continuous-time Markov decision processes. The principle is to find a small, but almost equivalent subsystem of the original system and only analyse the subsystem. Candidates for the subsystem are identified through simulations and iteratively enlarged until runs are represented in the subsystem with high enough probability. The framework is thus dual to that of abstraction refinement. We instantiate the framework in several ways with several traditional algorithms and experimentally confirm orders-of-magnitude speed ups in many cases.
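The core idea, simulating runs to identify a small subsystem and enlarging it until sampled runs stay inside with high probability, can be sketched in Python. This is a hypothetical, heavily simplified illustration over a discrete-time Markov chain (no actions, no continuous time); the transition table `P` and all function names are invented for the example and are not from the paper.

```python
import random

# Toy Markov chain (hypothetical example), states 0..4.
# Each entry maps a state to a list of (probability, successor) pairs.
P = {
    0: [(0.6, 1), (0.4, 2)],
    1: [(0.9, 3), (0.1, 4)],
    2: [(1.0, 0)],
    3: [(1.0, 3)],  # absorbing
    4: [(1.0, 4)],  # absorbing target
}

def simulate(start, steps, rng):
    """Sample one step-bounded run and return the visited states."""
    path = [start]
    s = start
    for _ in range(steps):
        r = rng.random()
        acc = 0.0
        for p, t in P[s]:
            acc += p
            if r < acc:
                s = t
                break
        path.append(s)
    return path

def build_core(start, steps, runs, threshold, rng):
    """Grow a subsystem (core) until at least `threshold` of the
    sampled runs stay entirely inside it."""
    core = {start}
    while True:
        paths = [simulate(start, steps, rng) for _ in range(runs)]
        inside = sum(all(s in core for s in p) for p in paths)
        if inside / runs >= threshold:
            return core
        # Enlarge the candidate with states seen on escaping runs.
        for p in paths:
            core.update(p)

rng = random.Random(0)
core = build_core(start=0, steps=20, runs=200, threshold=0.95, rng=rng)
print(sorted(core))
```

In the actual framework the subsystem would then be analysed with a traditional time-bounded reachability algorithm, with the probability mass of escaping runs bounding the approximation error; here the core simply converges to whatever states the sampled runs visit.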


Related research

06/29/2020 · Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs
We present two elegant solutions for modeling continuous-time dynamics, ...

06/09/2020 · On Decidability of Time-bounded Reachability in CTMDPs
We consider the time-bounded reachability problem for continuous-time Ma...

06/17/2019 · Of Cores: A Partial-Exploration Framework for Markov Decision Processes
We introduce a framework for approximate analysis of Markov decision pro...

04/27/2022 · Bounds for Synchronizing Markov Decision Processes
We consider Markov decision processes with synchronizing objectives, whi...

06/13/2012 · Gibbs Sampling in Factorized Continuous-Time Markov Processes
A central task in many applications is reasoning about processes that ch...

07/06/2021 · Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification
Modeling the time evolution of discrete sets of items (e.g., genetic mut...

02/17/2020 · The Probabilistic Model Checker Storm
We present the probabilistic model checker Storm. Storm supports the ana...
