Qualitative Controller Synthesis for Consumption Markov Decision Processes

05/14/2020
by František Blahoudek, et al.

Consumption Markov Decision Processes (CMDPs) are probabilistic decision-making models of resource-constrained systems. In a CMDP, the controller possesses a certain amount of a critical resource, such as electric power. Each action of the controller can consume some amount of the resource. The resource can be replenished only in special reload states, where its level can be restored up to the full capacity of the system. The task of the controller is to prevent resource exhaustion, i.e., to keep the available amount of the resource non-negative, while satisfying an additional linear-time property. We study the complexity of strategy synthesis in consumption MDPs with almost-sure Büchi objectives and show that the problem can be solved in polynomial time. We implement our algorithm and show that it can efficiently solve CMDPs modelling real-world scenarios.
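
To give a feel for the model and for the flavour of fixed-point computations that underlie such polynomial-time analyses, here is a minimal Python sketch: a plain data-class encoding of a CMDP and a Bellman-style iteration that computes, for every state, the least resource level sufficient to reach some reload state no matter how the probabilistic transitions resolve. All names (`CMDP`, `min_init_cons`, the field names) are illustrative assumptions; this is not the authors' implementation and it does not reproduce the full synthesis algorithm for the Büchi objective.

```python
from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

INF = float("inf")

# Illustrative CMDP encoding; field names are assumptions, not the paper's notation.
@dataclass
class CMDP:
    states: List[int]
    actions: Dict[int, List[str]]           # state -> available actions
    cons: Dict[Tuple[int, str], int]        # (state, action) -> resource consumed
    succ: Dict[Tuple[int, str], Set[int]]   # (state, action) -> successors with positive probability
    reloads: Set[int]                       # reload states (resource can be refilled to capacity)
    capacity: int                           # maximal resource level

def min_init_cons(m: CMDP) -> Dict[int, float]:
    """Least resource level that guarantees reaching some reload state,
    whatever the probabilistic outcomes (a Bellman-style fixed point)."""
    val = {s: INF for s in m.states}
    changed = True
    while changed:
        changed = False
        for s in m.states:
            best = INF
            for a in m.actions[s]:
                # Worst case over successors reached with positive probability;
                # arriving at a reload state needs no further resource.
                worst = max((0 if t in m.reloads else val[t]) for t in m.succ[(s, a)])
                best = min(best, m.cons[(s, a)] + worst)
            if best < val[s]:
                val[s] = best
                changed = True
    return val

# Tiny example: state 0 can move to the reload state 1 (cost 3) or loop on itself (cost 1).
toy = CMDP(
    states=[0, 1],
    actions={0: ["go", "wait"], 1: ["stay"]},
    cons={(0, "go"): 3, (0, "wait"): 1, (1, "stay"): 0},
    succ={(0, "go"): {1}, (0, "wait"): {0}, (1, "stay"): {1}},
    reloads={1},
    capacity=10,
)
print(min_init_cons(toy))  # {0: 3, 1: 0}
```

In this reading, a value exceeding the system's capacity signals a state from which reaching a reload point cannot be guaranteed within the available resource; the paper's full algorithm builds on fixed points of this kind to handle reloading, safety, and the almost-sure Büchi objective, which this sketch does not attempt to cover.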
