Reachability analysis in stochastic directed graphs by reinforcement learning

02/25/2022
by Corrado Possieri, et al.

We characterize reachability probabilities in stochastic directed graphs by means of reinforcement learning methods. In particular, we show that the dynamics of the transition probabilities in a stochastic digraph can be modeled as a difference inclusion, which, in turn, can be interpreted as a Markov decision process. Within this framework, we propose a methodology for designing reward functions that yield upper and lower bounds on the probability of reaching a given set of nodes in a stochastic digraph. The effectiveness of the proposed technique is demonstrated by applying it to the diffusion of epidemic diseases over time-varying contact networks generated by the proximity patterns of mobile agents.
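To make the framing concrete, the following is a minimal sketch, not the paper's algorithm, of how a reachability probability in a stochastic digraph can be estimated with a standard reinforcement-learning update. It assumes a hypothetical row-stochastic transition matrix P, treats the node sequence as an uncontrolled Markov decision process, and pays a reward of 1 on first entry into the target set, so the learned value of a node approximates the probability of ever reaching that set; the paper's reward-design technique for certified upper and lower bounds is not reproduced here.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): estimate the probability of
# reaching a target set of nodes in a stochastic digraph with tabular TD(0).
# The digraph is a hypothetical 5-node example given by a row-stochastic
# transition matrix P; node 3 is the (absorbing) target, node 4 is absorbing
# but outside the target, so reachability probabilities lie strictly in (0, 1).

rng = np.random.default_rng(0)

P = np.array([
    [0.1, 0.5, 0.4, 0.0, 0.0],
    [0.0, 0.2, 0.3, 0.4, 0.1],
    [0.3, 0.0, 0.1, 0.2, 0.4],
    [0.0, 0.0, 0.0, 1.0, 0.0],   # target node, absorbing
    [0.0, 0.0, 0.0, 0.0, 1.0],   # absorbing, but not in the target set
])
target = {3}
n = P.shape[0]

V = np.zeros(n)      # V[s] approximates Prob(reach target | start at node s)
alpha = 0.05         # learning rate
episodes = 20_000
max_steps = 50       # truncation horizon for a single simulated walk

for _ in range(episodes):
    s = rng.integers(n)                 # start from a uniformly random node
    for _ in range(max_steps):
        if s in target:                 # reaching the target pays 1 and ends the episode
            V[s] += alpha * (1.0 - V[s])
            break
        s_next = rng.choice(n, p=P[s])  # sample the next node of the random walk
        V[s] += alpha * (V[s_next] - V[s])   # TD(0) update, zero reward, no discount
        s = s_next

print("Estimated reachability probabilities:", np.round(V, 3))
```

In this formulation the reachability probabilities are the minimal non-negative solution of V(s) = sum_j P(s, j) V(j) for nodes outside the target set, with V(s) = 1 on it; the zero-initialized TD(0) iteration above is one sampled way of approximating that solution.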

