Model Checking Markov Population Models by Stochastic Approximations

Luca Bortolussi, et al. · University of Trieste · IMT School for Advanced Studies Lucca · 11/10/2017

Many complex systems can be described by population models, in which a pool of agents interacts and produces complex collective behaviours. We consider the problem of verifying formal properties of the underlying mathematical representation of these models, which is a Continuous Time Markov Chain, often with a huge state space. To circumvent the state space explosion, we rely on stochastic approximation techniques, which replace the large model by a simpler one, guaranteed to be probabilistically consistent. We show how to efficiently and accurately verify properties of random individual agents, specified by Continuous Stochastic Logic extended with Timed Automata (CSL-TA), and how to lift these specifications to the collective level, approximating the number of agents satisfying them using second or higher order stochastic approximation techniques.


1 Introduction

Many real-life examples of large complex systems, ranging from (natural) biochemical pathways to (artificial) computer networks, exhibit collective behaviours. These global dynamics are the result of intricate interactions between the large number of individual entities that comprise the populations of these systems. Understanding, predicting and controlling these emergent behaviours is becoming an increasingly important challenge for the scientists of the modern era. In particular, the development of an efficient and well-founded mathematical and computational modelling framework is essential to master the analysis of such complex collective systems.

In the Formal Methods community, powerful automatic verification techniques have been developed to validate the performance of a model of a system. In such model checkers mcbook, the model and a property of interest are given as input to an algorithm which verifies whether or not the requirement is satisfied by the representation of the system. As the dynamics of a collective system is intrinsically subject to noisy behaviours, especially when the population is not very large, the formal analysis and verification of a collective system have to rely on appropriate Stochastic Model Checking techniques. For instance, in ctmcmc, Continuous Stochastic Logic formulae are checked against models of the system expressed as Continuous Time Markov Chains (CTMC, norris), which are a natural mathematical framework for population models. These approaches are based on an exhaustive exploration of the state space of the model, which limits their practical use, due to state space explosion: when the number of interacting agents in the population increases, the number of states of the underlying CTMC quickly reaches astronomical values. To deal with this problem, some of the most successful applications of Stochastic Model Checking to large population models are based on statistical analysis prism; jha2009statistical; bortolussi2016smoothed, which still remains costly from a computational point of view, because of the need to run simulation algorithms a large number of times.

In this work, we take a different approach, exploiting a powerful class of methods to accurately approximate the dynamics of the individuals and the population, that goes under the name of Stochastic Approximations tutorial.

Related Work. Stochastic approximation methods have been successfully used in the computational biochemistry community grima2010; vankampen to approximate the noisy behaviour of collective systems by a simpler process, whose behaviour can be extracted by solving a (numerically integrable) set of Differential Equations (DEs), resulting in a fast and easy way of obtaining an estimate of the dynamics of the model. Moreover, for almost all the techniques that we consider in this work, the quality of the estimates improves as the number of agents in the system increases, at constant computational cost, reaching exactness in the limit of an infinite population. In this way, such approximation methods actually take advantage of the large size of collective systems, making them a fast, accurate and reliable approach to deal with the curse of state space explosion. Among the many types of Stochastic Approximations present in the literature, we are going to exploit the Fluid Approximation (FA) tutorial; fluidmc, the Central Limit Approximation (CLA) vankampen; kurtz, and the System Size Expansion (SSE) schnoerr2016approximation. We are also going to use Moment Closure (MC) schnoerr2016approximation combined with distribution reconstruction techniques based on the Maximum Entropy principle andreychenko2015model.

Stochastic Approximations entered the model checking scene only recently. Pioneering work focussed on checking CSL properties concur12; fluidmc; bertinoro13 or deterministic automata specifications HaydenTCS; HaydenTSE for a single random individual in a population. Following this line of work, more complex individual properties have been considered, in particular rewards qapl15 and timed automata with one clock formats15. Another direction of integration of stochastic approximations and model checking is related to the so-called local-to-global specifications qest, in which individual properties, specified by timed automata (with some restrictions), are lifted to the collective level by counting how many agents satisfy a local specification. This lifting is obtained by applying the CLA to approximate the distribution of agents qest or by moment closure to obtain bounds HaydenTCS; HaydenTSE. A simpler approach, focussing on expected values at the collective level, is that of anjaL2G. Finally, stochastic approximation has also been used to approximate global reachability properties, either exploiting central limit results for hitting times epew, or by a clever discretisation of the Gaussian processes obtained by the CLA qest16.

Contributions. In this paper, we start from the approach of qest for the approximation of satisfaction probabilities of local-to-global properties, and extend it in several directions:

  • We extend fluid model checking fluidmc to a subset of CSL-TA cslta, a logic specifying temporal properties by means of Deterministic Timed Automata (DTA) with a single clock. We consider in particular DTAs in which the clock is never reset, and provide a model checking algorithm also for nested formulae, leveraging fluid approximation.

  • We lift CSL-TA properties to the collective level, exploiting the central limit approximation, thus extending the approach of qest to a more complex set of properties. We also remove some restrictions on the class of models considered with respect to those discussed in qest.

  • We extend both qest and fluidmc by showing how to effectively use higher order approximations to correct for finite size effects. This requires integrating into the model checking framework either moment closure or higher-order SSE schnoerr2016approximation, together with maximum entropy distribution reconstruction andreychenko2015model.

Throughout the paper, we make use of a simple but instructive running example of an epidemic model to illustrate the presented techniques.

Paper Structure. The paper is organised as follows. In Section 2, we introduce the class of models we consider, and in Section 3, the property specification language. Section 4 contains an introduction on stochastic approximation techniques. Section 5 shows how to model check local properties described by CSL-TA, while Section 6 deals with local-to-global properties. Conclusions are drawn in Section 7. The appendix contains novel proofs.

2 Markov Population Models

In this section, we introduce a formalism to specify Markovian Population Models. These models consist of typically large collections of interacting components, or agents. Each component is a finite state machine, which can change internal state by interacting with other agents or with the environment. Agents can be of different kinds or classes. Interactions are described by specifying which kinds of agents participate in the interaction and the rate at which it happens. The rate is a function of the collective state of the model, i.e. of the population size. This information, together with an initial state, describes the full population model and defines a Markov chain in continuous time.

More specifically, an agent class defines its (finite) state space and its (finite) set of local transitions. In the following, the descriptor 'local' refers to the fact that we are formalising the model at the agent level, whereas we use 'global' to define state spaces and transitions when modelling the entire population.

Definition 2.1 (Agent class).

An agent class $\mathcal{A}$ is a pair $(S, E)$, where $S = \{s_1, \ldots, s_n\}$ is the state space of the agent and $E$ is the set of local transitions of the form $\epsilon = s \xrightarrow{\alpha} s'$, with $s, s' \in S$, where $\alpha$ is the transition label, taken from the label set $\mathcal{L}$.

The label of a local transition may not be unique. Without loss of generality, we require that two local transitions having the same initial and final states must have different labels (if not, just rename labels). An agent belonging to the class $\mathcal{A}$ is identified by a random variable $Y(t) \in S$, denoting the state of the agent at time $t$, and by its initial state $Y(0)$.

Figure 1: The automaton representation of a network node (local states S, I, R).

Running Example. In order to illustrate the definitions and the techniques of the paper, we consider a simple example of a worm epidemic in a peer-to-peer network composed of $N$ nodes (see e.g. meanfieldP2P for mean field analysis of network epidemics). Each node is modelled by the simple agent shown in Figure 1, which has three states: susceptible to infection (S), infected (I), and patched/immune to infection (R). The contagion of a susceptible node can occur due to an event external to the network, like the reception of an infected email, or by file sharing with an infected node within the network. Nodes can also be patched, at different rates, depending on whether they are infected or not. A patched node remains immune from the worm for some time, until immunity is lost, modelling for instance the appearance of a new version of the worm.

The agent class of the network node can be easily reconstructed from the automaton representation in Figure 1: its local states are S, I, and R, and its local transitions are the five arrows of the figure: external infection and internal infection (from S to I), patching of a susceptible or of an infected node (to R), and loss of immunity (from R back to S).
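To make the construction concrete, the agent class of the network node can be written down as a small data structure. The following Python sketch is purely illustrative: the transition label names (ext, inf, patch_S, patch_I, loss) are assumptions, not fixed by the text.

```python
# Sketch of the network-node agent class of Definition 2.1 (illustrative names).
AGENT_STATES = ["S", "I", "R"]

# Local transitions as (source state, label, target state) triples.
LOCAL_TRANSITIONS = [
    ("S", "ext",     "I"),   # external infection (e.g. infected email)
    ("S", "inf",     "I"),   # internal infection by file sharing with an I-node
    ("S", "patch_S", "R"),   # patching a susceptible node
    ("I", "patch_I", "R"),   # patching an infected node
    ("R", "loss",    "S"),   # loss of immunity
]
```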

In the following, without loss of generality, we consider populations of $N$ agents $Y_1^{(N)}, \ldots, Y_N^{(N)}$, all belonging to the same class $\mathcal{A} = (S, E)$ with $|S| = n$. All the results we will present in the following hold for models with multiple classes of agents, at the price of keeping an extra index to identify the class of each agent. We further make the classical assumption that agents in the same state are indistinguishable, hence the state of the population model can be described by collective or counting variables $X_j^{(N)}$, $j = 1, \ldots, n$, defined by $X_j^{(N)} = \sum_{i=1}^{N} \mathbf{1}\{Y_i^{(N)} = s_j\}$. The initial state is given by $\mathbf{x}_0$, and the counting variables satisfy the conservation relation $\sum_{j=1}^{n} X_j^{(N)} = N$. To complete the definition of a population model, we need to specify its global transitions, describing all the possible events that can change the state of the system.

Definition 2.2 (Population model).

A population model $\mathcal{X}^{(N)}$ of size $N$ is a tuple $(\mathcal{A}, \mathcal{T}, \mathbf{x}_0)$, where:

  • $\mathcal{A} = (S, E)$ is an agent class, as in Definition 2.1;

  • $\mathcal{T}$ is the set of global transitions of the form $\tau = (\mathcal{S}_\tau, f_\tau)$, where:

    • $\mathcal{S}_\tau \subseteq E$ is the (finite) set of local transitions synchronized by $\tau$;

    • $f_\tau : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ is the (Lipschitz continuous) global rate function;

  • $\mathbf{x}_0 \in \mathbb{N}^n$ is the initial state.

The rate $f_\tau(\mathbf{X})$ gives the expected frequency of transition $\tau$ as a function of the state of the system. We assume $f_\tau$ equal to zero if there are not enough agents available to perform the transition. The synchronization set $\mathcal{S}_\tau$, instead, specifies how many agents are involved in the transition and how they change state: when $\tau$ occurs, we see the local transitions in $\mathcal{S}_\tau$ fire at the (local) level of the agents involved in $\tau$.

Figure 2: Global transitions of the network epidemic model
Example 2.1.

The population model for the epidemic example has population variables $X_S$, $X_I$, $X_R$. The initial conditions are simply a network of susceptible nodes, $\mathbf{x}_0 = (N, 0, 0)$. The set $\mathcal{T}$, instead, specifies five global transitions, detailed in Figure 2.
As an example, consider the transition encoding the external infection. Its synchronisation set specifies that only one susceptible node is involved, changing state from S to I, at a rate given by $k_e X_S$, corresponding to a rate of infection $k_e$ per node. The patching and immunity-loss transitions have a similar format, while the internal infection involves one S-node and one I-node. Furthermore, we assume that an infected node sends infectious messages at rate $k_i$ to a random node, giving a classical density dependent rate function $k_i X_S X_I / N$ epidbook.
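A minimal sketch of these global transitions, pairing each update vector with its rate function, is given below; the rate constants k_e, k_i, k_s, k_p, k_l and their values are illustrative placeholders, as the text does not fix them.

```python
import numpy as np

N = 100                                                  # population size (illustrative)
k_e, k_i, k_s, k_p, k_l = 0.01, 1.0, 0.05, 0.1, 0.02     # illustrative rate constants
S, I, R = 0, 1, 2                                        # indices of the counting variables

# Each global transition: (update vector, rate function of the state X).
GLOBAL_TRANSITIONS = [
    (np.array([-1, +1, 0]), lambda X: k_e * X[S]),             # external infection
    (np.array([-1, +1, 0]), lambda X: k_i * X[S] * X[I] / N),  # internal infection (density dependent)
    (np.array([-1, 0, +1]), lambda X: k_s * X[S]),             # patch a susceptible node
    (np.array([0, -1, +1]), lambda X: k_p * X[I]),             # patch an infected node
    (np.array([+1, 0, -1]), lambda X: k_l * X[R]),             # loss of immunity
]

X0 = np.array([N, 0, 0])                                 # initially all nodes susceptible
```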

Remark 2.1.

The population models of Definition 2.2 have a main restriction: the population size is constant. This limitation can be removed, as the approximation techniques we will exploit do not rely on it. However, extra care has to be put in treating local properties, as discussed in fluidmc.

Given a population model $\mathcal{X}^{(N)}$ and a global transition $\tau$ with $\mathcal{S}_\tau = \{s_{i_1} \xrightarrow{\alpha_1} s_{j_1}, \ldots, s_{i_k} \xrightarrow{\alpha_k} s_{j_k}\}$, we encode the net change in $\mathbf{X}$ due to $\tau$ in the update vector $\mathbf{v}_\tau = \sum_{\ell=1}^{k} (\mathbf{e}_{j_\ell} - \mathbf{e}_{i_\ell})$, where $\mathbf{e}_i$ is the vector that is equal to $1$ in position $i$ and zero elsewhere.

The CTMC $\mathbf{X}^{(N)}(t)$ associated with $\mathcal{X}^{(N)}$ has state space $\mathcal{D}^{(N)} = \{\mathbf{x} \in \mathbb{N}^n \mid \sum_j x_j = N\}$, initial probability distribution concentrated on $\mathbf{x}_0$, and infinitesimal generator matrix Q defined, for $\mathbf{x}, \mathbf{x}' \in \mathcal{D}^{(N)}$, $\mathbf{x} \neq \mathbf{x}'$, by

$$Q_{\mathbf{x}, \mathbf{x}'} = \sum \big\{ f_\tau(\mathbf{x}) \ \big|\ \tau \in \mathcal{T},\ \mathbf{x}' = \mathbf{x} + \mathbf{v}_\tau \big\}.$$

Equipped with this definition, we can analyse a model either by numerical integration of the Kolmogorov equations of the CTMC norris, or relying on stochastic simulation and statistical analysis of the sampled trajectories prism; jha2009statistical. The first approach is not feasible for large populations ($N$ large), due to the severe state space explosion of this class of models. The second approach scales better, but is still computationally intensive, requiring many simulations, whose cost typically scales (linearly) with $N$.
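To illustrate the simulation route, the sketch below runs a basic Gillespie-style stochastic simulation of the population CTMC; it reuses the illustrative GLOBAL_TRANSITIONS and X0 defined above and is not the tool chain used in the paper.

```python
import numpy as np

def ssa(transitions, x0, t_max, seed=0):
    """Gillespie simulation of a population CTMC given as (update, rate) pairs."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        rates = np.array([rate(x) for _, rate in transitions])
        total = rates.sum()
        if total <= 0:                                 # no transition enabled: absorbing state
            break
        t += rng.exponential(1.0 / total)              # time to the next jump
        k = rng.choice(len(transitions), p=rates / total)
        x = x + transitions[k][0]                      # apply the update vector
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# times, states = ssa(GLOBAL_TRANSITIONS, X0, t_max=10.0)
```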

3 Individual and Collective Properties

We introduce now the class of properties considered in the paper. We distinguish two levels of properties: local properties, describing the behaviour of individual agents; and global properties, representing the collective behaviour of agents with respect to a local property of interest. In this respect, our approach is similar to anjaL2G; HaydenTCS.

Local properties are expressed in terms of a temporal logic. To improve the expressiveness of the specifications, we go beyond CSL, as used in fluidmc; anjaL2G, and consider CSL-TA cslta, an extension of CSL in which path properties are specified by Deterministic Timed Automata (DTA) with a single clock. To rely on fluid approximation techniques, we consider here DTA in which the clock always refers to the global time, i.e. it cannot be reset. DTA are used to specify time-bounded properties, and we consistently restrict to time-bounded quantification in the CSL layer. This is justified because dealing with steady state properties is always problematic in the context of fluid approximation, see fluidmc; tutorial; HaydenTCS for further discussion on this point.

The global property layer allows us to specify queries about the fraction of agents that satisfy a given local specification. In particular, given a path property or a state formula, we want to compute the probability that the fraction of agents satisfying it is smaller or larger than a given threshold. This is captured by appropriate operators, which can then be combined to specify more complex global queries, as in anjaL2G.

Let us fix a population model $\mathcal{X}^{(N)}$ composed of agents from a class $\mathcal{A} = (S, E)$. Path properties are specified by 1-global-clock Deterministic Timed Automata (1gDTA). The edges of a 1gDTA are labelled by a triple composed of: an action name, taken from the set of transition names of the population model; a boolean formula, interpreted on the states of the agent class $\mathcal{A}$; and a clock constraint, specifying when the transition can fire. The use of actions and formulae to label edges is similar to asCSL ascsl, and deviates from the original definition of CSL-TA cslta. The intended meaning is that a local transition matches an edge with label $(\alpha, \varphi, c)$ in the 1gDTA if and only if the action name is the same and the source state of the transition satisfies the formula $\varphi$. This allows the specification of more complex path properties and provides a mechanism to nest CSL-TA specifications. Let us call $P$ this set of state propositions on $S$, and call $\Phi$ the set of boolean combinations over $P$, denoting with $\models$ the satisfaction relation over formulae, defined in the standard way. We use the letter $\varphi$ to range over formulae in $\Phi$.

We consider a single clock variable $x$, called the global clock, which is never reset in time. Let $\mathcal{V}$ be the set of valuations, i.e. functions $\eta$ that assign a nonnegative real value to the global clock $x$, and let $\mathcal{C}$ be the set of clock constraints, which are boolean combinations of basic clock constraints of the form $x \bowtie a$, where $\bowtie \in \{<, \leq, >, \geq\}$ and $a \in \mathbb{Q}_{\geq 0}$. Finally, we write $\eta \models c$ if and only if $c$ is satisfied when its clock variable takes the value $\eta(x)$. We are now ready to define 1gDTA.

Definition 3.1 (1-global-clock Deterministic Timed Automaton).

A 1gDTA $\mathcal{D}$ is specified by a tuple $(\mathcal{L}, P, Q, q_0, F, \to)$, where $\mathcal{L}$ is the label set of $\mathcal{D}$; $P$ is the set of atomic state propositions; $Q$ is the (finite) set of states of the DTA, with initial state $q_0 \in Q$; $F \subseteq Q$ is the set of final (or accepting) states; and $\to \subseteq Q \times \mathcal{L} \times \Phi \times \mathcal{C} \times Q$ is the edge relation, where $(q, \alpha, \varphi, c, q') \in \to$ is usually denoted by $q \xrightarrow{\alpha, \varphi, c} q'$. The edges of $\mathcal{D}$ further satisfy:

  1. (determinism) for each state $q \in Q$, label $\alpha \in \mathcal{L}$, local state $s \in S$ and clock valuation $\eta$, there is exactly one edge $q \xrightarrow{\alpha, \varphi, c} q'$ such that $s \models \varphi$ and $\eta \models c$.

  2. (absorption of final states) all final states are absorbing, i.e. all transitions starting from a final state are self-loops.

When we write a 1gDTA, we do not want to specify all possible transitions in the automaton. Hence, we assume that all non-specified edges are self-loops on the automaton state. Specifically, given a DTA state $q$, a label $\alpha$, a local state $s$ and a clock valuation $\eta$, if there is no specified edge from $q$ with label $\alpha$, with formula satisfied by $s$ and clock constraint satisfied by $\eta$, then we assume the existence of an edge looping on $q$ and satisfying all these conditions. The determinism condition of Definition 3.1 can be easily enforced by considering 1gDTA that have additional restrictions on the edges, using only formulae $\varphi_s$, $s \in S$, which are true only in state $s$, and requiring that two transitions with the same label and source state have mutually exclusive clock constraints. We call these 1gDTA explicitly deterministic. All examples of properties in this paper will be of this kind, and it is easy to see that any 1gDTA can be converted into an equivalent explicitly deterministic one, by properly multiplying edges and splitting state formulae and constraints.

Example 3.1.

As an example, consider the agent class of the network epidemics shown in Figure 1, and the 1gDTA specification of Figure 3 (a), where the formula $\varphi_S$ is true in local state S and false in states I and R. The property is satisfied when a susceptible node is infected by an internal infection after the first $T$ units of time. The sink state is used to discard agents being infected before $T$ units of time. The use of the state formula $\varphi_S$ allows us to focus only on agents that get infected, rather than also on agents that spread the contagion. We can describe more complex properties, such as the 1gDTA specification of Figure 3 (b). This automaton describes the fact that an agent is infected by an internal contact twice, the first infection happening between time 1 and 2, and the second infection happening before time 4. Also here, the sink state is used to discard agents being infected for the first time before time 1.

Figure 3: The 1gDTA specifications discussed in Example 3.1.
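To fix ideas, the 1gDTA of Figure 3 (a) can be represented as a plain data structure, as in the sketch below; the time bound T, the state names and the edge labels are illustrative assumptions, and all unspecified edges are implicit self-loops, as explained above.

```python
T = 1.0   # hypothetical time bound of the clock constraint in Figure 3 (a)

# Edges: (source DTA state, action label, state formula, clock guard, target DTA state).
# The state formula holds in local state S only; 'inf' is the (assumed) internal-infection label.
DTA_A = {
    "initial": "q0",
    "final": {"acc"},
    "edges": [
        ("q0", "inf", lambda s: s == "S", lambda t: t >= T, "acc"),
        ("q0", "inf", lambda s: s == "S", lambda t: t < T,  "sink"),
    ],
}
```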

A run of a 1gDTA is a sequence of DTA states, actions and times taken to change state, $q_0 \xrightarrow{\alpha_1, t_1} q_1 \xrightarrow{\alpha_2, t_2} \cdots \xrightarrow{\alpha_k, t_k} q_k$, such that clock constraints are satisfied. A run is accepting if $q_k \in F$.

Consider now a population model $\mathcal{X}^{(N)}$, and focus on a single individual agent of class $\mathcal{A}$ in the population. A path of length $k$ for such an agent is a sequence of the form $\sigma = s_0 \xrightarrow{\alpha_1, t_0} s_1 \xrightarrow{\alpha_2, t_1} \cdots \xrightarrow{\alpha_k, t_{k-1}} s_k$, where $s_i \in S$, $t_i$ is the time spent in the local state $s_i$, and $\alpha_{i+1}$ is the action taken at step $i+1$. The set of those paths will be denoted by $Path^k$ and the set of paths of finite length by $Path^*$. Given $\sigma \in Path^k$, we let $T(\sigma, i, j) = \sum_{\ell=i}^{j-1} t_\ell$ be the total time taken to go from state $s_i$ to state $s_j$, and $T(\sigma, j) = T(\sigma, 0, j)$ the time taken to reach state $s_j$. The set of paths of total duration equal to $T$ is denoted by $Path_T$. Given a path $\sigma$ of length $k$, we define the run of a 1gDTA $\mathcal{D}$ induced by $\sigma$ to be $q_0 q_1 \cdots q_k$, where state $q_{i+1}$ is determined by the unique edge $q_i \xrightarrow{\alpha_{i+1}, \varphi, c} q_{i+1}$ of $\mathcal{D}$ such that $s_i \models \varphi$ and $T(\sigma, i+1) \models c$. If the induced run is accepting, we write $\sigma \models \mathcal{D}$.
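Acceptance of an individual-agent path by a 1gDTA can then be checked directly on this representation; the sketch below follows the induced-run definition above and the self-loop convention for unspecified edges (data structure as in the previous snippet).

```python
def accepts(dta, path):
    """Run a 1gDTA on an agent path given as (state, sojourn time, action) steps."""
    q, clock = dta["initial"], 0.0
    for state, sojourn, action in path:
        clock += sojourn                  # global time at which the action fires
        for (src, a, phi, guard, dst) in dta["edges"]:
            if src == q and a == action and phi(state) and guard(clock):
                q = dst                   # unique matching edge (determinism)
                break
        # if no specified edge matches: implicit self-loop, q is unchanged
    return q in dta["final"]

# A node that stays susceptible for 1.5 time units and is then infected by an
# internal contact satisfies the sketched property:
# accepts(DTA_A, [("S", 1.5, "inf")])   # -> True
```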

Given a 1gDTA $\mathcal{D}$, we indicate it with $\mathcal{D}(a_1, \ldots, a_m)$ when we want to explicitly list all the atomic propositions $a_1, \ldots, a_m$ used to build the state propositions in $\Phi$. This will be needed to define the local logic CSL-TA.

Definition 3.2 (Csl-Ta).

A CSL-TA formula $\Psi$ on an agent class $\mathcal{A}$ is defined recursively as

$$\Psi ::= a \ \mid\ \neg \Psi \ \mid\ \Psi_1 \wedge \Psi_2 \ \mid\ \mathbb{P}_{\bowtie p}^{\leq T}\big(\mathcal{D}(\Psi_1, \ldots, \Psi_m)\big),$$

where $a$ is an atomic proposition interpreted on $S$, $\bowtie \in \{<, \leq, >, \geq\}$, $p \in [0, 1]$, $T \in \mathbb{Q}_{> 0}$, and $\mathcal{D}(\Psi_1, \ldots, \Psi_m)$ is a 1gDTA with atomic formulae taken to be CSL-TA formulae $\Psi_1, \ldots, \Psi_m$.

This definition is similar to cslta, with the only differences being the use of a restricted class of DTA and the time bound $T$ on the probability operator $\mathbb{P}$. The satisfaction relation is defined relative to a state $s$ of an individual agent of class $\mathcal{A}$ and an initial time $t_0$. The only interesting case is the one involving 1gDTA specifications, which requires a 1gDTA path property to hold with probability satisfying the bound $\bowtie p$:

$$s, t_0 \models \mathbb{P}_{\bowtie p}^{\leq T}\big(\mathcal{D}(\Psi_1, \ldots, \Psi_m)\big) \quad \text{iff} \quad \mathrm{Prob}\big\{\sigma \in Path_T(s, t_0) \mid \sigma \models \mathcal{D}\big\} \bowtie p.$$

We turn now to global properties. Here, we will consider basic properties that lift local specifications to the collective level, looking at the fraction/number of agents in the population that satisfy a local specification, given either as a CSL-TA state formula $\Psi$, or as a path property specified by a 1gDTA $\mathcal{D}$. More specifically, we ask whether the fraction/number of agents satisfying $\Psi$ (resp. $\mathcal{D}$) is included in a given interval $I$. This is a random event, hence we need to compute its probability. In atomic global properties, we will compare this probability with a given threshold. Therefore, we have two kinds of global atomic properties:

Path properties:

$\mathbb{P}_{\bowtie p}(\mathcal{D} \in I)$: the probability that the fraction/number of agents that satisfy the local path property $\mathcal{D}$ is in the interval $I$ satisfies the bound $\bowtie p$, for $\bowtie \in \{<, \leq, >, \geq\}$;

State properties:

$\mathbb{P}_{\bowtie p}(\Psi \in I)$: the probability that the fraction/number of agents that satisfy the local state property $\Psi$ is in the interval $I$ satisfies the bound $\bowtie p$, for $\bowtie \in \{<, \leq, >, \geq\}$.

Both properties above can be checked at any starting time $t_0$. Atomic global properties are then combined by boolean operators, to define more expressive queries. We therefore define a collective or global property as follows.

Definition 3.3.

Given a population model $\mathcal{X}^{(N)}$, a collective/global property on $\mathcal{X}^{(N)}$ is given by the following syntax:

$$\gamma ::= \mathbb{P}_{\bowtie p}(\mathcal{D} \in I) \ \mid\ \mathbb{P}_{\bowtie p}(\Psi \in I) \ \mid\ \neg \gamma \ \mid\ \gamma_1 \wedge \gamma_2.$$

Example 3.2.

As an example, consider again the 1gDTA property of Figure 3 (b), call it $\mathcal{D}_b$. Then the atomic global property $\mathbb{P}_{\geq 0.8}(\mathcal{D}_b \in [0, 1/3])$ specifies that, with probability at least 0.8, no more than one third of the network nodes will be infected twice within 4 time units by an internal contact, where the first infection happens between time 1 and 2.

Remark 3.1.

In Definition 3.2, we allow the arbitrary nesting of CSL-TA properties within 1gDTA. By the discussion of fluidmc, this operation requires some care when we want to apply fluid approximations to estimate probabilities. The problem is that individual agents are time-dependent non-Markovian processes (in fact, they are projections/marginal distributions of a Markov process, the global model), for which the satisfaction of a CSL-TA formula involving the probability quantifier depends on the initial time at which the formula is evaluated. Hence, the satisfaction of a CSL-TA formula is a time-dependent function, while 1gDTA require time-independent state formulae. This discrepancy can be reconciled by encoding this time dependency in the 1gDTA itself, using clock constraints. Hence, a formula that is true in a state $s$ up to time 5 and false afterwards will give rise to two edges in the 1gDTA: the first considering a state proposition which $s$ satisfies, with an additional clock constraint $x \leq 5$, while the second corresponding to an edge with a state proposition falsified by $s$, and additional clock constraint $x > 5$.

More specifically, we consider a family of boolean state propositions $a(t)$, indexed by a time index $t$, whose satisfaction value can change a finite number of times up to the time horizon $T$. This means that the interval of interest $[0, T]$ can be partitioned into a finite interval cover $I_1, \ldots, I_m$, such that the satisfaction of $a(t)$ in each $I_j$ is constant, meaning that for each state $s$ and times $t, t' \in I_j$, $s \models a(t)$ if and only if $s \models a(t')$. To reduce such automata to 1gDTA, the idea is to replace each edge labelled with $a(t)$ by a collection of edges, one per interval $I_j$, carrying the constant truth value of $a$ on $I_j$ and the additional clock constraint $x \in I_j$. This operation will be referred to in the following as structural resolution of time-varying properties. Once this is done, one has to check that the so-obtained DTA satisfies the determinism condition. This follows trivially if the original DTA is explicitly deterministic.


4 Stochastic Approximations

In this section, we briefly present several approaches to approximate a population model by a simpler system that allows us to keep the state space explosion under control. Probably the most widespread way to approximate population models is the mean field or fluid limit, which is usually accurate for large populations tutorial. In the mesoscopic regime, i.e. for populations in the order of hundreds of individuals in which fluctuations cannot be neglected, one can rely on the linear noise approximation, treating the state space as continuous and noise as Gaussian vankampen. As an alternative, when we are interested in moments of the population process, we can rely on a large family of moment closure techniques schnoerr2015.

Fluid Approximation

Given a population model $\mathcal{X}^{(N)}$, the Fluid Approximation provides an estimation of the stochastic dynamics of $\mathcal{X}^{(N)}$, exact in the limit of an infinite population. In particular, we consider an infinite sequence of population models, all sharing the same structure, for increasing population size $N$ (e.g. the network models with an increasing number of network nodes). To compare the dynamics of the models in the sequence, we consider the normalised counting variables $\hat{\mathbf{X}}^{(N)} = \mathbf{X}^{(N)} / N$ (known also as population densities or occupancy measures, see tutorial for further details) and we define the normalised population models $\hat{\mathcal{X}}^{(N)}$, obtained from $\mathcal{X}^{(N)}$ by making the rate functions depend on the normalised variables and rescaling the initial conditions. For simplicity, we assume that the rate function of each transition $\tau$ satisfies the density dependent condition $f_\tau(\mathbf{X}) = N g_\tau(\mathbf{X}/N)$ for some Lipschitz function $g_\tau$, i.e. rates on normalised variables are independent of $N$. Also the drift F of $\hat{\mathcal{X}}^{(N)}$, that is the mean instantaneous change of the normalised variables, is given by $F(\hat{\mathbf{x}}) = \sum_{\tau \in \mathcal{T}} \mathbf{v}_\tau g_\tau(\hat{\mathbf{x}})$ and, thus, is independent of $N$. The unique solution¹ $\mathbf{x}(t)$ of the differential equation

$$\frac{d\mathbf{x}(t)}{dt} = F(\mathbf{x}(t)), \qquad \mathbf{x}(0) = \hat{\mathbf{x}}_0, \qquad (1)$$

is the Fluid Approximation of the CTMC associated with $\hat{\mathcal{X}}^{(N)}$, and has been successfully used to describe the collective behaviour of complex systems with large populations tutorial. The correctness of this approximation in the limit of an infinite population is guaranteed by the Kurtz Theorem kurtz; tutorial, which states that $\sup_{t \leq T} \| \hat{\mathbf{X}}^{(N)}(t) - \mathbf{x}(t) \|$ converges to zero (almost surely) as $N$ goes to infinity:

¹ The solution exists and is unique because F is Lipschitz continuous, as each $g_\tau$ is.

Theorem 4.1.

Suppose $\hat{\mathbf{X}}^{(N)}(0) \to \hat{\mathbf{x}}_0$ almost surely. For every finite time horizon $T < \infty$:

$$\mathbb{P}\Big( \lim_{N \to \infty} \sup_{t \leq T} \big\| \hat{\mathbf{X}}^{(N)}(t) - \mathbf{x}(t) \big\| = 0 \Big) = 1.$$
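For the running example, equation (1) can be solved with any standard ODE solver. The sketch below uses scipy and the illustrative rate constants introduced earlier; it computes the fluid estimate of the normalised variables (x_S, x_I, x_R).

```python
import numpy as np
from scipy.integrate import solve_ivp

k_e, k_i, k_s, k_p, k_l = 0.01, 1.0, 0.05, 0.1, 0.02   # illustrative rate constants

def drift(t, x):
    """Drift F(x) = sum_tau v_tau g_tau(x) on the normalised variables."""
    xS, xI, xR = x
    g = np.array([k_e * xS, k_i * xS * xI, k_s * xS, k_p * xI, k_l * xR])
    V = np.array([[-1, 1, 0], [-1, 1, 0], [-1, 0, 1], [0, -1, 1], [1, 0, -1]])
    return g @ V

sol = solve_ivp(drift, t_span=(0.0, 10.0), y0=[1.0, 0.0, 0.0], dense_output=True)
# sol.sol(t) is the fluid estimate of (x_S, x_I, x_R) at time t.
```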

Fast Simulation.

The mean field convergence theorem can also be used to approximate the behaviour of a random individual agent in a large population model. The idea is that, in the limit of large populations, the behaviour of individual agents becomes independent, and is influenced by the rest of the system only through the solution of the mean field equation. This result is known in the literature as fast simulation darling; boudec; fluidmc.

More formally, call $Y^{(N)}(t)$ the stochastic process of a random individual agent, with state space $S$, and $\mathbf{X}^{(N)}(t)$ the CTMC associated with the population model. If we consider an individual agent conditional on the state of the full population model, we can write down its infinitesimal generator matrix $Q(\mathbf{X}^{(N)})$ as a function of $\mathbf{X}^{(N)}$. Formally, this is obtained by computing the fraction of the total rate of a transition seen by a random agent in a given state $s$, i.e. by dividing the total rate by the number of individuals in state $s$. A more formal treatment for population models will be given in the next section.

For the moment, observe that in a finite population, $Y^{(N)}(t)$ and $\mathbf{X}^{(N)}(t)$ are not independent, and to track the behaviour of an individual agent, we need to solve the full model $\mathcal{X}^{(N)}$. However, the fast simulation theorem below proves darling that, in the limit of $N$ going to infinity, we can approximate the behaviour of $Y^{(N)}(t)$ by the agent $y(t)$, with state space $S$ and time-dependent infinitesimal generator matrix given by $Q(\mathbf{x}(t))$, with $\mathbf{x}(t)$ the solution of the fluid equation presented above:

Theorem 4.2.

For any $T < \infty$, $\mathbb{P}\big( Y^{(N)}(t) \neq y(t), \text{ for some } t \leq T \big) \to 0$ as $N \to \infty$.
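Concretely, for the running example the generator Q(x(t)) of a tracked agent can be assembled from the fluid solution and its forward (Kolmogorov) equations integrated over time; the sketch below continues the previous fluid sketch (same illustrative rate constants, and `sol` is the fluid solution computed there).

```python
import numpy as np
from scipy.integrate import solve_ivp

def single_agent_generator(x):
    """Generator of one agent, given the normalised population state x = (x_S, x_I, x_R)."""
    xS, xI, xR = x
    q_SI = k_e + k_i * xI      # external + internal infection rate seen by one S-node
    q_SR = k_s                 # patching of a susceptible node
    q_IR = k_p                 # patching of an infected node
    q_RS = k_l                 # loss of immunity
    return np.array([
        [-(q_SI + q_SR), q_SI,  q_SR],
        [0.0,           -q_IR,  q_IR],
        [q_RS,           0.0,  -q_RS],
    ])

def forward(t, pi):
    """Forward equations d pi/dt = pi Q(x(t)) along the fluid limit sol.sol(t)."""
    return pi @ single_agent_generator(sol.sol(t))

pi0 = [1.0, 0.0, 0.0]                      # the tracked agent starts susceptible
agent = solve_ivp(forward, t_span=(0.0, 10.0), y0=pi0, dense_output=True)
# agent.sol(t) approximates the marginal distribution of the agent over (S, I, R).
```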

Linear Noise

While the Fluid Approximation correctly describes the transient collective behaviour for very large populations, it is less accurate when one has to deal with a mesoscopic system, meaning a system with a population in the order of hundreds of individuals, whose dynamics is intrinsically probabilistic. Indeed, the noisy (stochastic) behaviour becomes increasingly relevant as the size of the population decreases. The technique of Central Limit Approximation (CLA), also known as Linear Noise Approximation, provides an alternative and more accurate estimation of the stochastic dynamics of mesoscopic systems. In particular, in the CLA, the probabilistic fluctuations around the average deterministic behaviour (described by the fluid limit) are approximated by a Gaussian process.

Two equivalent approaches can be followed to introduce the Central Limit Approximation: the more intuitive van Kampen's one in vankampen, known as System Size Expansion (SSE), and a more rigorous derivation by Kurtz in kurtz, exploiting the theory of stochastic integral equations. In both these approaches, the idea is to focus on the process $Z^{(N)}(t) = \sqrt{N}\,\big(\hat{\mathbf{X}}^{(N)}(t) - \mathbf{x}(t)\big)$, capturing the rescaled fluctuations of the Markov chain around the fluid limit. Then, by relying on convergence results for Brownian motion, one shows that $Z^{(N)}(t)$, for large populations $N$, can be approximated vankampen; kurtz by a Gaussian process² $Z(t)$ (independent of $N$), whose mean $E(t)$ and covariance $C(t)$ are given by

$$\frac{dE(t)}{dt} = J_F(\mathbf{x}(t))\, E(t), \qquad (2)$$

and

$$\frac{dC(t)}{dt} = J_F(\mathbf{x}(t))\, C(t) + C(t)\, J_F(\mathbf{x}(t))^{T} + D(\mathbf{x}(t)), \qquad (3)$$

where $J_F$ denotes the Jacobian of the limit drift F calculated along the deterministic fluid limit $\mathbf{x}(t)$, and $D(\mathbf{x}) = \sum_{\tau \in \mathcal{T}} \mathbf{v}_\tau \mathbf{v}_\tau^{T} g_\tau(\mathbf{x})$ is called the diffusion term.

² A Gaussian process $Z(t)$ is characterised by the fact that the joint distribution of $(Z(t_1), \ldots, Z(t_k))$ is a multivariate normal distribution for any $t_1, \ldots, t_k$.

The nature of the approximation of $Z^{(N)}$ by $Z$ is captured in the following theorem kurtz.

Theorem 4.3.

Let $Z(t)$ be the Gaussian process with mean (2) and covariance (3) and $Z^{(N)}(t)$ be the random variable given by $Z^{(N)}(t) = \sqrt{N}\,\big(\hat{\mathbf{X}}^{(N)}(t) - \mathbf{x}(t)\big)$. Assume that $Z^{(N)}(0)$ converges in distribution to $Z(0)$. Then, $Z^{(N)}$ converges in distribution to $Z$.³

³ Formally, $Z^{(N)} \Rightarrow Z$, i.e. the convergence in distribution is of $Z^{(N)}$ to Z, seen as random variables taking values in the space of càdlàg functions from $[0, T]$ to $\mathbb{R}^n$.

The Central Limit Approximation then approximates the normalised CTMC $\hat{\mathbf{X}}^{(N)}(t)$ associated with $\hat{\mathcal{X}}^{(N)}$ by the stochastic process

$$\hat{\mathbf{X}}^{(N)}(t) \approx \mathbf{x}(t) + \frac{1}{\sqrt{N}}\, Z(t). \qquad (4)$$

Theorem 4.3 guarantees its asymptotic correctness in the limit of an infinite population.
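In practice, equations (2)-(3) are integrated jointly with the fluid limit. The sketch below does so for the running example (same illustrative rate constants as before); by (4), the counting variables are then approximately Gaussian with mean N x(t) and covariance N C(t).

```python
import numpy as np
from scipy.integrate import solve_ivp

k_e, k_i, k_s, k_p, k_l = 0.01, 1.0, 0.05, 0.1, 0.02   # illustrative rate constants
V = np.array([[-1, 1, 0], [-1, 1, 0], [-1, 0, 1], [0, -1, 1], [1, 0, -1]])

def g(x):                      # normalised rate functions g_tau(x)
    xS, xI, xR = x
    return np.array([k_e * xS, k_i * xS * xI, k_s * xS, k_p * xI, k_l * xR])

def jacobian(x):               # Jacobian of the drift F(x) = V^T g(x)
    xS, xI, xR = x
    dg = np.array([[k_e, 0.0, 0.0],
                   [k_i * xI, k_i * xS, 0.0],
                   [k_s, 0.0, 0.0],
                   [0.0, k_p, 0.0],
                   [0.0, 0.0, k_l]])
    return V.T @ dg

def rhs(t, y):
    x, C = y[:3], y[3:].reshape(3, 3)
    J, rates = jacobian(x), g(x)
    D = V.T @ np.diag(rates) @ V                 # diffusion term D(x)
    dC = J @ C + C @ J.T + D                     # covariance ODE (3)
    return np.concatenate([rates @ V, dC.ravel()])

y0 = np.concatenate([[1.0, 0.0, 0.0], np.zeros(9)])   # deterministic start: C(0) = 0
cla = solve_ivp(rhs, t_span=(0.0, 10.0), y0=y0, dense_output=True)
```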

In the derivation of Van Kampen vankampen, the idea is to start from the master equation and introduce an expansion in a small parameter given by the inverse of the system size. In this way, truncating at the lowest order, one obtains that fluctuations obey a linear Fokker-Planck equation, whose solution is the Gaussian process described by the mean and covariance functions introduced above. The advantage of this derivation is that one can keep higher orders in the expansion grima2010, obtaining a non-linear Fokker-Planck equation, which cannot be solved analytically, but which can either be integrated numerically, or from which equations for moments can be extracted. These equations have higher-order correction terms depending on the system size, which vanish in the thermodynamic limit, collapsing to mean field. We do not present the derivation here in detail, but refer the interested reader to the appropriate papers. It is worth mentioning that this approach has been implemented in the tool iNA ina and in the MATLAB toolbox CERENA CERENA.

Moment Closure

The class of approximation techniques known as moment closure schnoerr2016 is a viable alternative to mean field, linear noise, or SSE, when the interest is in the moments of the population process for a finite population rather than in the limit behaviour for large population sizes. These methods start from a general ODE for the moments of a stochastic process, known as the Dynkin formula kallenberg. If $g$ is any sufficiently smooth function with domain $\mathbb{R}^n$, then

$$\frac{d}{dt}\,\mathbb{E}\big[g(\mathbf{X}(t))\big] = \mathbb{E}\Big[ \sum_{\tau \in \mathcal{T}} f_\tau(\mathbf{X}(t))\, \big( g(\mathbf{X}(t) + \mathbf{v}_\tau) - g(\mathbf{X}(t)) \big) \Big].$$

In the formula above, $\tau \in \mathcal{T}$ are the transitions of a population model, each with rate $f_\tau$ and with update vector $\mathbf{v}_\tau$, see Section 2. Starting from this formula, one can easily obtain the exact ODEs for (non-centred) moments, by using a suitable polynomial expression for $g$. For instance, for $g(\mathbf{X}) = X_j$ one can obtain the mean of population $j$, with $g(\mathbf{X}) = X_j^2$ one obtains the second moment of population $j$, from which the variance can be computed as $\mathrm{Var}[X_j] = \mathbb{E}[X_j^2] - \mathbb{E}[X_j]^2$, and so on. It is useful to see what happens when $g$ is the vector-valued identity, $g(\mathbf{X}) = \mathbf{X}$, giving the mean for all variables. In this case, the ODE above becomes

$$\frac{d}{dt}\,\mathbb{E}\big[\mathbf{X}(t)\big] = \sum_{\tau \in \mathcal{T}} \mathbf{v}_\tau\, \mathbb{E}\big[f_\tau(\mathbf{X}(t))\big],$$

hence the right-hand side depends on the expectation of the rate functions with respect to the state of the Markov process at time $t$. If all rate functions are linear, then the previous equation is closed, i.e. it depends only on the mean $\mathbb{E}[\mathbf{X}(t)]$. However, when the rate functions are non-linear polynomials, like in the epidemic example, or more complex non-linear functions, this is no longer the case. For example, in the epidemic model of Section 2, the infection rate is $k_i X_S X_I / N$, hence giving rise to a term in the ODE for the mean equal to $\frac{k_i}{N}\mathbb{E}[X_S X_I]$, which involves the second moment, and hence the covariance, of $X_S$ and $X_I$. This is the leitmotiv of non-linearity: the differential equation for a moment of order $k$ will depend on moments of higher order, thus giving rise to an infinite dimensional ODE system. The whole business of moment closure is to find clever ways to truncate these infinite dimensional ODEs, obtaining a finite dimensional set of equations up to a given order $k$ to be solved by numerical integration. In the literature, there are different techniques to close the moment equations. Typically they impose, on moments from order $k+1$ on, some condition that is satisfied by a specific family of probability distributions (e.g. normal, log-normal, low dispersion), but they may also try to match the derivatives to those of the true equations, or use other ideas; see schnoerr2014; schnoerr2016; CERENA for a presentation of different moment closure strategies. Typically, the higher the order of truncation, the more accurate the estimate of lower order moments, like the mean. This is not always true, as the accuracy of moment closures depends on the system under consideration in complex ways, see schnoerr2014 for a discussion in this sense. In this paper, we make use of the low dispersion moment closure, which can be obtained by setting the centred moments to zero from order $k+1$ on, see schnoerr2016; CERENA.
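As an illustration of the closure problem, the sympy sketch below derives the exact (unclosed) moment equations of the epidemic model from the Dynkin formula; printing them shows that the bilinear infection rate makes order-k equations depend on order-(k+1) moments, which a closure such as the low dispersion one would then truncate. Rate-constant names are illustrative.

```python
import itertools
import sympy as sp

N = sp.Symbol('N', positive=True)
k_e, k_i, k_s, k_p, k_l = sp.symbols('k_e k_i k_s k_p k_l', positive=True)
X = sp.symbols('X_S X_I X_R')

updates = [(-1, 1, 0), (-1, 1, 0), (-1, 0, 1), (0, -1, 1), (1, 0, -1)]
rates = [k_e * X[0], k_i * X[0] * X[1] / N, k_s * X[0], k_p * X[1], k_l * X[2]]

def dynkin_rhs(g):
    """Right-hand side of d E[g(X)]/dt (still to be averaged over X(t))."""
    total = 0
    for vec, f in zip(updates, rates):
        shifted = g.subs({x: x + v for x, v in zip(X, vec)})
        total += f * (shifted - g)
    return sp.expand(total)

for m in X:                                                   # mean equations
    print(f"dE[{m}]/dt = E[ {dynkin_rhs(m)} ]")
for a, b in itertools.combinations_with_replacement(X, 2):    # second moments
    print(f"dE[{a}*{b}]/dt = E[ {dynkin_rhs(a * b)} ]")
```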

5 Model Checking CSL-TA Properties on Individual Agents

In this section, we show how to check CSL-TA formulae on individual agents. The challenging task is that of computing path probabilities for 1gDTAs, restricting to constant atomic propositions (in the light of Remark 3.1). In this paper, we consider efficient approximations of the path probability based on fluid approximations (Section 5.1). The computation of path probabilities starts from the construction of an enriched agent, obtained by a product construction combining the agent model with the automaton of the property (Section 5.1.1). Then, we show how to compute path probabilities from a fixed initial time (Section 5.1.3), and then as a function of the initial time (Section 5.1.4). The next step is to provide an approximate model checking algorithm for CSL-TA (Section 5.2), discussing its decidability and the convergence to the exact value for large populations (Section 5.3). Finally, we show how to improve the accuracy of the approximation for finite populations, exploiting higher order moment closure techniques (Section 5.4).

5.1 Computing path probabilities

5.1.1 Synchronisation of agents and 1gDTAs.

The first step in the computation of path probabilities is to synchronize the agent and the property, constructing an extended Markov population model in which the state space of each agent is combined with the specific path property we are observing. The main difficulty in this procedure is the presence of time constraints in the path property specification. However, thanks to the restriction to a single global clock, we can partition the time interval of interest into a finite set of subintervals or regions, within which no clock constraint changes status. Thus, in each region, we can remove the clock constraints, deleting all the edges that cannot fire because their clock constraints are false. In this way, we generate a sequence of Deterministic Finite Automata (DFA), that are then combined with the local model by a standard product of automata.

Let $\mathcal{A} = (S, E)$ be an agent class, $\mathcal{D}$ a local path property, and $T$ the time horizon.

First step: enforcing uniqueness of transition labels. We define a new agent class $\mathcal{A}'$ by renaming the local transitions in $E$ to make their labels unique. This allows us to remove edge formulae in $\mathcal{D}$, simplifying the product construction. In particular, if there exist transitions $\epsilon_1, \ldots, \epsilon_k \in E$ having the same label $\alpha$, we rename them by $\alpha_1, \ldots, \alpha_k$, obtaining $E'$. The 1gDTA is updated accordingly, by substituting each edge $q \xrightarrow{\alpha, \varphi, c} q'$ with the set of edges $q \xrightarrow{\alpha_i, \varphi, c} q'$, for $i = 1, \ldots, k$. We call $\mathcal{L}'$ the label set of $\mathcal{A}'$.

Second step: removal of state conditions. We remove from the edge relation of $\mathcal{D}$ all the edges $q \xrightarrow{\alpha_i, \varphi, c} q'$ such that $s \not\models \varphi$, where $s$ is the source state of the (now unique) transition of $\mathcal{A}'$ labelled by $\alpha_i$. At this point, the information carried by state propositions becomes redundant, thus we drop them, writing $q \xrightarrow{\alpha_i, c} q'$ in place of $q \xrightarrow{\alpha_i, \varphi, c} q'$.

Third step: removal of clock constraints. Let $c_1 < c_2 < \cdots < c_m$ be the ordered sequence of constants (smaller than $T$) appearing in the clock constraints of the edges of $\mathcal{D}$. We extend this sequence by letting $c_0 = 0$ and $c_{m+1} = T$. Let $I_j = [c_{j-1}, c_j)$, $j = 1, \ldots, m+1$, be the $j$-th sub-interval of $[0, T]$ identified by such a sequence. For each $j$, we define a Deterministic Finite Automaton (DFA), $\mathcal{D}_j$, whose edge relation is obtained from that of $\mathcal{D}$ by selecting only the edges for which the clock constraints are satisfied in $I_j$, and dropping the clock constraint. Hence, from $q \xrightarrow{\alpha_i, c} q'$ such that $\eta \models c$ whenever $\eta(x) \in I_j$, we obtain the DFA edge $(q, \alpha_i, q')$, denoted also by $q \xrightarrow{\alpha_i} q'$.

Fourth step: synchronization. To keep track of the behaviour of the agents with respect to the property specified by $\mathcal{D}$, we synchronize the agent class $\mathcal{A}'$ with each DFA $\mathcal{D}_j$ through the standard product of automata. The sequence of deterministic automata obtained in this procedure is called the agent class associated with the local property $\mathcal{D}$.

Definition 5.1 (Agent class associated with the local property $\mathcal{D}$).

The agent class associated with the local property $\mathcal{D}$ is the sequence of deterministic automata $\mathcal{A}_j = (S \times Q, E_j)$, $j = 1, \ldots, m+1$, where $S \times Q$ is the state space and $E_j$ is the set of local transitions $(s, q) \xrightarrow{\alpha_i} (s', q')$, such that $s \xrightarrow{\alpha_i} s'$ is a local transition in $E'$ and $q \xrightarrow{\alpha_i} q'$ is an edge in $\mathcal{D}_j$.

Example 5.1.

As an example, in Figure 4, we show the synchronisation steps of the SIR automaton described in Figure 1, and the 1gDTA specification of Figure 3 (a). After the first step, Figure 4 (a), the new agent class of the network node has local transitions with unique labels. The 1gDTA is updated accordingly, by substituting each edge labelled by a renamed action with the corresponding set of edges. We then remove, Figure 4 (b), the redundant state conditions of the 1gDTA. In the third step, Figure 4 (c), we remove the clock constraints. For the considered 1gDTA we have two time intervals, before and after the time bound of the clock constraint, and we define one DFA for each of them. Finally, Figure 4 (d), we synchronise the agent class with each DFA.

Figure 4: The synchronisation steps for the SIR agent class of Figure 1 and the 1gDTA of Figure 3 (a): (a) enforcing uniqueness of transition labels; (b) removal of state conditions; (c) removal of clock constraints; (d) synchronisation.
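The four steps can be prototyped directly on the data structures used in the earlier sketches: within each clock region the 1gDTA reduces to a DFA, and the enriched agent is its product with the agent class. The representation below is an illustrative sketch, not the paper's implementation.

```python
def region_dfa(dta_edges, region):
    """Keep the DTA edges whose clock guard holds on the region and drop the guard.

    dta_edges: (q, label, phi, guard, q') tuples; guards are assumed to be
    constant on the region, so testing any interior point is enough."""
    t_mid = 0.5 * (region[0] + region[1])
    return [(q, a, phi, q2) for (q, a, phi, guard, q2) in dta_edges if guard(t_mid)]

def product(local_transitions, dfa_edges):
    """Product of an agent class (with unique labels) and a per-region DFA.

    local_transitions: (s, label, s') triples; dfa_edges: (q, label, phi, q')
    with the formula phi evaluated on the source state s. Labels with no
    matching DFA edge act as self-loops on the DFA state."""
    dfa_states = ({q for (q, _, _, _) in dfa_edges}
                  | {q2 for (_, _, _, q2) in dfa_edges})
    prod = []
    for (s, a, s2) in local_transitions:
        for q in dfa_states:
            targets = [q2 for (q1, b, phi, q2) in dfa_edges
                       if q1 == q and b == a and phi(s)]
            q2 = targets[0] if targets else q      # implicit self-loop
            prod.append(((s, q), a, (s2, q2)))
    return prod

# Example (using the earlier sketches): the DFA for times before the bound T
# and its product with the SIR agent class:
# dfa_1 = region_dfa(DTA_A["edges"], (0.0, T))
# enriched_1 = product(LOCAL_TRANSITIONS, dfa_1)
```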