A Computational Framework for Adaptive Systems and its Verification

by Yehia Abd Alrahman, et al.

Modern computer systems are inherently distributed and feature the autonomous and collaborative behaviour of multiple components with global goals. These goals are expressed in terms of the combined behaviour of different components, which are usually deployed in dynamic and evolving environments. It is therefore crucial to provide techniques to generate programs for collaborative and adaptive components, with guarantees that they maintain their designated global goals. To this end, we need to extend modelling formalisms and specification languages to account for the specific features of these systems and to permit specifying both individual and system behaviour. We propose a computational framework that allows multiple components to interact in different modes, exchange information, adapt their behaviour, and reconfigure their communication interfaces. The framework permits local interaction based on shared variables and global interaction based on message passing. To be able to reason about local and global behaviour, we extend LTL to consider the exchanged messages and their constraints. Finally, we study the computational complexity of satisfiability and verification under these extensions.








1 Introduction

The advent of the class of autonomous and collaborative systems, where multiple components interact and combine their local behaviour to reach global goals, has changed the perspective of how to design concurrent and distributed systems. A system can no longer be designed in terms of individuals interacting with their environments, but rather as a set of collaborative components with complementary tasks. The main hurdle when dealing with such systems is that global goals are not expressible in terms of the knowledge of individuals, and different components have to collaborate and exchange information to reach their global goals. These systems are inherently distributed and exhibit complex interaction, e.g., supply chains, power grids, etc. Thus, it is crucial to provide techniques that support the generation of programs for collaborative components, by supporting modelling and verification.

In the area of formal modelling and verification, formalisms based on message passing [milnerpi, psi] and on shared variables [AH99b] are dominant, and they are usually considered the viable tools when dealing with distributed systems. As the name suggests, message-passing approaches completely characterise processes in terms of their capabilities to interact, and abstract the access to and manipulation of local states as invisible interactions. Thus, a process cannot instantaneously change its local state and adapt its behaviour while engaging in interaction. On the other hand, shared-variables approaches completely characterise processes in terms of the sequence of updates to the variables in their states and abstract from explicit message exchange. A local state of a process contains variables that are shared with other components, and updates to these variables are used to model message exchange. Although in some approaches (e.g., see [DBLP:conf/concur/FisherHNPSV11]) a process can hide/reveal these variables, it is still not able to select on what basis these variables are shared and how to coordinate this sharing.

Here, we present a framework to model interactions in reconfigurable and adaptive systems. We model a system as a set of components that execute independently and influence each other's behaviour only by means of message exchange. Each component has its own state, consisting of a set of local variables whose values change as side effects of interaction. Components are equipped with dynamic communication interfaces that are parametric to their local states. They have the ability to characterise the set of receivers by means of predicates and also to determine coordination mechanisms. Furthermore, messages transmit data from the local state of senders to the local states of receivers.

We distinguish between a local behaviour, represented by changes in the values of a component's local variables, and a global one, represented by explicit message exchange. This representation naturally captures the structure of actual distributed systems, where a component represents a machine with a local memory and a (possibly multi-threaded) program manipulating this memory. Threads have instantaneous memory access and coordinate by means of synchronisation, while machines are distributed and interact by exchanging messages. In our framework, only message exchange is counted as a transition in the underlying labelled transition system; local behaviour is abstracted and captured as instantaneous side effects of message exchange. Thus, components can manipulate their states instantaneously while engaged in interaction. Furthermore, message exchange might trigger new behaviours instantaneously. This clear separation between local and global behaviour makes it easy to reason about either one. Our framework can be considered as a generalisation of the work on the AbC calculus [forte16, info], which we enrich with different interaction mechanisms in a clean and compact way. Also, our framework is symbolic (i.e., states are interpretations of variables and transitions are variable updates) and thus more convenient for analysis than for programming. The message-passing mechanisms in our framework are unique and provide fine control over what information is shared, when, how, and with whom.

To be able to reason about local and global behaviour, we extend LTL to consider the exchanged messages and their constraints. The core extension is not merely referral to message contents, which can be done by considering a richer alphabet but rather by considering the constraints that senders impose on possible receivers. We also study the computational complexity of satisfiability and verification considering these extensions.

The paper is structured as follows: In Sect. 2, we unify notations and give the necessary background. In Sect. 3, we informally present our model and motivate our design choices while in Sect. 4 and Sect. 5 we formally introduce the model and our extension to LTL. In Sect. 6, we study the satisfiability and the verification problems of our extension and finally in Sect. 7 we report related works and conclude the paper highlighting future directions.

2 Transition systems and Discrete Structures

A Doubly-Labeled Transition System (TS) is T = ⟨Σ, Δ, S, S₀, R, L⟩, where Σ is a state alphabet, Δ is a transition alphabet, S is a set of states, S₀ ⊆ S is a set of initial states, R ⊆ S × Δ × S is a transition relation, and L : S → Σ is a labeling function.

A path of a transition system T is a maximal sequence σ = s₀, a₀, s₁, a₁, … of states and transition labels such that s₀ ∈ S₀ and for every i ≥ 0 we have (sᵢ, aᵢ, sᵢ₊₁) ∈ R. We assume that for every state s ∈ S there are a ∈ Δ and s′ ∈ S such that (s, a, s′) ∈ R. Thus, a sequence is maximal iff it is infinite. If |Δ| = 1 then T is a state-labeled transition system, and if |Σ| = 1 then T is a transition-labeled transition system.
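To make the definition concrete, here is a minimal, hypothetical encoding of a doubly-labeled transition system in Python (all names are ours, chosen for illustration), together with a check that a finite alternating sequence of states and labels is a valid prefix of a path:

```python
from dataclasses import dataclass

@dataclass
class DLTS:
    """A doubly-labeled transition system: states carry labels from a
    state alphabet and transitions carry letters from a transition alphabet."""
    states: set
    initial: set
    transitions: set   # triples (state, transition letter, state)
    labeling: dict     # state -> state-alphabet letter

    def is_path_prefix(self, seq):
        """Check that s0, a0, s1, a1, ..., sn is a valid finite prefix of
        a path: it starts in an initial state and every consecutive
        (state, letter, state) triple is a transition."""
        if not seq or seq[0] not in self.initial:
            return False
        return all(
            (seq[i], seq[i + 1], seq[i + 2]) in self.transitions
            for i in range(0, len(seq) - 2, 2)
        )

ts = DLTS(
    states={"s0", "s1"},
    initial={"s0"},
    transitions={("s0", "msg", "s1"), ("s1", "ack", "s0")},
    labeling={"s0": "idle", "s1": "busy"},
)
assert ts.is_path_prefix(["s0", "msg", "s1", "ack", "s0"])
assert not ts.is_path_prefix(["s1", "ack", "s0"])   # s1 is not initial
```

Collapsing the transition alphabet to a single letter yields the state-labeled special case, and collapsing the state alphabet yields the transition-labeled one.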

We introduce Discrete Systems (DS) that represent state-labeled transition systems symbolically. A DS is D = ⟨V, θ, ρ⟩, where the components of D are as follows:

  • V: A finite set of typed variables. Variables range over discrete domains, such as Booleans or integers. A state s is an interpretation of V, i.e., if Dom(v) is the domain of a variable v, then s is an element in ∏_{v ∈ V} Dom(v).

  • θ: The initial condition. This is an assertion over V characterizing all the initial states of the DS. A state is called initial if it satisfies θ.

  • ρ: A transition relation. This is an assertion ρ(V, V′), where V′ is a primed copy of the variables in V. The transition relation ρ relates a state s to its ρ-successors s′, i.e., (s, s′) ⊨ ρ, where s supplies the interpretation to the variables in V and s′ supplies the interpretation to the variables in V′.

A DS D gives rise to a state-labeled transition system T_D, where the state alphabet Σ and the set of states are both the set of states of D, S₀ is the set of initial states, and R is the set of triplets (s, a, s′), for the unique transition label a, such that (s, s′) ⊨ ρ. The number of states of T_D is usually exponentially larger than the description of the DS. The paths of D are the paths of T_D.
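As a sketch of this blow-up, the toy discrete system below (variable names and predicates are invented for illustration) unfolds two Boolean variables into an explicit state space; with |V| variables, the number of explicit states is the product of the domain sizes:

```python
from itertools import product

# V: typed variables with finite domains
domains = {"x": [0, 1], "y": [0, 1]}
# theta: initial condition, as a predicate over a state
init = lambda s: s["x"] == 0 and s["y"] == 0
# rho: transition relation over V and its primed copy V'
trans = lambda s, t: t["x"] == (s["x"] + 1) % 2   # y is unconstrained

# unfolding: every interpretation of V is an explicit state
states = [dict(zip(domains, vals)) for vals in product(*domains.values())]
initial = [s for s in states if init(s)]
edges = [(s, t) for s in states for t in states if trans(s, t)]

assert len(states) == 2 ** len(domains)   # exponential in |V|
assert len(initial) == 1
assert len(edges) == 8   # each of the 4 states has 2 rho-successors
```

The symbolic description (three short predicates) stays small while the explicit system grows exponentially, which is the point of the DS representation.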

To the best of our knowledge it is not common to represent doubly-labeled transition systems or transition-labeled transition systems as discrete systems. One way to do that would be to include multiple transition relations, one for every letter in Δ. Another way, which we adopt and extend below, would be to include additional variables that encode the transition alphabet. Given such a set of variables W, an assertion ρ(V, W, V′) characterizes the triplets (s, a, s′) such that (s, a, s′) ⊨ ρ, where s supplies the interpretation to V, a to W, and s′ to V′. We are not aware of an implementation of the second approach to encode (very) large transition alphabets.

3 Overview: Reconfigurability and Adaptivity

We propose a variant of discrete structures that models a system as a set of independent components that execute concurrently and influence each other's behaviour only by means of message exchange. Message exchange is adaptable and reconfigurable, in the sense that a component changes its state due to messages that are influenced by the states of other components. To illustrate the distinctive features of our variant, we model a distributed resource allocation scenario where a cloud infrastructure provides computing virtual machines (VMs) to clients. The infrastructure consists of high and standard performance VMs and a resource manager that allocates VMs to clients. The manager commits to providing high performance VMs to clients, but when all of these machines are reserved, clients are assigned to standard ones. The manager only acts as an interface that routes clients to VMs anonymously; the interactions between VMs and clients then proceed independently on links whose utility changes based on the needs of communication at a given stage.

First, in the spirit of DS, each component has its own local state consisting of a set of local variables whose values change as side effects of interaction. From an external point of view, only messages emitted by a component represent its external behaviour, while changes to its state variables represent the local one. Technically, every component has a send and a receive transition relation. Both relations are defined over variables and primed variables but include additional components, to be further explained below.

In our example, a client has the following set of local variables , where is a program counter, is a common link used to interact with the resource manager, is a placeholder for a link name that can be learnt at run-time, and is the run-time role of the client (in our example the role is fixed). Intuitively, the initial condition of a client is: specifying that initially is at location , the resource manager is reachable at channel , no mobile link is assigned () and the role is .

The communication interfaces of components are parameterized by their local states, and when states change the set of communicating partners might change, creating dynamic and opportunistic interactions. For instance, when is set to , the client discards all messages on ; also, when a run-time channel is assigned to , the client starts receiving messages on that channel. Components interact either based on anonymous broadcast (with non-blocking send and blocking receive) or based on channelled multicast (with blocking send and receive). Thus, the agreed set of channels ch includes the broadcast channel .

Every message includes a predicate specifying conditions on the states of its receivers. A receiving component can receive the message only if it satisfies the predicate. In a broadcast, receivers (if any exist) may anonymously receive the message when they are interested in its values (and when they satisfy the send predicate); otherwise, a component may not participate in the interaction. In a multicast, all components listening on the multicast channel must participate to enable the interaction.

Broadcast is used when components are unaware of the existence of each other while (possibly) sharing some resources, whereas multicast is used to capture a more structured interaction where components have dedicated links to interact. The idea is that initially components share a limited and finite set of channels. These channels can be reserved or released by means of broadcast messages, and thus a structured communication interface can be built at run-time, starting from an initial and (possibly) flat structure. In our example, clients are not aware of the existence of each other while they share the resource manager channel . Thus, they may coordinate to use the channel anonymously by means of broadcast. A client reserves the channel by means of a broadcast message with a predicate targeting components with a client role. Other clients disconnect from and wait for a release message.
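The interplay of the two modes can be sketched as follows. This is an illustrative simplification (components as dictionaries, predicates as Python functions), not the formal semantics given in Section 4:

```python
BROADCAST = "*"   # stands in for the distinguished broadcast channel

def deliver(channel, predicate, components):
    """Return the receivers of a message, or None when a multicast is
    blocked. Broadcast send is non-blocking: components that fail the
    send predicate (or are not listening) simply miss the message."""
    if channel == BROADCAST:
        return [c for c in components if predicate(c["state"])]
    # channelled multicast: every listener must satisfy the predicate,
    # otherwise the sender stays blocked
    listeners = [c for c in components if c["listens"](channel)]
    if listeners and all(predicate(c["state"]) for c in listeners):
        return listeners
    return None

components = [
    {"state": {"role": "client"}, "listens": lambda ch: ch == "cRM"},
    {"state": {"role": "vm"},     "listens": lambda ch: False},
]
is_client = lambda st: st["role"] == "client"
is_vm = lambda st: st["role"] == "vm"

assert len(deliver(BROADCAST, is_client, components)) == 1   # the VM misses it
assert len(deliver("cRM", is_client, components)) == 1       # multicast succeeds
assert deliver("cRM", is_vm, components) is None             # listener fails: blocked
```

The `"cRM"` channel name is borrowed from the resource-manager example; the reserve/release protocol above corresponds to broadcasting a predicate that targets only components whose role is client.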

Messages can be used by components to specify what information is shared, when, how, and with whom. Namely, the values in the message specify what is exposed to the context; changes of specific state variables specify when a message is emitted; the channel specifies how to coordinate with others; and the send guard specifies who is targeted. Targeted components use incoming messages as a means to update their states, reconfigure their interfaces, and/or adapt their behaviour.

Accordingly, each message carries an assignment to a set of data variables d. Thus, the send and receive transition relations are parameterized by the current and primed variables of the component, the data variables transmitted on the message, and by the channel name. The send and receive transition relations of a client are parameterized by the data variables and and the channel (through the variable ) and reported below:

Namely, a client initially (i.e., ) broadcasts a message to inform other clients to disconnect channel (stored in local variable ) and the counter advances. Now the client uses to send a request to the resource manager. The update of from to is a side effect of a message receipt. Then, when , the client releases . Lastly, the client buys a service from the VM on a dedicated link sent by the VM during interaction (stored in local variable ), releases the link, and resets its .

The first two disjuncts of the receive transition relation above state that when a client receives a broadcast message it disconnects channel if the message is and connects it otherwise. Lastly, a dedicated channel name is received from a VM and is assigned to .

Note that the send and the receive transition relations specify when, what and how the information is shared between components, but they do not specify who is involved. Thus, we add two more features. First, components have send guards parameterized by their variables and by a set of common variables cv (variables that each component has a local copy of). In a given state of the sending component, the guard specifies what are the possible assignments to the common variables in the components for whom the message is destined. Second, components have receive guards parameterized by their own variables that determine when a component is ready to receive on a given channel.

In our example, the send guard of a client is of the form:

Namely, broadcasts are destined to components assigning to a value equivalent to the current value of the variable (of the sender), i.e., ; messages on are destined to components assigning to ; messages on are destined to everyone (i.e., the predicate is true). Each component has a relabelling function that is applied to the send guard once a message is received to check its truth. In our example, .

The receive guard is of the form: Namely, reception is always enabled on broadcast and on a channel that matches the value of the variable. Note that these guards are parameterized by local variables and thus may change at run-time, creating a dynamic communication structure.
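The guard mechanism just described can be sketched as a plain function of the local state (the variable name `link` is our own, standing in for the client's mobile-link variable):

```python
BROADCAST = "*"

def receive_guard(state, channel):
    """Reception is always enabled on broadcast, and otherwise only on
    the channel currently stored in the component's link variable."""
    return channel == BROADCAST or channel == state.get("link")

client = {"link": None}
assert receive_guard(client, BROADCAST)          # broadcast always received
assert not receive_guard(client, "vmChannel")    # not connected yet

client["link"] = "vmChannel"   # link name learnt at run-time from a message
assert receive_guard(client, "vmChannel")        # interface reconfigured
```

Because the guard reads the current state, a single variable update is enough to connect or disconnect a channel, which is exactly what makes the communication structure dynamic.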

We may now specify the behaviour of the manager and the virtual machines and show how our multicast interaction can be used to model point-to-point interaction in an easy and clean way.

The resource manager has the following local variables: , where and store channel names to communicate with high and standard performance VMs respectively and the rest are as defined before.

The initial condition is:

The send guard for a manager is always satisfied, (i.e., is ) while the receive guard specifies that a manager only receives broadcasts or on channels that match with values of or variables, i.e., is . The send and receive transition relations are reported below:

In summary, the manager forwards requests received on channel to the high performance VMs first, and if they are fully occupied the requests are forwarded to the standard performance ones. Clearly, the specification of the manager assumes that there are plenty of standard VMs and a limited number of high performance ones. Thus, it only expects a message to be received on channel . Note also that the manager gets ready to handle the next request once a connect message () is received on channel and leaves the client and the selected VM to interact independently.

The virtual machine has the following local variables , where indicates if the VM is assigned, is a group link, is a private link, and the rest are as before. Apart from and , which are machine dependent, the initial condition is of the form: , where initially virtual machines are not listening on the common link . The send guard for a VM is always satisfied (i.e., is ), while the receive guard specifies that a VM always receives on and only receives on and on when it is either assigned () or idle (), i.e., .

The send and receive transition relations are reported below:

Intuitively, a VM receives a message on the group channel , thus activating the common link , and then makes a nondeterministic choice between and messages. A VM sends with its private link on if it is not assigned, or sends on otherwise. Note that a message can only go through if all VMs in group are assigned (the receive guard of a VM accepts a full message only when it is assigned). Furthermore, a message will also be received by the other VMs in the group . As a result, all other available VMs (i.e., ) in the same group do not reply to the request. Thus, one VM is nondeterministically selected to provide a service, and a point-to-point-like interaction is achieved. Note that this easy encoding is possible because components change communication interfaces dynamically by simply enabling and disabling communication channels instantaneously at run-time.
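The point-to-point encoding can be sketched as follows; this is a simplified illustration (messages and group membership are collapsed into one function, and all names are ours) of how exactly one idle VM ends up answering a request:

```python
import random

def handle_request(group):
    """All VMs in the group hear the request on the group channel.
    Idle VMs race to reply; the nondeterministic winner is modelled by a
    random choice. Because the winning reply is also heard by the other
    group members, they stand down, so the client learns exactly one
    private link."""
    idle = [vm for vm in group if not vm["asgn"]]
    if not idle:
        return None        # the group is full: no VM replies
    winner = random.choice(idle)
    winner["asgn"] = True  # the reply reconfigures the whole group
    return winner["plink"]

group = [{"asgn": False, "plink": "p1"}, {"asgn": False, "plink": "p2"}]
link = handle_request(group)
assert link in ("p1", "p2")
assert sum(vm["asgn"] for vm in group) == 1   # exactly one VM was selected
assert handle_request([{"asgn": True, "plink": "p3"}]) is None   # full group
```

The random choice stands in for the nondeterminism of the underlying transition system; the formal model achieves the same effect purely through the enabling and disabling of channels.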

Clearly, our framework supports the essential modes of interaction in a clean and compact way. Note that combining broadcast and multicast is not a random choice; indeed any other possible pairing of the three modes would not be sufficient to model all of them.

4 ReCiPe: Reconfigurable Communicating Processes

In this section, we proceed by formally presenting the computational framework and its main ingredients. We start by specifying components and their local behaviours and then we show how to compose these local behaviours to generate a global (or a system) one. We assume that components rely on a set of common variables cv, a set of data variables d, and a set of channels ch containing the broadcast channel .

Definition 1 (Component)

A component is , where:

  • is a finite set of typed local variables, each of them ranging over a finite domain. A state is an interpretation of , i.e., if is the domain of , then is an element in . We use to denote the primed copy of and to denote the assertion .

  • is a function, associating common variables to local variables. We freely use the notation for the assertion .

  • is an assertion specifying the set of receivers. That is, the predicate, obtained from after assigning and , is checked against every receiver after applying .

  • is an assertion describing the readiness of a component to receive on channel . We let , i.e., every component is always ready to receive a broadcast. We note, however, that receiving a broadcast could have no effect on a component.

  • is an assertion describing the send transition relation.

  • is an assertion describing the receive transition relation. We assume that a component is broadcast input-enabled, i.e.,.

  • is an assertion on describing the initial states of a component, i.e., a state is initial if it satisfies .

Components interact by message exchange. A message is characterised by the channel it was sent on, the data it contained, the sender identity, and the assertion restricting the receivers based on the values of their common variables (with renaming). Formally:

Definition 2 (Observation)

An observation is of the form , where is a channel, is an assignment to d, is an identity, and is a predicate obtained from for the component , where . That is, is the predicate over cv. Intuitively, is obtained from by assigning and for the sender . We interpret as a set of assignments to common variables cv. We freely use to denote either a predicate over cv or its interpretation, i.e., the set of variable assignments such that .

A set of components that agree on the sets of common variables cv, data variables d, and channels ch define a system. We define a doubly-labeled transition system capturing the interaction and then give a DS-like symbolic representation of the same system.

Let be the set of possible observations. That is, let ch be the set of channels, the product of the domains of variables in d, the set of component identities, and the set of predicates over cv then . In practice, we restrict attention to predicates in that are obtained from by assigning to and the identity and a state of some component.

Let denote and let . Given an assignment we denote by the projection of on .

Definition 3 (Transition System)

Given a set of components, we define a doubly-labeled transition system , where and are as defined above, , are the states that satisfy , is the identity function, and is as follows.

A triplet , where , if the following conditions hold:

  • For the sender we have that , i.e., is obtained from by assigning the state of and the channel , and evaluates to .

  • For every other component we have that either (a) , , and all evaluate to , (b) evaluates to and , or (c) , evaluates to and . By we denote the assignment of by the value of in .

Intuitively, an observation labels a transition from to if the sender determines the predicate (by assigning and in ) and the send transition of is satisfied by assigning , and to it, i.e., the sender changes the state from to and sets the data variables in the observation to and all the other components either (a) satisfy this condition on receivers (when translated to their local copies of the common variables), are ready to receive on (according to ), and perform a valid transition when reading the data sent in , (b) are not ready to receive on (according to ) and all their variables do not change, or (c) the channel is the broadcast channel, the component does not satisfy the condition on receivers, and all their variables do not change.
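The receiver conditions can be summarised by a small classifier. This is an illustrative paraphrase of cases (a)-(c) above; the Boolean inputs abstract the receive guard and the sender's predicate on receivers:

```python
BROADCAST = "*"

def receiver_case(ready, satisfies_pred, channel):
    """How a non-sending component treats an observation on `channel`:
    ready          -- its receive guard holds for the channel
    satisfies_pred -- it satisfies the sender's predicate on receivers
    """
    if ready and satisfies_pred:
        return "a"       # participates: performs a receive transition
    if not ready:
        return "b"       # not listening on the channel: variables unchanged
    if channel == BROADCAST:
        return "c"       # broadcast misses it: variables unchanged
    return "blocks"      # multicast listener failing the predicate blocks the send

assert receiver_case(True, True, "c1") == "a"
assert receiver_case(False, True, "c1") == "b"
assert receiver_case(True, False, BROADCAST) == "c"
assert receiver_case(True, False, "c1") == "blocks"
```

The last case is what makes multicast blocking: a ready listener that fails the predicate prevents the transition from being taken at all, whereas on the broadcast channel it is simply excluded.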

We turn now to define a symbolic version of the same transition system. In order to do that we have to extend the format of the allowed transitions from assertions over an extended set of variables to assertions that allow quantification.

Definition 4 (Discrete System)

Given a set of components, a system is defined as follows: , where and and a state of the system is in . The transition relation of the system is characterised as follows:

The transition relation relates a system state to its successors given an observation . Namely, there exists a component that sends a message (an assignment to d) with assertion (an assignment to ) on channel such that all other components satisfying and ready to receive on channel (i.e., ) get the message and perform a receive transition. As a result of interaction, the state variables of the sender and the receivers might be updated. Note that components that are not ready to receive (i.e., ) do not participate in the interaction and stay still. Thus, a blocking multicast arises where a sender is blocked until all ready receivers satisfy . The relation ensures that, when sending on a channel that is different from the broadcast channel , the set of receivers is the set of ready components. In case of broadcast, namely when sending on , components are always ready to receive and the set of receivers not satisfying do not block the sender.

The translation above to a transition system gives rise to a natural definition of a trace, where the information about channels, data, senders, and predicates is lost. We extend the definition of trace to include this information as follows:

Definition 5 (System trace)

A system trace is an infinite sequence of system states and observations: : , , and:

That is, we use the information in the observation to localize the sender and to specify the channel, data values, and the send predicate.

We can show that the traces arising from Definition 5 and the paths of the doubly-labeled transition system in Definition 3 are the same.

Lemma 1

Given a set of components, their system traces are the paths of the induced doubly-labeled transition system.


The condition on the sender in the transition system is that the send transition of the sender holds, corresponding to the conjunct for the sender . The conditions on the receivers in the transition system correspond to the three different disjuncts in the definition of the system trace (and of the discrete system):

  • the receive transition of the receiver holds (), the receiver is interested in the channel (), and by quantifying the values of the common variables separately for each and requiring and effectively we require that the predicate obtained from , when translated to local variables of component , holds over ’s copy of the local copies of the common variables.

  • the receiver is not interested in the channel (), which means that the component does not change its local state variables.

  • only in a broadcast, the receiver is not an intended recipient of the message () and hence does not change its local state variables.

Finally, in order to be able to translate the logic to automata over infinite words in the next section we view system traces as words over an alphabet that consists of the state labels and the system labels together.

Definition 6 (System computation)

A system computation is a function from natural numbers to where is the set of state variable propositions and is the set of observation propositions. That is, assigns truth values to elements of at each time instant. Thus, computations can be viewed as infinite words over the alphabet .

5 Linear Time with Observation Descriptors Logic (LTOL)

In this section, we propose Linear Time with Observation Descriptors Logic (LTOL), an extension of LTL with the ability to refer to the contents of observations. This extension is needed in order to reason about the observations (mostly about the intended set of receivers). Namely, we replace the next operator of LTL with two other operators: possible () and necessary (), both referring to the contents of the message. The syntax of LTOL in Table 1 includes observation descriptors and temporal formulas . The syntax is presented in positive normal form, which will come in handy when translating LTOL formulas into alternating Büchi automata, as shown later. We use the usual abbreviations of , and the usual definitions for and . To simplify the presentation we assume that all variables are Boolean. Clearly, every finite domain can be encoded by multiple Boolean variables. For the purposes of finite-state model checking (see Section 6) this is sufficient.

Table 1: Syntax

Observation descriptors are built from referring to the different components of the observations, namely, the variables in cv and d, the channels ch and the sender. In addition, they allow Boolean combinations and, as the predicates in the observation describe a set of possible assignments to the common variables, we include existential and universal common-variable assignment restrictors. Thus, observation descriptors describe a set of possible observations and both next operators and use them to refer to the observation and the next state. In the following, we use to denote the dual of formula where ranges over either or . Intuitively, is obtained from by switching and and by applying dual to sub formulas, e.g., , , and .

Table 2: Semantics of Observation Descriptors

The semantics of observation descriptors (omitting Boolean connectives) is defined in Table 2. The descriptor requires that at least one assignment to the common variables in the sender predicate satisfies . Dually, requires that all assignments in satisfy . Using the former, we express properties requiring that the sender predicate has a possibility to satisfy , while using the latter we express properties where the sender predicate can only satisfy . For instance, both observations and satisfy , while only the latter satisfies . For example, the observation descriptor says that a message is sent on the broadcast channel with a false predicate; that is, the message cannot be received by other components. In the context of the example in Section 3, the descriptor says that the message is intended for clients and only for clients. We use this as part of a descriptor in an example below. The intention is that formulas can talk about the information that was sent. This enables the logic to ensure that components get "enough information" to allow collaboration. Note that, as the semantics suggests, when and are nested and/or mixed, only the top one matters and the rest are ignored, e.g., is equivalent to . Thus, we assume that observation descriptors are written in this normal form.
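The existential and universal restrictors can be sketched by interpreting the sender predicate as its set of satisfying assignments, as the semantics suggests (the helper names are ours):

```python
def exists_desc(pi, phi):
    """Existential restrictor: some assignment to the common variables
    admitted by the sender predicate pi satisfies phi."""
    return any(phi(a) for a in pi)

def forall_desc(pi, phi):
    """Universal restrictor: every assignment admitted by pi satisfies phi."""
    return all(phi(a) for a in pi)

is_client = lambda a: a["role"] == "client"

mixed = [{"role": "client"}, {"role": "vm"}]   # pi admits clients and VMs
clients_only = [{"role": "client"}]            # pi admits only clients

assert exists_desc(mixed, is_client)           # "may target a client"
assert not forall_desc(mixed, is_client)       # but not "only clients"
assert forall_desc(clients_only, is_client)    # "intended for clients, and only clients"
```

The conjunction of both restrictors over the same condition pins the sender predicate down exactly, which is how the client-reservation property in the example below characterises the intended receivers.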

Given a system computation (as in Definition 6) we define the semantics of when a formula is satisfied in location of the computation. By we denote the system state occurring at the -th time point of the system computation. We denote the suffix of starting with the -th state by and we use to denote the observation in at time point . The semantics is defined in Table 3 for a computation and a time point .

Table 3: Semantics of Formulas

The temporal formula states that a computation at point has a possibility of satisfying iff its observation satisfies and is satisfied in the next point while states that if satisfies then is necessarily satisfied in the next point . For instance, no matter what is not satisfiable while might be.

We also introduce the usual temporal abbreviations (eventually) and (globally).

Note that ltol formulas can be used to localise individual components. For instance, consider the behaviour of a client introduced in Section 3 and the formula below (with non-boolean variables for convenience):

The formula expresses the fact that at any time-point of the computation, every client may utilise channel (if available) only after reserving by a broadcast to components with common variables . Accordingly, all components with client role remain disconnected from until client eventually releases it. Note that the conjunction of the observation descriptors and ensures that the sender guard in the message is exactly . The former states that there exists an assignment of the sender guard that satisfies while the latter ensures that all assignments of the sender guard satisfy .

The rest of the section is dedicated to translating LTOL formulas to Büchi automata and to the complexity of the translation. As mentioned in Definition 6, computations can be seen as infinite words over the alphabet . The following theorem states that the set of computations satisfying a given formula is exactly the set accepted by some finite automaton on infinite words. The theorem and its proof are based on the construction presented in [vardi].

Before we proceed with the theorem, we fix the following notation: given a tuple , we use the notation to return the element of and to return the tuple after replacing its element with .

Theorem 5.1

Given an ltol formula , one can build an alternating Büchi automaton