Abstract State Machines (ASMs) have been used since their introduction in the 1990s to model sequential, parallel and concurrent systems (see [10, Ch. 6, 9]). For sequential systems the celebrated sequential ASM thesis provides an elegant theoretical underpinning showing that every sequential algorithm, as stipulated by three simple, intuitive postulates, is captured by sequential ASMs. This was generalised for (synchronous) parallel systems (the behavioural theory proven by Ferrarotti et al. simplifies the previously developed parallel ASM thesis [4, 5] by exploiting the idea of multiset comprehension terms in bounded exploration witnesses, which was stimulated by previous research on non-deterministic database transformations [23, 24]), and for asynchronous concurrent systems (this closed the gap in the behavioural theory of concurrent systems, as the earlier definition of partially ordered runs was generally considered to be insufficient; there are examples of concurrent systems satisfying the intuitively clear property of sequential consistency without having partially ordered runs), in which a concurrent system is defined by a family of algorithms assigned to agents that is subject to a concurrency postulate (though the proof of the concurrent ASM thesis was first only conducted for families of sequential algorithms, the generalisation to families of parallel algorithms does not cause serious difficulties).
This characterisation can be applied to many different models of concurrent computation (see e.g. [1, 3, 12, 18, 25, 26]). While the thesis has been grounded on the assumption that shared locations are used, it was shown that the theory remains valid if different agents can only interact via messages, i.e. sharing is restricted to mailboxes. This includes the case of a strict separation between memory managing agents and other agents that can only access the shared memory by sending query and update requests to the memory agents. Naturally, an agent sending a request to a memory management agent expects that the result is the same as if there had been a concurrent run with the agent executing its request directly on the shared locations.
However, as has been observed, this expectation can be violated by relaxed shared memory management, in particular in combination with conflicting updates or data replication. Regarding conflicting updates, if agents a1 and a2 simultaneously try to update the value at a location to new values v1 and v2, respectively, with v1 ≠ v2, a concurrent run would be discarded. Instead of this, a memory management agent receiving update requests from a1 and a2 might randomly choose one of the possible values v1 or v2, so for one of the agents there would be a lost update and its latest step would not have been executed properly.
Therefore, it has been requested that “if the effects of such techniques are behaviourally relevant for the problem …, then this should be described by a precise model through which a programmer can understand, check and justify that the code does what it is supposed to do”. For the random selection such a precise model can easily be defined by a memory agent with the following rule, where a shared (set-valued) function symbol Updates denoting an update set is used:
FORALL ℓ WITH Updates(ℓ) ≠ ∅ CHOOSE v ∈ Updates(ℓ) IN ℓ := v
If this behaviour is known, one can specify what an agent should do in case of possibly lost updates, e.g. use strict transactions to prevent the situation from arising (see e.g. the ASM specification of concurrent systems with transactional behaviour), or integrate algorithms for mutual exclusion [2, 17], or ensure that an agent only continues after the memory agent (considered as part of the environment) has terminated its step, etc.
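To make the lost-update behaviour concrete, the following minimal Python sketch (all names are hypothetical, not taken from the article) models a memory agent that resolves conflicting simultaneous update requests by randomly choosing one of the requested values per location:

```python
import random

def apply_update_requests(memory, requests):
    """Resolve conflicting updates: for each location, randomly pick one
    of the requested values; the other agents' updates are silently lost.
    requests is a list of (agent, location, value) triples."""
    by_location = {}
    for agent, location, value in requests:
        by_location.setdefault(location, []).append(value)
    for location, values in by_location.items():
        memory[location] = random.choice(values)  # lost update for the rest
    return memory
```

Whichever value is not chosen is silently dropped, which is exactly the situation an agent must be able to anticipate from the specification.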
In this article we investigate the case of replicated data maintained by a memory management subsystem, where the replication should appear neither in the requests nor in the corresponding answers, which is a standard requirement in distributed databases with replication [19, Chap. 13, pp. 459ff.]. Consider first an example of undesirable behaviour with four agents executing the following respective programs (where ; denotes sequential execution, and an output operation reads the value at a location and copies it to some output):
Then there is no concurrent run where (1) the locations initially hold their starting values, (2) each agent makes each possible move once, (3) one reading agent prints its values in one order, and (4) the other reading agent prints them in the contradictory order. However, if the two locations are replicated, say that there are always two copies, and an update by one of the writing programs affects only a single copy, such a behaviour will indeed be enabled.
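Since the concrete programs are elided above, the following toy Python sketch shows one plausible instantiation of such a replication anomaly (agent names a1 to a4 and the two-locations setup are illustrative assumptions): two writers each update only a single copy of x and y, and two readers reading from different copies then observe states that force contradictory orderings of the two writes, which no single-copy interleaving can produce.

```python
# Two copies per location; a write lands on one copy only (toy model).
copies = {"x": [0, 0], "y": [0, 0]}

copies["x"][0] = 1   # writer a1: x := 1 (affects copy 0 only)
copies["y"][1] = 1   # writer a2: y := 1 (affects copy 1 only)

# Reader a3 reads copy 0 of each location, reader a4 reads copy 1:
a3_view = {"x": copies["x"][0], "y": copies["y"][0]}   # sees x=1, y=0
a4_view = {"x": copies["x"][1], "y": copies["y"][1]}   # sees x=0, y=1

# a3's view forces the x-write before the y-write, a4's the opposite;
# no interleaved run on single copies can yield both views:
assert a3_view == {"x": 1, "y": 0}
assert a4_view == {"x": 0, "y": 1}
```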
Therefore, our objective is that such behaviour must be understandable from the specification, so that the developer of the system can see whether the consequences of such behaviour can be tolerated or additional consistency assurance measures have to be included. We assume a rather simple model, where shared data is logically organised in relations with primary keys, and data can only be accessed by means of the primary key values. We further assume relations to be horizontally fragmented according to values of a hash function on the primary key values, and these fragments are replicated. Replicas are assigned to different nodes, and several nodes together form a data centre, which is handled by one dedicated data management agent (this is similar to the data organisation in the noSQL database system Cassandra, but otherwise the Cassandra system is of no importance for this article; we may think of nodes as storage units and of data centres as physical machines managing them).
In addition, values in replicas carry logical timestamps set by the data centres and stored in the records in the nodes of the memory management subsystem. We further adopt Lamport’s simple approach for the maintenance of timestamps, which basically advances a clock if its data centre receives a record with a future timestamp. This allows us to formulate and investigate policies that guarantee certain to-be-specified levels of consistency (as known from replication in relational databases).
For retrieval of a set of records a specified number of replicas has to be read, and for each record always the one with the latest timestamp will be returned. Depending on how many replicas are accessed, the returned records may or may not be (in the strict sense) outdated. Likewise, for the update of a set of records timestamps will be created, and a specified number of replicas of the records will be stored. Success of retrieval or update will be reported according to specified read- and write-policies.
In Section 2 we first specify the behaviour of a concurrent system with shared data, requiring that all agents interact with this subsystem for data retrieval and updates using appropriate SEND and RECEIVE actions. The memory management subsystem will be specified by a separate collection of agents. In Section 3 we investigate a refinement concerning policies governing how many replicas are to be read or updated, respectively. We show that some combinations of replication policies enable view compatibility, i.e. data consistency, which formalises the expectation above. In Section 4 we refine our specification taking the communication between data centres into account, and address the enforcement of the read- and write-policies. We obtain a complete, though not necessarily correct, refinement, and as a consequence view compatibility cannot be guaranteed anymore. In fact, we even show that view compatibility implies view serialisability. That is, without exploiting the possibility of transactions, at least for single read or write requests, consistency cannot be preserved. Finally, we conclude with a brief summary and outlook.
2 Ground Model for Shared Memory Management with Replication
We assume some familiarity with Abstract State Machines (ASMs), which can be understood as a form of pseudo-code with well-founded semantics (here we do not repeat the formal definition of the semantics of ASMs; detailed definitions can be found in the textbook [10, Sect. 2.2/4]). The signature of an ASM is a finite set of function symbols f, each associated with an arity ar(f). A state S is a set of functions f_S of arity ar(f) over some fixed base set B, given by interpretations of the corresponding function symbols f. Each pair (f, (v_1, …, v_{ar(f)})) comprising a function symbol and arguments is called a location, and each pair of a location and a value is called an update. A set of updates is called an update set. The evaluation of terms is defined as usual by the interpretation of the function symbols.
We say that val_S(ℓ) = f_S(v_1, …, v_{ar(f)}) is the value at location ℓ = (f, (v_1, …, v_{ar(f)})) in state S. ASM rules are composed using
f(t_1, …, t_n) := t (with terms built over the signature) - assignment,
IF φ THEN r_1 ELSE r_2 - conditional,
r_1 r_2 … r_n - parallel composition,
FORALL x WITH φ(x) DO r(x) - bounded parallel composition,
CHOOSE x WITH φ(x) IN r(x) - choice, and
LET x = t IN r(x) - let.
Each rule r yields an update set Δ_r(S) in state S. If this update set is consistent, i.e. it does not contain two updates with the same location and different values, then applying this update set defines a successor state S + Δ_r(S).
Regarding the function symbols in we further distinguish static, dynamic and derived functions. Only dynamic function symbols can appear as the outermost symbol on the left hand side of an assignment.
2.1 Concurrent Communicating Abstract State Machines
A concurrent ASM (cASM) is defined as a family of pairs (a, M_a), each consisting of an agent a and an ASM M_a.
Let Σ_a denote the signature of the ASM M_a. Taking the union Σ of these signatures we distinguish between global states built over Σ and local states for agent a built over Σ_a; the latter are simply projections of the former onto the subsignature.
A concurrent run of a concurrent ASM is a sequence S_0, S_1, … of states such that for each i there is a finite set A_i of agents such that S_{i+1} results from simultaneously applying update sets Δ_a for all agents a ∈ A_i that have been built by a in some preceding state S_{j(a)} (with j(a) ≤ i depending on a), i.e. Δ_a = Δ_{M_a}(S_{j(a)}) and S_{i+1} = S_i + ⋃_{a ∈ A_i} Δ_a.
Dynamic functions in Σ_a can further be private or shared. In the latter case they can also be updated by other agents and thus appear in at least one other signature Σ_b with b ≠ a.
In order to isolate agents responsible for a memory management subsystem we exploit communicating concurrent ASMs (ccASMs). In a ccASM the only shared function symbols take the form of mailboxes. Sending a message m from a to b means updating the out-mailbox of a by inserting m into it. This mailbox is a set-valued shared location with the restriction that only the sender can insert messages into it and only the environment, in this case understood as the message processing system, can read and delete them. The message processing system will move the message to the in-mailbox of the receiver b. Receiving a message m by b means in particular that b removes m from its in-mailbox and performs some local operation on m.
Therefore, in ccASMs the language of ASM rules above is enriched by the following constructs (instead of describing the details of the local RECEIVE action of an agent we only use the corresponding RECEIVED predicate; it means that the message is in the mailbox of the agent who, or in general whose mailbox manager, RECEIVED it, i.e. the message processing system has inserted the message into the receiver’s in-mailbox and deleted it from the sender’s out-mailbox):
SEND(message, from:sender, to:receiver),
RECEIVE(message, from:sender, to:receiver),
RECEIVED(message, from:sender, to:receiver), and
CONSUME(message, from:sender, to:receiver).
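The mailbox discipline behind these constructs can be sketched as a toy Python model (class and method names are hypothetical; the real ccASM semantics is of course defined on ASM states, not Python objects):

```python
from collections import deque

class MailboxSystem:
    """Toy model of the ccASM mailbox discipline: SEND inserts into the
    sender's out-mailbox; the message processing system (the environment)
    moves messages to the receiver's in-mailbox; RECEIVED tests membership;
    CONSUME removes a received message."""

    def __init__(self):
        self.out_box = {}   # agent -> deque of (message, receiver)
        self.in_box = {}    # agent -> list of (message, sender)

    def send(self, message, sender, receiver):
        self.out_box.setdefault(sender, deque()).append((message, receiver))

    def deliver_all(self):
        # the environment's message processing step
        for sender, box in self.out_box.items():
            while box:
                message, receiver = box.popleft()
                self.in_box.setdefault(receiver, []).append((message, sender))

    def received(self, message, sender, receiver):
        return (message, sender) in self.in_box.get(receiver, [])

    def consume(self, message, sender, receiver):
        self.in_box[receiver].remove((message, sender))
```

Note that a message only becomes RECEIVED after the environment’s delivery step, mirroring the separation between sender, message processing system, and receiver.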
Let us consider the situation where all shared data is organised in relations with a unique primary key. We can model this by a set of function symbols R_1, …, R_k, where each R_i has a fixed arity n_i and a fixed “co-arity” m_i, such that in each state we obtain partial functions R_i from n_i-tuples of keys to m_i-tuples of values, which are almost everywhere undefined.
When dealing with a memory management subsystem we have to specify a subsystem with separate agents that maintain these locations. A read access by an agent a aims at receiving a subset of relation R containing those records with key values satisfying a condition φ, i.e. the subsystem has to evaluate a term of the form {(k, v) | R(k) = v ∧ φ(k)}. As R is not in the signature Σ_a, the agent a, instead of using such a term in a rule, must send a read-request and wait for a response, i.e. it executes
SEND(read(R, φ), from:a, to:home(a))
(the message just contains the name R of the function symbol and the formula φ, and home(a) denotes a not further specified receiver, which for agent a represents the memory management subsystem), and waits until, once RECEIVED(answer, from:home(a), to:a) becomes true, it can RECEIVE
the requested value from the memory management subsystem. We abstract from the details of the communication but assume the communication to be reliable (no message gets lost or damaged). Where clear from the context, for reasons of succinctness we notationally omit the sender and receiver parameters in SEND and RECEIVE.
Naturally, the received answer corresponds to a unique previously sent read-request. As in the sequel we concentrate on the handling of a single message by the memory management subsystem, the correct association of possibly several answers with several requests is of minor importance for us.
The answer in the message must be a relation of the appropriate arity satisfying the key property above. The agent a can store such an answer using a non-shared function or process it in any other way, e.g. aggregate the received values. This is part of the ASM rule of the agent, which we do not consider any further.
In the SEND and RECEIVE rules we use a fixed agent home(a), with which the agent a communicates. It will be unknown to the agent a whether this agent home(a) processes the read-request itself or whether it communicates with other agents to produce the answer.
Analogously, for bulk write access an agent a may want to update all records of R with a key defined in a relation L to the new values given by L (note that in this way we capture a deletion of a record in R with key k by having L(k) = undef; also insertions are subsumed by this operation: if L(k) ≠ undef holds, but R(k) = undef, then a new record in R with key k is inserted). While this would correspond to the ASM rule FORALL k WITH L(k) defined DO R(k) := L(k), the agent must send a write-request and wait for a response, i.e. it executes
SEND(write(R, L), from:a, to:home(a))
(again, the message only contains the name R of the function symbol and a relation L), and waits to receive an acknowledgement to the write-request (naturally, the remark above concerning the association of an answer with a unique previously sent request extends analogously to write requests), i.e. to RECEIVE it.
This defines a concurrent communicating ASM. Here, for the sake of completeness, we may think of a single memory agent m that receives read and write requests and executes them in one step (note that the answer to a read request is a set, which may be empty); in particular, we have home(a) = m for all agents a. Thus, the rule of m looks as follows:
IF RECEIVED(read(R, φ), from:a) THEN
  CONSUME(read(R, φ), from:a)
  LET ans = answer(R, φ) IN SEND(ans, to:a)
IF RECEIVED(write(R, L), from:a) THEN
  CONSUME(write(R, L), from:a)
  FORALL k WITH L(k) defined DO R(k) := L(k)
  SEND(acknowledge(write, R), to:a)
where answer(R, φ) = {(k, v) | R(k) = v ∧ φ(k)}.
2.2 Memory Organisation with Replication
In the previous subsection we assumed that logically (from the users’ point of view) all data is organised in relations with a unique primary key. However, as we want to emphasise replication, instead of a single location there will always be several replicas, and at each replica we may have a different value. We use the notion of a cluster to refer to all (replicated) locations associated with a logical relation R.
Each cluster is associated with several data centres, and each data centre comprises several nodes. The nodes are used for data storage, and data centres correspond to physical machines maintaining several such storage locations. So let D denote the set of data centres, and let D_i ⊆ D (1 ≤ i ≤ k) be the sets of data centres maintaining the relations R_i.
First let us assume that each relation is fragmented according to the values of a hash-key. That is, for each R_i we can assume a static hash-function h_i assigning a hash-key to each key value in the domain of R_i, i.e. those keys at which in the memory management system possibly some value may be defined. We further assume a partition of the interval of hash-key values into ranges such that each range is again an interval. These range intervals are used for the horizontal fragmentation of the to-be-represented function R_i into fragments: each fragment comprises the restriction of R_i to those keys whose hash-key falls into one range.
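The hash-based horizontal fragmentation can be illustrated by a small Python sketch (the concrete hash function and range partition are illustrative assumptions, not prescribed by the article):

```python
def fragment_index(key, ranges, hash_fn=hash):
    """Map a primary-key value to its horizontal fragment: the index of the
    hash-range interval containing hash_fn(key). ranges is a list of
    half-open intervals [lo, hi) partitioning the hash-key interval."""
    h = hash_fn(key)
    for i, (lo, hi) in enumerate(ranges):
        if lo <= h < hi:
            return i
    raise ValueError("hash value outside the partitioned interval")
```

For example, with the partition [0, 50), [50, 100) of hash values modulo 100, each key is assigned to exactly one fragment.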
All these fragments will be replicated and their elements associated with a value (where defined by the memory management system), using a fixed replication factor for each cluster. That is, each fragment will be replicated a fixed number of times for each data centre in the cluster. A set of all pairs with key and an associated value in the memory management system is called a replica of the fragment.
More precisely, assume that each data centre d consists of several nodes, identified by d and a number ν. Then we use a predicate to denote that the node with number ν in the data centre d contains a replica of a given fragment. To denote the values in replicas we use dynamic functions of arity n_i and co-arity m_i + 1 (functions we again call replicas). That is, instead of the logical function symbol R_i we use a family of replica function symbols, one for each fragment, data centre and node holding a replica, and we request that a replica function is only defined at keys whose hash-key falls into the fragment’s range. For the associated values we have pairs (v, t), where t is an added timestamp value, and values may differ from replica to replica, i.e. there can be different values with different timestamps in different replicas of the same fragment.
Each data centre d maintains a logical clock that is assumed to advance (without this being further specified) and evaluates to the current time at data centre d. Timestamps must satisfy the following requirements:
Timestamps are totally ordered.
Timestamps set by different data centres are different from each other (this requirement can be fulfilled by assuming that a timestamp created by data centre d has the form n + o_d with a positive integer n and an offset o_d with 0 ≤ o_d < 1, such that offsets of different data centres are different; equivalently, one might use integer values for timestamps plus a total order on data centres that is used for comparisons in case two timestamps are otherwise equal).
Timestamps respect the inherent order of message passing, i.e. when data with a timestamp t is created at data centre d and sent to data centre d′, then at the time the message is received the clock at d′ must show a time larger than t.
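A plausible Python sketch of such a data-centre clock combines Lamport's advance-on-receive rule with the offset scheme mentioned above (class and method names are hypothetical):

```python
import math

class DataCentreClock:
    """Logical clock of a data centre (toy sketch). The fractional offset
    makes timestamps created by different data centres distinct; this is
    one possible realisation of the uniqueness requirement."""

    def __init__(self, centre_id, num_centres):
        self.counter = 0
        self.offset = centre_id / num_centres  # unique fraction in [0, 1)

    def now(self):
        return self.counter + self.offset

    def tick(self):
        # advance the clock before creating a new timestamp
        self.counter += 1
        return self.now()

    def adjust(self, received_ts):
        # Lamport's rule: on receiving data with a future timestamp,
        # advance the local clock past it
        self.counter = max(self.counter, math.floor(received_ts) + 1)
```

After `adjust`, the local clock is guaranteed to show a time larger than any timestamp received so far, which is exactly the third requirement.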
2.3 Internal Request Handling for Replicated Memory
When dealing with a memory management subsystem with replication, the request messages sent by agents remain the same, but the internal request handling by the memory management subsystem changes. This defines a refined concurrent communicating ASM.
Each agent possesses a private version of the shared location, parameterised by itself (an exact definition for this is given by the notion of ambient dependent function). Consider a read request read(R, φ) received from agent a by data centre d; let d be fixed for the rest of this section. Due to the fact that data is horizontally fragmented, we need to evaluate several requests concerning keys, one request for each fragment index, and then build the union of the partial answers.
In order to evaluate such a request, several replicas of the fragment will have to be accessed. Here we leave out any details on how these replicas are selected and accessed; this will be handled later by means of refinement. We only request that the selection of replicas complies with a read-policy. Such a policy will also be left abstract for the moment and defined later (we also leave out the treatment of nodes that are not reachable; it can tacitly be assumed that if a node at a data centre cannot be reached, then all operations affecting data at this node will be logged and executed once the node becomes available again).
When reading actual data, i.e. evaluating the selected replicas for the relevant key values k, we obtain different time-stamped values (v, t), out of which a value with the latest timestamp is selected and sent to a as the up-to-date value at k. (When there is no record in relation R with key k, we would normally write R(k) = undef in an ASM, but in the replication context it will be simpler to write (undef, t_min) instead, i.e. all non-existing data are considered to carry the smallest possible timestamp, denoted t_min. Furthermore, if a record is deleted, we keep a deletion timestamp, so we may also find (undef, t) with t > t_min. As we will see, such a deletion timestamp becomes obsolete once the value undef has been assigned to all replicas, i.e. the assumption of deletion timestamps does not disable physical removal of data from the database.) The requirement that timestamps set by different data centres differ implies that for a given key the value with the latest timestamp is unique. All records obtained this way will be returned as the result of the read request to the issuing agent a. Thus, we obtain the following ASM rule AnswerReadReq, to be executed by any data centre upon receipt of a read request from an agent a:
AnswerReadReq =
  IF RECEIVED(read(R, φ), from:a) THEN
    CONSUME(read(R, φ), from:a)
    FORALL fragment indices j
      CHOOSE a set N_j of replicas WITH read_policy(N_j)
        LET T(k) = the time-stamped values for key k in the replicas of N_j IN
        LET t_max(k) = the largest timestamp occurring in T(k) IN
        LET ans_j = {(k, v) | φ(k) ∧ (v, t_max(k)) ∈ T(k)} IN
    SEND(⋃_j ans_j, to:a)
Note that the unique value with the latest timestamp may be undef and that the returned answer may be the empty set.
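The read-side selection of the latest-timestamped value can be sketched in Python as follows (a replica is modelled as a dictionary mapping keys to (value, timestamp) pairs; absent records carry the smallest possible timestamp, as discussed above; names are illustrative):

```python
MIN_TS = float("-inf")  # smallest possible timestamp: stands for non-existing data

def answer_read(replicas, keys):
    """For each requested key, return the value carrying the latest
    timestamp among the accessed replicas; None plays the role of undef."""
    answer = {}
    for k in keys:
        value, ts = None, MIN_TS
        for rep in replicas:
            v, t = rep.get(k, (None, MIN_TS))
            if t > ts:
                value, ts = v, t
        answer[k] = value
    return answer
```

Since timestamps of different data centres are distinct, the maximum is unique, so the returned value is well defined.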
For a write request write(R, L) sent by agent a to data centre d we proceed analogously. In all replicas selected by a write-policy, the records with a key value in L will be updated to the new value (this may be undef to capture deletion) provided by L, and a timestamp given by the current time of d. However, the update will not be executed if the timestamp of the existing record is already newer. In addition, clocks that “are too late” will be adjusted, i.e. if the new timestamp received from the managing data centre is larger than the current time at a receiving data centre, the clock there is set to the received timestamp. Thus, we obtain the following ASM rule PerformWriteReq (again, we dispense with the handling of write-requests at nodes that are not available; for this we can assume an exception handling procedure that logs requests and executes them in the order of logging once a node has become alive again; this could give rise to a refinement, which we omit here), to be executed by any data centre upon receipt of an update request from an agent a:
PerformWriteReq =
  IF RECEIVED(write(R, L), from:a) THEN
    CONSUME(write(R, L), from:a)
    FORALL fragment indices j
      CHOOSE a set N_j of replicas WITH write_policy(N_j)
        LET t = current time of the handling data centre IN
        FORALL replicas in N_j
          FORALL k WITH L(k) defined
            IF t is larger than the timestamp stored at k THEN update the record at k to (L(k), t)
            IF t is larger than the local clock THEN adjust_clock
    SEND(acknowledge(write, R), to:a)
The clock adjustment is necessary to ensure that timestamps respect the inherent order of message passing as requested above. Write requests with old timestamps may be lost in case a value with a newer timestamp already exists. Then, depending on the read-policy, a lost update on a single replica may be enough for the value never to appear in an answer to a read-request. In the following we use the notions of complete and correct refinement as defined in [10, pp. 111ff.] (note that the notion of refinement for ASMs is more general than data refinement as discussed in [10, p. 113]; in particular, correct refinement does not imply the preservation of invariants).
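The write-side behaviour, rejecting updates whose timestamp is older than the stored one, can be sketched analogously (same toy replica model as before; names are illustrative):

```python
def perform_write(replicas, records, ts):
    """Write each (key, value) pair with timestamp ts into the selected
    replicas, unless a replica already stores a newer timestamp.
    Value None plays the role of undef and captures deletion."""
    for rep in replicas:
        for k, v in records.items():
            _, existing_ts = rep.get(k, (None, float("-inf")))
            if ts > existing_ts:
                rep[k] = (v, ts)
            # otherwise: the update is lost on this replica
    return replicas
```

A write carrying an old timestamp is silently rejected on replicas that already hold a newer value, which is precisely the lost-update scenario discussed above.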
Proposition. The refined concurrent communicating ASM with data centre agents is a complete refinement of the abstract concurrent communicating ASM with a single memory agent.
The only differences between the two communicating concurrent ASMs are the following:
Instead of having home(a) = m for a single memory agent m and all agents a as in the abstract specification, home(a) differs from agent to agent in the refinement. Nonetheless, in both cases the handling of a read or write request is done in a single step.
The rule fragment in the abstract specification
LET ans = answer(R, φ) IN SEND(ans, to:a)
dealing with a read request corresponds to a rule fragment
FORALL fragment indices j
  CHOOSE a set N_j of replicas WITH read_policy(N_j)
    LET T(k) = the time-stamped values for key k in the replicas of N_j IN
    LET t_max(k) = the largest timestamp occurring in T(k) IN
    LET ans_j = {(k, v) | φ(k) ∧ (v, t_max(k)) ∈ T(k)} IN
SEND(⋃_j ans_j, to:a)
in the rule AnswerReadReq in the refinement.
The rule fragment in the abstract specification
FORALL k WITH L(k) defined DO R(k) := L(k) SEND(acknowledge(write, R), to:a)
dealing with a write request corresponds to a rule fragment
FORALL fragment indices j
  CHOOSE a set N_j of replicas WITH write_policy(N_j)
    LET t = current time of the handling data centre IN
    FORALL replicas in N_j
      FORALL k WITH L(k) defined
        IF t is larger than the timestamp stored at k THEN update the record at k to (L(k), t)
        IF t is larger than the local clock THEN adjust_clock
SEND(acknowledge(write, R), to:a)
in the rule PerformWriteReq in the refinement.
Thus, each run of the abstract communicating concurrent ASM defines in a natural way a run of the refined communicating concurrent ASM (actually, in this case the refinement is a (1,1)-refinement). ∎
Note that without further knowledge about the read- and write-policies it is not possible to prove that the refinement is also correct.
3 Refinement Using Replication Policies
We define view compatibility, a notion of consistency that formalises the intuitive expectation of the agents sending requests that the answers in case of replication remain the same as without, because replication is merely an internal mechanism of the memory management subsystem to increase availability, which is completely hidden from the requesting agents. We then refine the ASMs for internal request handling by concrete read- and write-policies, and show that for particular combinations of read- and write-policies our abstract specification guarantees view compatibility, which further implies that the refinement is correct.
3.1 View Compatibility
Informally, view compatibility is to ensure that the system behaves in such a way that whenever an agent sends a read- or write-request, the result is the same as if the read or write had been executed in a state without replication or timestamps and without any internal processing of the request (note that view compatibility is a rather weak consistency requirement, as it only ensures that despite replication up-to-date values are read; however, as we will show later in Section 4, even this weak consistency requirement requires some form of transaction management). However, it may be possible that parallel requests are evaluated in different states.
For a formal definition of this notion we have to relate runs of the concurrent communicating ASM including the memory management subsystem with runs in which, in each state, a virtual location has only one well-defined value, instead of a value computed from the different replicas that may even originate from different states. For this we first introduce the technical notion of a flattening: we simply reduce the multiple values associated with replicas of a location to a single value.
Formally, consider runs of the concurrent ASM in which the agents’ ASMs send read and write requests that are processed by the data centre agents, which return responses.
If S_0, S_1, … is such a run, then we obtain a flattening S′_0, S′_1, … by replacing in each state all replica locations of a logical location ℓ by the single location ℓ and letting the value associated with ℓ in the considered state be one of the values stored at the replicas of ℓ in the considered state.
Obviously, a flattening is a sequence of states of the concurrent ASM without replication, but in most cases it will not be a run. So the question is under which conditions we can obtain a run. As we stated above that the system shall behave as if there were no replication, we have to ensure that for each agent the following property holds: if the agent sends a write-request that would update the value of a location to v, the effect is the same in the flattening. Analogously, if the agent sends a read-request for a location, the answer it receives is also the same as if the read were executed in the flattening.
In order to formalise this property we define agent views, which emphasise only the moves of a single agent a, while merging all other moves into activities of the environment.
Let us first consider the agent view of a concurrent run of our communicating ASMs. Let a be an arbitrary agent. Its view of the run is the subsequence of states in which a makes a move (restricted to the signature of a). Thus a’s initial state in this view is the state S_{i_0} (where i_0 depends on a) in which a performs its first step in the given run. Given any state S_i of the view, its successor state in the a-view sequence depends on the move a performs in S_i.
If a performs in S_i a Send step, a write- or read-request to home(a), it contributes to the next state an update set which includes an update of its out-mailbox, which in turn is assumed to eventually yield an update of the mailbox of home(a). But S_{i+1} is not yet the next a-view state, in which a will perform its next move. This move is determined by the following assumption:
Atomic Request/Reply Assumption for agent/db runs: If in a run an agent performs a Send step to the memory management system, then its next step in the run is the corresponding Receive step, which can be performed once the answer to the query sent by the memory management system has been Received.
By this assumption the next a-view state is determined by (what appears to a as) an environment action which enables the Receive step by inserting the reply message into a’s mailbox, thereby making the RECEIVED predicate true in S_j for some j > i.
If a performs in S_i a Receive or an internal step, then besides the mailbox update to Consume the received message it yields only updates of non-shared locations, so that its next a-view state is the result of applying these updates together with the updates other agents bring in to form S_{i+1}.
Note that by the Atomic Request/Reply Assumption any agent can make finitely many internal steps after and only after each Receive step. For the ease of exposition but without loss of generality we synchronize internal steps of an agent with the global steps other agents perform during such an internal computation segment so that the result of internal moves becomes visible in the global run view.
We now define a notion of flat agent view of a run of the concurrent ASM, including the memory management subsystem.
Take an arbitrary subsequence of an arbitrary flattening of the run (restricted to the signature of the agent a). Then it is called a flat view of agent a of the run if the following conditions hold:
Whenever a performs a request in state S_i there is some state in the flat view corresponding to S_i. If the corresponding reply is received in state S_{i′} for some i′ > i (so that a makes a Receive move in state S_{i′}), then there is also a corresponding state in the flat view. Furthermore there exists some state of the flat view between these two such that the following holds:
If the request is a write-request, then for each location with value v in this request, the value at that location in this intermediate state is v, provided there exists an agent reading the value (otherwise the update at the location will be lost; for instance, this may happen in case of outdated timestamps).
If the request is a read-request, then for each location with value v in the answer, the value at that location in this intermediate state is v.
Whenever a performs a Receive or an internal move in state S_i there is some state in the flat view corresponding to S_i, and the updates of a are reflected in its successor in the flat view.
We say that the refined system is view compatible with the concurrent ASM iff for each run of the refined system there exists a subsequence of a flattening that is a run of the concurrent ASM such that for each agent a the agent a-view of the run coincides with a flat view of the run by a.
Note that in this definition of view compatibility we relate agent views with flat agent views, and in both these views we consider the restriction to the signature of the agent a. Technically, the mailboxes associated with a belong to this signature, so every SEND and RECEIVE state (for a) and its flattening appears in the views. We can understand the agent view and the flat agent view in such a way that the environment reads the request (i.e. the message in the out-mailbox) in one state, evaluates it in a following state, and places the answer into the in-mailbox of a in a later state (as requested in the definition). This can also formally be considered as a move by a single agent representing the memory management subsystem, which executes the ASM rules that are associated with each request.
The notion of view compatibility is closely related to sequential consistency as defined by Lamport. In Lamport’s work an execution is a set of sequences of operations, one sequence per process. An operation is a pair (operation, value), where the operation is either read or write and value is the value read or written. The execution is sequentially consistent iff there exists an interleaving of the set of sequences into a single sequence of operations that is a legal execution of a single-processor system, meaning that each value read is the most recently written value. (One also needs to specify what value a read that precedes any write can obtain.) Our definition of view compatibility generalises this notion, as we always consider bulk requests and also permit parallel operations.
3.2 Specification of Replication Policies
In the specification of the ASM rules handling read and write requests by a fixed data centre we used selected sets of replicas (the relevant fragments are given by the request and the fragmentation) as well as abstract predicates read_policy and write_policy. It is allowed to specify different policies for read and write, but only one policy is used for all read requests, and only one for all write requests.
Let us now define different such policies and refine the ASMs for the data centre agents by means of these definitions. Basically, we distinguish policies that are global and those that are local, the latter ones requesting that only replicas in the handling data centre are considered for the set . Thus we can use the following definition:
Beyond this distinction, the policies differ only in the number of replicas that are to be accessed. The major global policies are All, One, Two, Three, Quorum, and Each_Quorum, while the corresponding local policies are Local_One and Local_Quorum.
- All.
As the name indicates, the predicate requires that all replicas are accessed.
- One, Two, Three.
Here, at least one, two or three replicas are to be accessed, respectively, which defines the corresponding predicates.
- Local_One.
Analogously, Local_One is defined as the conjunction of One with the restriction to replicas in the handling data centre.
- Quorum.
For a given threshold value, Quorum requires that at least that many replicas are accessed. By default a majority is used, i.e. more than half of the replicas have to be accessed. Note that All could be replaced by Quorum with the threshold set to the total number of replicas.
- Local_Quorum.
Analogously, Local_Quorum is defined as the conjunction of Quorum with the restriction to replicas in the handling data centre.
- Each_Quorum.
For this we have to consider each data centre separately; the predicate then requires a quorum of replicas in every data centre.
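Assuming each replica is tagged with its data centre, the policy predicates above can be sketched as follows (Python; `policy_satisfied`, `selected` and `home_dc` are illustrative names, not taken from the specification):

```python
from collections import Counter

def majority(n):
    return n // 2 + 1  # a quorum by default means more than half

def policy_satisfied(policy, selected, replicas, home_dc):
    """selected, replicas: lists of (replica_id, data_centre) pairs;
    home_dc is the data centre handling the request."""
    total = Counter(dc for _, dc in replicas)   # replicas per data centre
    sel = Counter(dc for _, dc in selected)     # accessed replicas per data centre
    k, n = len(selected), len(replicas)
    if policy == "All":
        return k == n                            # every replica accessed
    if policy in ("One", "Two", "Three"):
        return k >= ("One", "Two", "Three").index(policy) + 1
    if policy == "Quorum":
        return k >= majority(n)                  # global majority
    if policy == "Each_Quorum":
        return all(sel[dc] >= majority(total[dc]) for dc in total)
    if policy == "Local_One":
        return sel[home_dc] >= 1                 # locality restriction + One
    if policy == "Local_Quorum":
        return sel[home_dc] >= majority(total[home_dc])
    raise ValueError(policy)
```

With five replicas spread over two data centres, for example, Quorum is satisfied by any three replicas, while Each_Quorum additionally needs a majority within each data centre.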
3.3 Consistency Analysis
In the following we will analyse in more detail the effects of the replication policies. For this we need appropriate combinations of read- and write-policies:
If the write policy is All, then the combination with any read policy is appropriate.
If the read policy is All, then the combination with any write policy is appropriate.
If the write policy is Quorum or Each_Quorum and the read policy is Quorum or Each_Quorum with quorum sizes chosen such that every read quorum intersects every write quorum, then the combination is appropriate.
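These combination rules can be summarised in a small predicate (Python sketch; `is_appropriate`, `kr` and `kw` are illustrative names for the read and write quorum sizes, defaulting to a majority of the n replicas):

```python
def is_appropriate(write_policy, read_policy, n, kw=None, kr=None):
    """True iff the read/write policy combination guarantees that every
    read sees the latest write (quorum overlap argument)."""
    if write_policy == "All" or read_policy == "All":
        return True                      # all replicas written, or all read
    quorums = ("Quorum", "Each_Quorum")
    if write_policy in quorums and read_policy in quorums:
        kw = kw if kw is not None else n // 2 + 1   # default: majority
        kr = kr if kr is not None else n // 2 + 1
        return kr + kw > n               # read and write quorums must intersect
    return False
```

With the default majority quorums the overlap condition holds automatically, since two majorities of the same replica set always share a replica.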
Consider the concurrent communicating ASM with a memory management subsystem using data centres as specified in the previous section. If the combination of the read and write policies is appropriate, then this system is view compatible with the corresponding abstract concurrent ASM.
We exploit that in our abstract specification of the handling of write and read requests all selected replicas are written and read in parallel. Thus, if the write policy is All, then in each state the same value is stored in all replicas, so the proposition is obvious for this policy.
If the write policy is Quorum or Each_Quorum, then for each location the number of replicas considered when determining the value with the largest timestamp is at least a majority of the total number of copies. For this it is essential that updates with a smaller timestamp are rejected if a value with a larger timestamp already exists.
Consequently, each read access with one of the policies Quorum, Each_Quorum (with intersecting quorum sizes) or All will read this value at least once and return it, as the value with the largest timestamp is used for the result. That is, in every state the value with the largest timestamp for each location uniquely determines the run, which defines the equivalent concurrent run.∎
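The timestamp argument in the proof can be illustrated by a minimal sketch (Python; `Replica`, `quorum_write` and `quorum_read` are illustrative names): writes carry timestamps, stale updates are rejected, and a read returns the value with the largest timestamp among the replicas it contacts. As long as read and write quorums intersect, the read quorum contains at least one replica holding the latest value.

```python
class Replica:
    def __init__(self):
        self.store = {}  # location -> (timestamp, value)

    def write(self, loc, ts, val):
        # reject updates with a smaller timestamp, as required in the proof
        cur = self.store.get(loc)
        if cur is None or ts > cur[0]:
            self.store[loc] = (ts, val)

    def read(self, loc):
        return self.store.get(loc, (0, None))

def quorum_write(quorum, loc, ts, val):
    for r in quorum:          # all selected replicas written in parallel
        r.write(loc, ts, val)

def quorum_read(quorum, loc):
    # the value with the largest timestamp determines the result
    return max((r.read(loc) for r in quorum), key=lambda tv: tv[0])[1]
```

With five replicas, writing "a" at timestamp 1 to replicas {0,1,2} and then "b" at timestamp 2 to {2,3,4}, a read from {0,1,2} already returns "b": the overlapping replica 2 holds the newest value, and a later stale write at timestamp 1 cannot overwrite it.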
If the system is view compatible, then it is also a correct refinement of the abstract concurrent ASM.
Let a run of the refined system be given. According to the definition of view compatibility there exists a flattened subrun that is also a run of the abstract concurrent ASM such that for each agent the projections of the two runs coincide.∎
A stronger notion of consistency would be global consistency, for which we require that any run of the ASM with the memory management behaves in the same way as a concurrent ASM, i.e. in contrast to view compatibility we further require that parallel read and write requests can be seen as referring to the same state. Formally, this requires strengthening the requirements for a flat view such that for parallel requests the chosen index is always the same. The specification must then be refined so that the data centre agent also handles all parallel requests in parallel.
4 Refinement with Internal Communication
We will now address a refinement of our specification of the memory management subsystem. So far the home data centre agent associated with an agent manages in one step the retrieval of data from, or the update of data in, sufficiently many replicas as specified by the read and write policies, respectively. In doing so we abstracted from any internal communication between data centres.
However, in reality data centres refer to different physical machines, so the gist of the refinement is to treat the handling of a request as a combination of direct access to local nodes, remote access via messages to the other relevant data centres, and collection and processing of return messages until the requirements for the read- or write-policy are fulfilled. That is, the validation of the policy accompanies the preparation of a response message and is no longer under control of the home agent.
We first show again that the refinement is a complete ASM refinement. Then we investigate again view compatibility and show that it is not preserved by the refinement. In fact, view compatibility is linked to the refinement also being correct, i.e. for each run of the concrete concurrent system we find a corresponding run of the abstract concurrent system. In general, however, this is not the case. We can even show that view compatibility implies view serialisability, which means that consistency (if desired) can only be guaranteed if transactions (at least for single requests) are used. On the other hand, transactions together with appropriate read and write policies trivially imply view compatibility.
4.1 Request Handling with Communicating Data Centres
In Section 2 we specified how a data centre (treated as an agent) handles a request received from an agent. We distinguished read requests subject to a read policy and write requests subject to a write policy. Now, we first specify an abstract rule which manages external requests, i.e. requests coming from an agent and received by any data centre, where the request is one of these read or write requests. Essentially, an external request is forwarded as an internal request to all other data centres, where it is handled and answered locally (see the definition of HandleLocally below), whereas collecting the partial answers and sending the overall answer to the external agent is delegated to a new agent. SELF denotes the data centre agent which executes the rule.
DelegateExternalReq =
  IF RECEIVED() THEN
    LET … IN
    LET … = new(Agent) IN
      Initialize()
      HandleLocally()
      ForwardToOthers()
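The control flow behind DelegateExternalReq can be sketched as follows (Python; `Collector`, `DataCentre`, `handle_locally` and `delegate_external_request` are illustrative names, and the synchronous loop stands in for asynchronous message passing): the receiving data centre creates a fresh collector agent, answers the request locally, and forwards it to all other data centres, while the collector gathers answers until the policy's requirement is met.

```python
class Collector:
    """Stand-in for the fresh agent created by new(Agent): it collects
    per-data-centre answers and decides when enough have arrived."""
    def __init__(self, needed):
        self.needed = needed   # number of answers required by the policy
        self.answers = []

    def receive(self, answer):
        self.answers.append(answer)

    def done(self):
        return len(self.answers) >= self.needed

class DataCentre:
    def __init__(self, name, store):
        self.name, self.store = name, store

    def handle_locally(self, loc):
        # HandleLocally: answer a read request from the local replicas
        return (self.name, self.store.get(loc))

def delegate_external_request(loc, home, others, needed):
    collector = Collector(needed)                  # LET … = new(Agent)
    collector.receive(home.handle_locally(loc))    # HandleLocally
    for dc in others:                              # ForwardToOthers; in reality
        collector.receive(dc.handle_locally(loc))  # the replies arrive asynchronously
    return collector
```

The design point the sketch makes explicit is that the policy check lives in the collector, not in the home agent: the home data centre only fires off the request and moves on.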