Specifying Transaction Control to Serialize Concurrent Program Executions

06/06/2017 ∙ by Egon Börger, et al. ∙ Software Competence Center Hagenberg

We define a programming language independent transaction controller and an operator which, when applied to concurrent programs with shared locations, turns their behavior, with respect to some abstract termination criterion, into a transactional behavior. We prove the correctness property that concurrent runs under the transaction controller are serialisable. We specify the transaction controller TaCtl and the operator TA in terms of Abstract State Machines. This makes TaCtl applicable to a wide range of programs and in particular provides the possibility to use it as a plug-in when specifying concurrent system components in terms of Abstract State Machines.


1 Introduction

This paper is about the use of transactions as a common means to control concurrent access by programs to shared locations, preventing the values stored at these locations from being changed in an uncontrolled manner. A transaction controller interacts with concurrently running programs (read: sequential components of an asynchronous system) to control whether access to a shared location can be granted or not, thus ensuring a certain form of consistency for these locations. A commonly accepted consistency criterion is that the joint behavior of all transactions (read: programs running under transactional control) with respect to the shared locations is equivalent to a serial execution of those programs. Serialisability guarantees that each transaction can be specified independently from the transaction controller, as if it had exclusive access to the shared locations.

It is expensive and cumbersome to specify transactional behavior and prove its correctness again and again for the components of each of the many concurrent systems. Our goal is to define once and for all an abstract (i.e. programming language independent) transaction controller TaCtl which can simply be “plugged in” to turn the behavior of concurrent programs M (read: components of any given asynchronous system) into a transactional one. This also involves defining an operator TA which forces the programs M to listen to the controller TaCtl when trying to access shared locations.

For the sake of generality we define the operator and the controller in terms of Abstract State Machines (ASMs), which can be read and understood as pseudo-code, so that TaCtl and the operator TA can be applied to code written in any programming language (to be precise: any language whose programs come with a notion of single step, the level where our controller imposes shared memory access constraints to guarantee transactional code behavior). On the other hand, the precise semantics underlying ASMs (for which we refer the reader to [5]) allows us to mathematically prove the correctness of our controller and operator.

We concentrate here on transaction controllers that employ locking strategies such as the common two-phase locking protocol (2PL). That is, each transaction first has to acquire a (read- or write-) lock for a shared location, before the access is granted. Locks are released after the transaction has successfully committed and no more access to the shared locations is necessary. There are of course other approaches to transaction handling, see e.g. [6, 14, 15, 17] and the extensive literature there covering classical transaction control for flat transactions, timestamp-based, optimistic and hybrid transaction control protocols, as well as non-flat transaction models such as sagas and multi-level transactions.
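To make the two-phase locking discipline concrete, the following is a minimal, illustrative lock table in Python. It is our own sketch, not the paper's ASM specification; the names `LockTable`, `acquire` and `release_all` are assumptions. It grants shared R-locks, exclusive W-locks, and (in the strict 2PL style described above) releases all locks of a transaction together at commit.

```python
# Minimal sketch of a strict two-phase locking table (illustrative only;
# the names LockTable, acquire, release_all are ours, not the paper's).
class LockTable:
    def __init__(self):
        self.read_locks = {}   # location -> set of transaction ids
        self.write_locks = {}  # location -> transaction id

    def acquire(self, tx, loc, mode):
        """Try to grant tx a lock on loc ('R' or 'W'); return True on success."""
        w = self.write_locks.get(loc)
        if mode == 'R':
            if w is not None and w != tx:
                return False            # refused: another tx holds a W-lock
            self.read_locks.setdefault(loc, set()).add(tx)
            return True
        # mode == 'W': refused if any *other* tx holds any lock on loc
        readers = self.read_locks.get(loc, set()) - {tx}
        if readers or (w is not None and w != tx):
            return False
        self.write_locks[loc] = tx
        return True

    def release_all(self, tx):
        """Strict 2PL: all locks of tx are released together at commit."""
        for readers in self.read_locks.values():
            readers.discard(tx)
        for loc in [l for l, t in self.write_locks.items() if t == tx]:
            del self.write_locks[loc]
```

Note that two transactions may simultaneously hold an R-lock on the same location; only writes are exclusive, which matches the lock refusal strategy specified in Sect. 3.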

We define TaCtl and the operator TA in Sect. 2 and the TaCtl components in Sect. 3. In Sect. 4 we prove the correctness of these definitions.

2 The Transaction Operator TA(M, C)

As explained above, a transaction controller performs the lock handling, the deadlock detection and handling, the recovery mechanism (for partial recovery) and the commit of single machines. Thus we define it as consisting of four components specified in Sect. 3.

  • LockHandler

  • DeadlockHandler

  • Recovery

  • Commit

The operator TA transforms the components M of any concurrent system (asynchronous ASM) into components TA(M, TaCtl) of a concurrent system where each component runs as a transaction under the control of TaCtl:

TaCtl keeps a dynamic set of those machines M whose runs it currently has to supervise to perform in a transactional manner, until M has terminated its transactional behavior (so that TaCtl can Commit it). (Footnote 1: In this paper we deliberately keep the termination criterion abstract so that it can be refined in different ways for different transaction instances.) To turn the behavior of a machine M into a transactional one, first of all M has to register itself with the controller TaCtl, read: to be inserted into the set of currently to be handled transactions. To Undo, as part of a recovery, some steps M has already made during the given transactional run segment, a last-in first-out queue is needed which keeps track of the states the transactional run goes through; when M enters the set, this queue has to be initialized (to the empty queue).

The crucial transactional feature is that each non-private (i.e. shared or monitored or output) location l a machine M needs to read or write for performing a step has to be locked for this purpose; M tries to obtain such locks by calling the LockHandler. In case no new locks are needed by M in its current state, or the needed locks can be granted by the LockHandler, M performs its next step; in addition, for a possible future recovery, the machine has to Record in its queue the current values of those locations which are (possibly over-)written by this M-step, together with the obtained locks. Then M continues its transactional behavior until it is terminated. In case the needed locks are refused, namely because another machine N holds a lock for some needed location l (a write lock, or, in case M wants a W(rite)-lock, also a read lock), M has to wait for N; in fact it continues its transactional behavior by calling the LockHandler again for the needed locks, until the needed locked locations are unlocked when N's transactional behavior is Committed, whereafter a new request for these locks this time may be granted to M. (Footnote 2: As suggested by a reviewer, a refinement (in fact a desirable optimization) consists in replacing such a waiting cycle by suspending M until the needed locks are released. Such a refinement can be obtained in various ways, a simple one consisting in letting M simply stay in its waiting state until the locks are released, and refining the LockHandler to only choose requests it can grant, doing nothing otherwise. See Sect. 3.)
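The control cycle just described (request locks, record old values for Undo, then step, or else wait and retry) can be sketched as driver code. Everything here, the function name, the machine/lock-handler interfaces, and the state representation, is our illustrative assumption, not the paper's ASM text; a real controller would also release or track partially granted locks on refusal.

```python
# Illustrative sketch of one TA(M) control cycle: the machine asks the lock
# handler for the locks its next step needs, records undo information, and
# only then performs the step. All names and interfaces are ours.
def transactional_step(machine, lock_handler, undo_stack, state):
    needed = machine.locks_needed(state)          # [(location, 'R'/'W'), ...]
    granted = all(lock_handler.acquire(machine.tx_id, loc, mode)
                  for loc, mode in needed)
    if not granted:
        return 'waiting'                          # retry in a later cycle
    # Record the current values of the locations about to be overwritten,
    # together with the newly obtained locks (for a possible later Undo).
    written = machine.write_locations(state)
    undo_stack.append(({loc: state.get(loc) for loc in written},
                       [loc for loc, _ in needed]))
    machine.step(state)                           # perform the proper M-step
    return 'stepped'
```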

As a consequence deadlocks may occur, namely when a cycle occurs in the transitive closure of the waiting relation between machines. To resolve such deadlocks the DeadlockHandler component of TaCtl chooses some machines as victims for a recovery. (Footnote 3: To simplify the serializability proof in Sect. 4, and without loss of generality, we define a reaction of machines M to their victimization only when they are waiting for locks. This is to guarantee that no locks are granted to a machine as long as it is being recovered.) After a victimized machine M has been recovered by the Recovery component of TaCtl, so that M can exit its victim state, it continues its transactional behavior.

This explains the following definition of TA(M, C) as a control state ASM, i.e. an ASM with a top level Finite State Machine control structure. We formulate it by the flowchart diagram of Fig. 1, which has a precise control state ASM semantics (see the definition in [5, Ch.2.2.6]). The components for the recovery feature are highlighted in the flowchart by a colouring that differs from that of the other components. The macros which appear in Fig. 1 and the components of TaCtl are defined below.

Figure 1: TA(M,C)

The predicate expressing that M needs new locks holds if in the current state of M at least one of two cases applies (Footnote 4: See [5, Ch.2.2.3] for the classification of locations and functions.): either M, to perform its step in this state, reads some shared or monitored location which is not yet R-locked by M, or M writes some shared or output location which is not yet W-locked by M. A location can be locked for reading (R-locked) or for writing (W-locked). Formally:

The recorded values are the current values of those shared or output locations which are written by M in its next step. To Record the set of these values together with the obtained locks means to append the pair of these two sets to the queue of M, from where upon recovery the values and the locks can be retrieved.

To CallLockHandler for the locks requested by M in its current state means to insert them into the LockHandler's set of to-be-handled lock requests. Similarly we let CallCommit(M) stand for insertion of M into a set of commit requests of the Commit component.

3 The Transaction Controller Components

A CallCommit(M) by machine M enables the Commit component. Using the nondeterministic choose operator we leave the order in which the commit requests are handled refinable by different instantiations of TaCtl.

Committing M means to Unlock all locations l that are locked by M. Note that each lock obtained by M remains with M until the end of M's transactional behavior. Since M performs a CallCommit(M) when it has terminated its transactional computation, nothing more has to be done to Commit M besides deleting M from the sets of commit requests and currently to be handled transactions. (Footnote 8: We omit clearing the queue since it is initialized when M is inserted into the set of transactional machines.)

Note that the locations recording the lock and commit requests are shared by the Commit, LockHandler and Recovery components, but these components never have the same M simultaneously in their request resp. victim set, since when machine M has performed a CallCommit(M), it has terminated its transactional computation and does not participate any more in any lock request or victimization.

As for Commit, also for the LockHandler we use the choose operator to leave the order in which the lock requests are handled refinable by different instantiations of TaCtl.

The strategy we adopt for lock handling is to refuse all locks for the locations requested by M if at least one of the following two cases happens:

  • some requested location is W-locked by another transactional machine N,

  • some requested location that M wants to write is R- or W-locked by another transactional machine N.

This definition implies that multiple transactions may simultaneously hold a read lock on some location. It is specified below by a corresponding refusal predicate.

To RefuseRequestedLocks it suffices to set the communication interface of M accordingly; this makes M wait for each requested location l that is W-locked, and for each location M wants to write that is locked, by some other transactional component machine N.

A deadlock originates if machines wait for each other in a cycle, otherwise stated if for some (not yet victimized) machine M the pair (M, M) is in the transitive (not reflexive) closure of the waiting relation. In this case the DeadlockHandler selects for recovery a (typically minimal) subset of transactions: they are victimized, in which mode (control state) they are backtracked until they are recovered. The selection criteria are intrinsically specific for particular transaction controllers, driving a usually rather complex selection algorithm in terms of number of conflict partners, priorities, waiting time, etc. In this paper we leave their specification for TaCtl abstract (read: refinable in different directions) by using the choose operator.
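The cycle condition on the waiting relation can be checked directly by computing reachability. The sketch below is our own illustration (the representation of the wait-for relation as a dict of sets is an assumption); it returns every machine that lies on a wait-for cycle, from which a victim selection strategy would then pick a subset.

```python
# Sketch of deadlock detection: a deadlock exists iff some machine M has
# (M, M) in the transitive closure of the wait-for relation.
def deadlocked(wait_for):
    """wait_for maps each machine to the set of machines it waits for.
    Returns the set of machines lying on a wait-for cycle."""
    on_cycle = set()
    for start in wait_for:
        seen, frontier = set(), set(wait_for.get(start, ()))
        while frontier:                  # machines reachable from start
            n = frontier.pop()
            if n in seen:
                continue
            seen.add(n)
            frontier |= set(wait_for.get(n, ()))
        if start in seen:                # start reaches itself: on a cycle
            on_cycle.add(start)
    return on_cycle
```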

Also for the Recovery component we use the choose operator to leave the order in which the victims are chosen for recovery refinable by different instantiations of TaCtl. To recover a machine M, it is backtracked by Undo steps until it is no longer a victim, in which case it is deleted from the set of victims, so that by definition it is recovered. This happens at the latest when M's queue has become empty.
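The backtracking just described can be sketched as follows; this is our own illustration under assumed names and data shapes, not the paper's Recovery ASM. Each record popped from the LIFO queue restores the previously Recorded values and releases the locks obtained in the undone step.

```python
# Sketch of partial recovery: pop (old-values, locks) records from the
# machine's LIFO undo queue, restoring values and releasing locks, until
# the machine is no longer a victim or the queue is empty. Names are ours.
def recover(state, undo_stack, locks_held, still_victim):
    while undo_stack and still_victim():
        old_values, locks = undo_stack.pop()     # last-in, first-out
        state.update(old_values)                 # Undo: restore prior values
        for loc in locks:
            locks_held.discard(loc)              # release the recorded locks
```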

Note that in our description of the DeadlockHandler and the (partial) Recovery we deliberately left the strategies for victim selection and Undo abstract, leaving fairness considerations to be discussed elsewhere. It is clear that if always the same victim is selected for partial recovery, the same deadlocks may be created again and again. However, it is well known that fairness can be achieved by choosing an appropriate victim selection strategy.

4 Correctness Theorem

In this section we show the desired correctness property: if all monitored or shared locations of any component machine are output or controlled locations of some other component machine, and all output locations of any component machine are monitored or shared locations of some other component machine (closed system assumption) (Footnote 9: This assumption means that the environment is assumed to be one of the component machines.), then each run of the controlled system is equivalent to a serialization of the terminating transactional runs, namely the run of the first committing machine followed by the run of the second committing machine, etc., where the i-th machine is the i-th one to perform a commit in the run. To simplify the exposition (i.e. the formulation of statement and proof of the theorem) we only consider machine steps which take place under transaction control; in other words, we abstract from any step M makes before being Inserted into or after being Deleted from the set of machines which currently run under the control of TaCtl.

First of all we have to make precise what a serial multi-agent ASM run is and what equivalence of runs means in the general multi-agent ASM framework.

4.0.1 Definition of run equivalence.

Let S0, S1, … be a (finite or infinite) run of the controlled system. In general we may assume that TaCtl runs forever, whereas each machine running as a transaction will be terminated at some time; at least after its commit it will only change values of non-shared and non-output locations. (Footnote 10: It is possible that one ASM enters several times as a transaction controlled by TaCtl. However, in this case each of these registrations is counted as a separate transaction, i.e. as a different ASM.) For each n let Δn denote the unique, consistent update set defining the transition from Sn to Sn+1. By definition this update set is the union of the update sets of the agents executing the component machines resp. TaCtl:

Δn(M) contains the updates defined by the ASM M in state Sn (Footnote 11: We use this shorthand notation, i.e. we speak about steps and updates of M also when they really are done by TA(M, TaCtl). Mainly this concerns transitions between the control states (see Fig. 1) which are performed during the run of M under the control of the transaction controller TaCtl. When we want to name an original update of M (not one of the control-state updates or of the Record component) we call it a proper M-update.), and Δn(TaCtl) contains the updates by the transaction controller in this state. The sequence of update sets Δ0, Δ1, Δ2, … will be called the schedule of the run.

To generalise the equivalence of transaction schedules known from database systems [6, p.621ff.] to transactional ASM runs, we now define two cleansing operations for ASM schedules. By the first one (i) we eliminate all computation segments (in particular unsuccessful-lock-request segments) which are without proper M-updates; by the second one (ii) we eliminate all M-steps which are undone by a later step of the Recovery component:

  1. Delete from the schedule each Δn(M) for which one of the following two properties holds:

    • Δn(M) is empty (M contributes no update to Δn),

    • Δn(M) belongs to a step of an M-computation segment in which M requests new locks and in its next step moves back to its previous control state, because the LockHandler refused the new locks requested by M. (Footnote 12: Note that by eliminating this step also the corresponding LockHandler step disappears in the run.)

    In such computation steps M makes no proper update.

  2. Repeat choosing from the schedule a pair Δm(M), Δn(M) with m < n which belong to the first resp. second of two consecutive M-Recovery steps defined as follows:

    • a step (say M-RecoveryEntry) whereby M in state Sm moves out of its normal control state into recovery mode, because it became a victim,

    • the next M-step (say M-RecoveryExit) whereby M in state Sn moves back to its normal control state because it has been recovered.

    In these two M-Recovery steps M makes no proper update. Delete:

    1. Δm(M) and Δn(M),

    2. the victimization update from the corresponding DeadlockHandler update set which in state Sm triggered the M-RecoveryEntry,

    3. the M-updates in any update set between the considered M-RecoveryEntry and M-RecoveryExit steps,

    4. each Δk(M) belonging to the M-computation segment which contains the proper M-step that is UNDOne by the considered Recovery step; besides control state and Record updates these contain the updates whose corresponding Undo updates are performed during the recovery,

    5. the updates corresponding to M's CallLockHandler step (if any: in case new locks are needed for the proper M-step) and the corresponding LockHandler updates.

The sequence of update sets resulting from the application of the two cleansing operations as long as possible (note that confluence is obvious, so the sequence is uniquely defined) will be called the cleansed schedule of M (for the given run).
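The effect of the cleansing operations can be illustrated on a toy representation of one machine's schedule; this sketch (list of dicts, `undone_indices` parameter) is our own simplification, not the paper's definition. Empty update sets and update sets whose steps were later undone by Recovery are both dropped.

```python
# Sketch of schedule cleansing for one machine m: drop every update set
# that contributes no proper m-update (empty sets, e.g. from refused lock
# requests) and every update set whose step was later undone by Recovery.
# The representation (dicts location -> value) is our own.
def cleanse(schedule, undone_indices=()):
    undone = set(undone_indices)
    return [ups for i, ups in enumerate(schedule)
            if ups and i not in undone]
```

Applying `cleanse` to each machine's schedule yields the cleansed schedules compared in the run-equivalence definition below.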

Before defining the equivalence of transactional ASM runs we remark that the controlled system indeed has several runs, even for the same initial state S0. This is due to the fact that a lot of non-determinism is involved in the definition of this ASM. First, the submachines of TaCtl are non-deterministic:

  • In case several machines request conflicting locks at the same time, the LockHandler can only grant the requested locks for one of these machines.

  • Commit requests are executed in random order by the Commit submachine.

  • The submachine DeadlockHandler chooses a set of victims, and this selection has been deliberately left abstract.

  • The Recovery submachine chooses in each step a victim M, for which the last step will be undone by restoring previous values at updated locations and releasing corresponding locks.

Second, the specification of TA deliberately leaves open when a machine will be started, i.e., when it registers as a transaction to be controlled by TaCtl. This is in line with the common view that transactions can register at any time with the transaction controller TaCtl and will remain under its control until they commit.

Definition 1

Two runs of the controlled system are equivalent iff for each component machine M the cleansed schedules for the two runs are the same, and the read locations and the values read by M in the two runs are the same.

That is, we consider runs to be equivalent if all transactions read the same locations, see there the same values, and perform the same updates in the same order, disregarding waiting times and updates that are undone.

4.0.2 Definition of serializability.

Next we have to clarify our generalised notion of a serial run, for which we concentrate on committed transactions; transactions that have not yet committed can still undo their updates, so they must be left out of consideration. (Footnote 13: Alternatively, we could concentrate on complete, infinite runs, in which only committed transactions occur, as eventually every transaction will commit, provided that fairness can be achieved.) We need a definition of the read and write locations of M in a state S, as used in the definitions above.

The definition of the write locations depends on the locking level: whether locks are provided for variables, pages, blocks, etc. To provide a definite definition, in this paper we give it at the level of abstraction of the locations of the underlying class of component machines (ASMs). Refining this definition (and that of the read locations) appropriately for other locking levels does not invalidate the main result of this paper.

We define the read locations of M in state S as the read locations of the defining rule of the ASM M, and analogously for the write locations. Then we use structural induction according to the definition of ASM rules in [5, Table 2.2]. As an auxiliary concept we need to define inductively the read and write locations of terms and formulae. The definitions use an interpretation ζ of free variables which we suppress notationally (unless otherwise stated) and assume to be given with (as environment of) the state S. This allows us to write the read and write locations of a term as depending on S alone, instead of on S and ζ.

4.0.3 Read/Write Locations of Terms and Formulae.

For state S let ζ be the given interpretation of the variables which may occur freely (in given terms or formulae). We write the evaluation of t (a term or a formula) in state S always under this given interpretation ζ of free variables.

Note that logical variables are not locations: they cannot be written, and their values are not stored in a location but in the given interpretation ζ, from where they can be retrieved.

We define the write locations of every formula to be empty, because formulae are not locations one could write into. The read locations of atomic formulae have to be defined as for terms, with a predicate symbol playing the same role as a function symbol. For propositional formulae one reads the locations of their subformulae. In the inductive step for quantified formulae, the quantified variable ranges over the superuniverse of S minus the Reserve set [5, Ch.2.4.4], and one uses the extension (or modification) of ζ where the variable is interpreted by a domain element d.

Note that the values of the logical variables are not read from a location but from the modified state environment function ζ extended by the binding of the quantified variable.

4.0.4 Read/Write Locations of ASM Rules.

In the following cases the same scheme applies to read and write locations: (Footnote 14: In the following definitions, the update set meant is the one produced by the given rule in state S under ζ.)