1 Introduction
Concurrent separation logic (CSL) is an extension of Reynolds' separation logic [12] formulated by O'Hearn [10] to establish the correctness of concurrent imperative programs with shared memory and locks. This specification logic enables one to establish the good behavior of these programs in an elegant and modular way, thanks to the frame rule of separation logic. A sequent of concurrent separation logic
consists of a Hoare triple $\{P\}\,C\,\{Q\}$ together with a context $\Gamma$ which declares a number of resource variables (or mutexes) together with the CSL formula which they satisfy as invariant. The validity of the program logic relies on a soundness theorem, which states that the existence of a derivation tree in concurrent separation logic
ensures (1) that the concurrent program will not produce any race condition at execution time, and (2) that the program will transform every initial state satisfying the precondition $P$ into a state satisfying the postcondition $Q$ when it terminates, as long as each resource allocated in memory satisfies its CSL invariant declared in the context $\Gamma$. The soundness of the logic was established by Brookes in his seminal papers on the trace semantics of concurrent separation logic [5, 6]. His soundness proof was the object of great attention in the community, and it was revisited in a number of different ways, either semantic [13], syntactic [2] or axiomatic [7], and formalised in proof assistants. One main technical challenge in all these proofs of soundness is to establish the validity of the concurrent rule:
$$\frac{\Gamma \vdash \{P_1\}\; C_1 \;\{Q_1\} \qquad \Gamma \vdash \{P_2\}\; C_2 \;\{Q_2\}}{\Gamma \vdash \{P_1 * P_2\}\; C_1 \parallel C_2 \;\{Q_1 * Q_2\}}\;\textsc{(Par)}$$
and of the frame rule:
$$\frac{\Gamma \vdash \{P\}\; C \;\{Q\}}{\Gamma \vdash \{P * R\}\; C \;\{Q * R\}}\;\textsc{(Frame)}$$
In this paper, we establish the validity of these two rules (and of CSL at large) based on a new approach inspired by game semantics, which relies on the observation that the derivation tree of CSL defines a winning strategy in a specification game. As we will see, the specification game itself is derived from the execution of the code and its interaction with the environment (called the frame) using locks on the shared memory. The specification game expresses the usual rely-and-guarantee conditions as winning conditions in an interactive game played between Eve (for the code) and Adam (for the frame).
In the semantic proofs of soundness, two notions of “state” are usually considered, besides the basic notion of memory state which describes the state of the variables and of the heap: (1) the machine states which are used to describe the execution of the code, and in particular include information about the status of the locks, and (2) the logical states
which include permissions and other information invisible at the execution level, but necessary to specify the states in the logic. In particular, the tensor product
of separation logic requires information on the permissions, and it is thus defined on logical states, not on machine states. The starting point of the paper is the observation that there exists a third notion of state, which we call separated state, implicitly at work in all the semantic proofs of soundness. A separated state describes which part of the global (logical) state of the machine is handled by each component interacting in the course of the execution. It is defined as a triple consisting of
the logical state of the code,

the logical state of the frame,

a function which tells for every resource variable whether it is locked and owned by the code, locked and owned by the frame, or available together with its current logical state.
This leads us to a “span”

$$\textit{machine states} \;\longleftarrow\; \textit{separated states} \;\longrightarrow\; \textit{logical states} \qquad (1)$$
where the two notions of machine state and of logical state are “refined” by the notion of separated state, which conveys information about locks (as machine states) and about permissions (as logical states). Namely, every separated state
refines the logical state defined by the separation tensor product

$$\sigma_C \;*\; \sigma_F \;*\; \mathop{\circledast}_{r \,\in\, \mathrm{Avail}(\mu)} \mu(r) \qquad (2)$$

(writing $\sigma_C$ and $\sigma_F$ for the logical states of the code and of the frame, and $\mu$ for the lock function) where $\mathrm{Avail}(\mu)$ denotes the set of resources available in $\mu$, in the sense that $\mu(r)$ is a logical state. Similarly, every separated state refines a machine state defined as the memory state underlying the logical state (2) just constructed, plus the set of locked resources, see §8 for details. In the same way as the notion of logical state is necessary to define the tensor product of separation logic, and thus to specify the states, the shift from machine states to separated states is necessary to specify the code, and the way it interacts with its environment and with its resources. Our point here is that the formulas $P$ and $Q$ of separation logic in a Hoare triple $\{P\}\,C\,\{Q\}$ do not specify the logical state of the machine itself, but the fragment of this logical state owned by the code at the beginning and at the end of the execution. The notion of separated state is thus at the very heart of the concept of Hoare triple in separation logic.
We proceed along the following track in the paper. After discussing the related work, we formulate the two notions of machine states and of machine instructions in §3. This enables us to define the notion of execution traces on machine states in §4 and a number of algebraic operations on them. The trace semantics of concurrent programs, and their interpretation as transition systems, is then formulated in §5 and §6. Once the notion of machine state has been used to describe the trace semantics of the language, we move to the logical side of the span, and formulate the notions of logical state in §7 and the notion of separated state in §8. In §10, we explain how to associate to every execution trace a specification game played on the paths of the graph of separated states, which is defined in §9. The moves of those games express the ownership discipline enforced by separation logic, and in particular the discipline associated to the locks in concurrent separation logic. Finally, we show in §11 that CSL is sound by proving that every derivation tree of the logic defines a strategy, which lifts each step of the Code of an execution trace into the graph of separated states.
2 Related Work
Several proofs of soundness have already been given for concurrent separation logic. The first proof of correctness was designed by Brookes in [5, 6] using semantic ideas. In his proof, every program is interpreted as a set of “action traces”, defined as finite or infinite sequences of “actions” such as reads, writes, allocations, and lock acquisitions and releases.
An interesting feature of the model is that these action traces do not mention (at least explicitly) the machine states produced by the Code at execution time. The environment is taken into account through the existence, in the model, of traces which are not sequentially consistent, for instance a trace in which the value read for a variable differs from the value just written to it.
in the model. The idea is that the Environment presumably changed the value of the variable between the two actions of the Code. Separation in the logic enables one to decompose actions traces into local computations, in order to reflect the program’s subjective view of the execution.
Vafeiadis gave another proof of correctness [13] based on more directly operational intuitions. In his proof, the Code is interpreted as a transition system whose vertices are pairs consisting of the Code and of the state of the memory, and whose edges are execution steps. The core of the soundness proof is that each step of the execution preserves a decomposition of the heap into three parts, which correspond respectively to the Code, the resources, and the Frame. The proof is done by induction on the derivation tree establishing the triple in concurrent separation logic. The idea of using separated states thus comes from Vafeiadis' proof, which is the closest to ours. One difference, however, besides the game-theoretic point of view we develop, is that we have a more intensional description of separated states, provided by the function which tracks the state of each of the available locks.
In contrast to the semantic proofs mentioned above, Balabonski, Pottier and Protzenko [2] developed a purely syntactic proof of correctness for Mezzo, a functional language equipped with a type-and-capability system based on concurrent separation logic. The soundness of the logic follows in their approach from a progress and a preservation theorem on the type system of Mezzo.
Our focus in this work is to develop a game-theoretic approach to concurrent separation logic. For that reason, we prefer to keep the logic as well as the concurrent language fairly simple and concrete. In particular, we do not consider more recent, sophisticated and axiomatic versions of the logic, like Iris [8, 9].
3 Machine states and machine instructions
The purpose of this section is to introduce the notions of machine state and of machine instruction which will be used all along the paper. We suppose given countable sets of variable names, of values, of memory locations, and of resources. In practice, the set of memory locations is included in the set of values.
Definition 1 (Memory state)
A memory state is a pair $(s, h)$ of partial functions with finite domains, called the stack and the heap of the memory state: the stack $s$ assigns values to variable names, and the heap $h$ assigns values to memory locations. The set of memory states is denoted $\mathsf{Mem}$. The domains of the partial functions $s$ and $h$ are noted $\mathrm{dom}(s)$ and $\mathrm{dom}(h)$ respectively, and we write $\mathrm{dom}(s, h)$ for their disjoint union.
Definition 2 (Machine state)
A machine state is a pair $(\mu, L)$ consisting of a memory state $\mu$ and of a subset $L$ of resources, called the lock state, which describes the subset of locked resources. The set of machine states is denoted $\mathsf{MState}$.
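For readers who prefer an executable intuition, the two definitions above can be sketched in Python. The concrete carriers (strings for variables and resources, integers for values and locations) and all names below are our own illustration, not part of the formal development.

```python
from dataclasses import dataclass
from typing import FrozenSet, Mapping


@dataclass(frozen=True)
class MemoryState:
    """A memory state (s, h): a stack and a heap with finite domains."""
    stack: tuple  # finite partial function Var -> Val, as sorted (name, value) pairs
    heap: tuple   # finite partial function Loc -> Val, as sorted (location, value) pairs

    @staticmethod
    def make(stack: Mapping[str, int], heap: Mapping[int, int]) -> "MemoryState":
        return MemoryState(tuple(sorted(stack.items())),
                           tuple(sorted(heap.items())))


@dataclass(frozen=True)
class MachineState:
    """A machine state: a memory state plus the lock state."""
    memory: MemoryState
    locked: FrozenSet[str]  # the subset of currently locked resources


mu = MemoryState.make({"x": 3}, {100: 7})
s = MachineState(mu, frozenset({"r"}))
```

Freezing the dataclasses makes states hashable, which is convenient later when states become vertices of graphs.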
A machine step is defined as a labelled transition between machine states, which can be of two different kinds:

$$s \xrightarrow{\;m\;} s' \qquad\qquad s \xrightarrow{\;m\;} \mathtt{Error}$$

depending on whether the instruction $m$ has been executed successfully (on the left) or has produced a runtime error (on the right). We write $s \xrightarrow{\;m\;}$ when we do not want to specify whether the instruction has produced a runtime error. The machine instructions $m$ which label the machine steps are defined below:
$$m \;::=\; x := E \;\mid\; x := [E] \;\mid\; [E] := E' \;\mid\; x := \mathtt{alloc}(E) \;\mid\; \mathtt{dispose}(E) \;\mid\; P(r) \;\mid\; V(r) \;\mid\; \mathtt{nop}$$

where $x$ is a variable, $r$ is a resource variable, and $E, E'$ are arithmetic expressions with variables. Typically, the instruction $x := E$ assigns to the variable $x$ the value of the expression $E$ in the memory state, the instruction $P(r)$ locks the resource variable $r$ when it is available, while the instruction $V(r)$ releases it when it is locked, as described below:

$$(\mu, L) \xrightarrow{\;P(r)\;} (\mu, L \cup \{r\}) \;\;\text{if } r \notin L \qquad\qquad (\mu, L) \xrightarrow{\;V(r)\;} (\mu, L \setminus \{r\}) \;\;\text{if } r \in L$$
Thanks to the inclusion of locations among values, an expression $E$ may also denote a location. In that case, $[E]$ refers to the value stored at that location in memory. The instruction $\mathtt{nop}$ (for no-operation) does not alter the machine state, while $x := \mathtt{alloc}(E)$ allocates (in a nondeterministic way) some memory space on the heap, initializes it with the value of the expression $E$, and returns the address of the new location in the variable $x$, while $\mathtt{dispose}(E)$ deallocates the location with address $E$.
It will be convenient in the sequel to write $\mathrm{locks}(m)$ for the set of locks which are taken by an instruction $m$, that is, $\mathrm{locks}(m) = \{r\}$ if $m = P(r)$ and $\mathrm{locks}(m) = \emptyset$ otherwise; similarly, $\mathrm{unlocks}(m)$ is the set of locks which are released by the instruction $m$, that is, $\mathrm{unlocks}(m) = \{r\}$ if $m = V(r)$ and $\mathrm{unlocks}(m) = \emptyset$ otherwise.
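A minimal executable reading of these two functions, together with the effect of the lock instructions on the lock state, can be sketched as follows; the tagged-tuple encoding of instructions is our own assumption.

```python
def locks(m):
    """Locks taken by instruction m: {r} if m = P(r), empty otherwise."""
    return {m[1]} if m[0] == "P" else set()


def unlocks(m):
    """Locks released by instruction m: {r} if m = V(r), empty otherwise."""
    return {m[1]} if m[0] == "V" else set()


def step_lock(locked, m):
    """Effect of an instruction on the lock state.

    Returns the new lock state, or None when no machine step is enabled:
    P(r) is only enabled when r is available, and V(r) only when r is held.
    """
    if m[0] == "P":
        return None if m[1] in locked else locked | {m[1]}
    if m[0] == "V":
        return None if m[1] not in locked else locked - {m[1]}
    return locked  # other instructions leave the lock state unchanged
```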
4 Execution traces
Now that the notion of machine state has been introduced, the next step towards the interpretation of programs is to define the notion of execution trace, with two kinds of transitions: the even transitions “played” by the Code, and the odd transitions “played” by the Environment.
Definition 3 (Traces)
A trace is a finite sequence of machine states and alternating transitions

$$s_0 \longrightarrow s_1 \longrightarrow s_2 \longrightarrow \cdots \longrightarrow s_{2n+1}$$

whose even transitions

$$s_{2i-1} \xrightarrow{\;m_i\;} s_{2i} \qquad 1 \leq i \leq n$$

are labelled by an instruction $m_i$ such that $s_{2i-1} \xrightarrow{m_i} s_{2i}$ is a machine step, and whose odd transitions, including the last one, are played by the Environment. The set of traces is denoted by $\mathsf{Traces}$.
We write $\mathrm{init}(t)$ and $\mathrm{final}(t)$ for the initial and the final states of a trace $t$, respectively. The length $|t|$ is defined as the number of Code transitions in the trace, and

$$s_{2i-1} \xrightarrow{\;m_i\;} s_{2i}$$

denotes the $i$-th even transition of the trace $t$, for $1 \leq i \leq |t|$. Observe that a trace always starts and stops with an Environment transition, and that its number of transitions is equal to $2|t| + 1$. We point out the following fact which we will often use in our proofs and constructions:
Proposition 1
A trace $t$ is characterized by its initial state $\mathrm{init}(t)$ and by its final state $\mathrm{final}(t)$, together with the sequence of its Code transitions $s_{2i-1} \xrightarrow{m_i} s_{2i}$ for $1 \leq i \leq |t|$.
We introduce now a number of important algebraic constructions on execution traces, whose purpose is to reflect at the level of traces the sequential and parallel composition of programs.
Definition 4 (Sequential composition)
Given two traces $t_1$ and $t_2$ such that $\mathrm{final}(t_1) = \mathrm{init}(t_2)$, one defines $t_1 \cdot t_2$ as the trace of length $|t_1| + |t_2|$ with initial state $\mathrm{init}(t_1)$ and final state $\mathrm{final}(t_2)$, and with even transitions defined as those of $t_1$ followed by those of $t_2$: the $i$-th Code transition of $t_1 \cdot t_2$ is the $i$-th Code transition of $t_1$ when $1 \leq i \leq |t_1|$, and the $(i - |t_1|)$-th Code transition of $t_2$ when $|t_1| < i \leq |t_1| + |t_2|$.
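Proposition 1 suggests a concrete representation of traces by their endpoints and their Code transitions, under which sequential composition is simply concatenation. A sketch, with opaque machine states and names of our own choosing:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Trace:
    """A trace, represented as in Proposition 1: its initial state, its
    final state, and its sequence of Code transitions (s, m, s')."""
    init: object
    final: object
    steps: Tuple[tuple, ...]

    def __len__(self):
        # the length of a trace is its number of Code transitions
        return len(self.steps)


def seq(t1: Trace, t2: Trace) -> Trace:
    """Sequential composition: defined when final(t1) = init(t2); the
    result has length |t1| + |t2| and the Code transitions of t1
    followed by those of t2."""
    assert t1.final == t2.init, "traces must be composable"
    return Trace(t1.init, t2.final, t1.steps + t2.steps)
```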
Definition 5 (Restriction)
Let $\mathsf{Traces}(n)$ denote the set of traces of length $n$. Every increasing function $f : \{1, \dots, p\} \to \{1, \dots, n\}$ induces a restriction function

$$f^* : \mathsf{Traces}(n) \longrightarrow \mathsf{Traces}(p)$$

which transports a trace $t$ of length $n$ to a coinitial and cofinal trace $f^*(t)$ of length $p$, defined by the instructions $m_{f(i)}$ for $1 \leq i \leq p$.
Definition 6 (Shuffle)
A shuffle of two natural numbers $p$ and $q$ is a bijection from the disjoint union $\{1, \dots, p\} \uplus \{1, \dots, q\}$ to $\{1, \dots, p+q\}$ which is monotone in the sense that it is increasing on each component. The set of shuffles of $p$ and $q$ is denoted $\mathsf{Shuffle}(p, q)$.
Every shuffle $\sigma$ induces a pair of increasing functions

$$\sigma_1 : \{1, \dots, p\} \to \{1, \dots, p+q\} \qquad\qquad \sigma_2 : \{1, \dots, q\} \to \{1, \dots, p+q\}$$

defined by restricting $\sigma$ to $\{1, \dots, p\}$ and to $\{1, \dots, q\}$, respectively. From this follows immediately:
Proposition 2
Every shuffle $\sigma$ induces a function

$$(\sigma_1^*, \sigma_2^*) : \mathsf{Traces}(p+q) \longrightarrow \mathsf{Traces}(p) \times \mathsf{Traces}(q)$$

which transports a trace $t$ of length $p+q$ to the pair $(\sigma_1^*(t), \sigma_2^*(t))$ of its restrictions along $\sigma_1$ and $\sigma_2$.
Definition 7
The parallel composition $t_1 \parallel t_2$ of two traces $t_1$ and $t_2$ is the set of traces $t$ of length $|t_1| + |t_2|$ such that $(\sigma_1^*(t), \sigma_2^*(t)) = (t_1, t_2)$ for some shuffle $\sigma$.
Note that every trace $t$ in $t_1 \parallel t_2$ is coinitial and cofinal with $t_1$ and $t_2$, and more importantly, that the parallel composition of two traces $t_1$ and $t_2$ is empty whenever the two traces are not coinitial and cofinal.
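Shuffles of $p$ and $q$ can be enumerated by choosing which $p$ of the $p+q$ positions are assigned to the first trace; restricting along the two induced injections recovers the original pair, as in Proposition 2, and parallel composition collects all the interleavings. A small sketch on bare sequences of Code transitions (our own encoding):

```python
from itertools import combinations


def shuffles(p, q):
    """Each shuffle is determined by the set of positions among 1..p+q
    assigned to the first sequence."""
    return [frozenset(c) for c in combinations(range(1, p + q + 1), p)]


def interleavings(steps1, steps2):
    """All parallel interleavings of two sequences of Code transitions."""
    p, q = len(steps1), len(steps2)
    result = []
    for positions1 in shuffles(p, q):
        merged, i, j = [], 0, 0
        for k in range(1, p + q + 1):
            if k in positions1:          # position taken by the first trace
                merged.append(steps1[i]); i += 1
            else:                        # position taken by the second trace
                merged.append(steps2[j]); j += 1
        result.append(tuple(merged))
    return result
```

There are $\binom{p+q}{p}$ shuffles, hence as many interleavings of the two sequences.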
The purpose of our last construction is to “hide” the name of a resource variable in an execution trace.
Definition 8
The hiding function on a resource variable $r$ transforms every trace by applying the function

$$(\mu, L) \longmapsto (\mu, L \setminus \{r\})$$

to each machine state of the original trace, and by renaming to $\mathtt{nop}$ every instruction $P(r)$ or $V(r)$ on the resource $r$ among the instructions of the trace.
5 Transition Systems
At this stage, we are ready to introduce the notion of transition system which we will use in order to describe the traces generated by a program of our concurrent language. Among these execution traces, one wishes to distinguish (1) the traces which terminate and return from (2) the other traces, which are either not yet finished or terminate and abort. This leads us to the following definition of transition system:
Definition 9 (Transition Systems)
A transition system is a set $S$ of traces closed under prefix, together with a subset of $S$ whose traces are said to return.
We explain below how to lift to transition systems the algebraic operations defined on traces in the previous section §4.
Definition 10
The sequential composition $S_1 ; S_2$ of two transition systems $S_1$ and $S_2$ is defined as the transition system whose traces are the traces of $S_1$ together with the composite traces $t_1 \cdot t_2$ where $t_1$ returns in $S_1$ and $t_2$ is a trace of $S_2$; among these, the composites $t_1 \cdot t_2$ where moreover $t_2$ returns in $S_2$ are the returning traces.
Definition 11
The parallel composition $S_1 \parallel S_2$ of two transition systems $S_1$ and $S_2$ is defined as the transition system whose traces are the elements of the parallel compositions $t_1 \parallel t_2$ of a trace $t_1$ of $S_1$ with a trace $t_2$ of $S_2$, the returning traces being those obtained when both $t_1$ and $t_2$ return.
Definition 12
The transition system associated to a transition system and to a lock is defined as follows:
Note that every instruction induces a transition system defined in the following way:
The intuition is that the program interpreted by this transition system executes the instruction $m$ after the environment has made a transition, and returns when the machine step is successful and does not abort. The following algebraic operation on transition systems reflects the computational situation of a program taking a lock before executing, and releasing the lock in case the program returns.
Definition 13
The transition system associated to a transition system and to a lock is defined as follows:
The following operation on transition systems will enable us to interpret conditional branching on concurrent programs.
Definition 14
The transition system associated to a transition system and a predicate on memory states is defined as follows:
where the predicate is evaluated at the first state encountered by the Code in the trace.
The dual transition system is defined similarly, by replacing the predicate by its negation in the definition. A subtle but important aspect of the interpretation of conditional branching in the language is that the evaluation of a Boolean expression may not succeed, typically because one of its variables is not allocated. In that case, the evaluation produces an exception which is then handled by the operating system. This case is handled in our trace semantics by the definition of a dedicated transition system, whose construction is detailed in the Appendix [1].
6 Trace semantics of the concurrent language
Now that we have defined the basic operations on transition systems, we are ready to define the operational and interactive semantics of our concurrent language. The language is constructed with Boolean expressions $B$, arithmetic expressions $E$ and commands $C$, using the grammar below:

$$C \;::=\; m \;\mid\; C_1 ; C_2 \;\mid\; C_1 \parallel C_2 \;\mid\; \mathtt{if}\; B \;\mathtt{then}\; C_1 \;\mathtt{else}\; C_2 \;\mid\; \mathtt{while}\; B \;\mathtt{do}\; C \;\mid\; \mathtt{resource}\; r \;\mathtt{do}\; C \;\mid\; \mathtt{with}\; r \;\mathtt{when}\; B \;\mathtt{do}\; C$$

The parallel composition operator $C_1 \parallel C_2$ enables the two programs $C_1$ and $C_2$ to interact concurrently through mutexes called resources. A resource $r$ is declared using $\mathtt{resource}\; r \;\mathtt{do}\; C$ and acquired using $\mathtt{with}\; r \;\mathtt{when}\; B \;\mathtt{do}\; C$, which waits for the Boolean expression $B$ to be true in order to proceed. Of course, a mutex can be held by at most one execution thread at any one time.
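The grammar of commands can be represented by an abstract syntax tree; the tagged-tuple encoding below, and the helper traversing it, are our own illustration.

```python
# Commands as tagged tuples:
#   ("instr", m)            basic instruction
#   ("seq", C1, C2)         sequential composition
#   ("par", C1, C2)         parallel composition
#   ("if", B, C1, C2)       conditional
#   ("while", B, C)         loop
#   ("resource", r, C)      resource declaration
#   ("with", r, B, C)       conditional critical region


def resources_declared(C):
    """Collect the resource variables declared inside a command."""
    tag = C[0]
    if tag == "resource":
        return {C[1]} | resources_declared(C[2])
    if tag in ("seq", "par"):
        return resources_declared(C[1]) | resources_declared(C[2])
    if tag == "if":
        return resources_declared(C[2]) | resources_declared(C[3])
    if tag == "while":
        return resources_declared(C[2])
    if tag == "with":
        return resources_declared(C[3])
    return set()  # basic instructions declare nothing


prog = ("resource", "r",
        ("par",
         ("with", "r", "true", ("instr", ("nop",))),
         ("while", "true", ("instr", ("nop",)))))
```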
In the semantic approach we are following, every command is translated into a transition system which describes the possible interactive executions of , and whether they return.
Code C ——translation——→ Transition system ⟦C⟧
The interpretation is defined by structural induction on the syntax of the command $C$. To each leaf node, one associates an instruction $m$, which defines the transition system interpreting it. The semantics of non-leaf commands is then defined using the algebraic operations on transition systems introduced in §5:
where in the last part of the definition, and finally
and the while loop
is defined as the least fixpoint of the continuous function below:
7 Logical States
As we explained in the introduction, reasoning about concurrent programs in separation logic requires introducing an appropriate notion of logical state, including information about permissions. The version of concurrent separation logic we consider is almost the same as in its original formulation by O'Hearn and Brookes [10, 5]. One difference is that we benefit from the work in [3, 4, 11] and use permissions and an ownership predicate in order to handle the heap as well as the variables in the stack. So, we suppose given an arbitrary partial cancellative commutative monoid that we call the permission monoid, following [3]. We require that the permission monoid contains a distinguished element $\top$ which does not admit any multiple, i.e. the product $\top \cdot p$ is undefined for every permission $p$. The idea is that the total permission $\top$ is required for a program to write somewhere in memory. The property above ensures that a piece of state cannot be written and accessed (with a read or a write) at the same time by two concurrent programs, and therefore, that there is memory safety and no data race in the semantics. The set of logical states is defined in a similar way as the set of memory states, with the addition of permissions: a logical state is a pair of partial functions with finite domains, where the stack assigns to each variable a value together with a permission, and the heap assigns to each location a value together with a permission.
One main benefit of permissions is that they enable us to define a separation tensor product $\sigma_1 * \sigma_2$ between two logical states $\sigma_1$ and $\sigma_2$. When it is defined, the logical state $\sigma_1 * \sigma_2$ is the partial function whose domain is the union of the domains of $\sigma_1$ and $\sigma_2$, defined in the following way on each address $a$ of that domain: when $a$ belongs only to the domain of $\sigma_1$ (respectively of $\sigma_2$), it is assigned the value and permission given by $\sigma_1$ (respectively by $\sigma_2$); when $a$ belongs to both domains and is assigned the same value $v$ by $\sigma_1$ and $\sigma_2$, together with permissions $p_1$ and $p_2$ whose product $p_1 \cdot p_2$ is defined, it is assigned the value $v$ with the permission $p_1 \cdot p_2$.
The tensor product of the two logical states $\sigma_1$ and $\sigma_2$ is not defined otherwise. In other words, if the tensor product $\sigma_1 * \sigma_2$ is well defined, then the memory states underlying $\sigma_1$ and $\sigma_2$ agree on the values of the shared variables and heap locations. The syntax and the semantics of the formulas of Concurrent Separation Logic are the same as in Separation Logic, with a grammar built from the usual ingredients: the empty-heap predicate $\mathtt{emp}$, the points-to predicates, the separating conjunction $P * Q$, and the additive connectives and quantifiers of first-order logic.
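The separation tensor product described above can be sketched in Python, representing a logical state as a finite map from addresses to (value, permission) pairs. The fractional permission monoid used for illustration (the product is addition when it stays at most 1, and 1 is the write permission) is one concrete instance of the abstract partial monoid assumed by the paper.

```python
def tensor(sigma1, sigma2, mult):
    """Partial separation product of two logical states; returns None
    when undefined. `mult` is the partial permission product."""
    out = dict(sigma1)
    for a, (v2, p2) in sigma2.items():
        if a not in out:
            out[a] = (v2, p2)          # address owned by sigma2 alone
        else:
            v1, p1 = out[a]
            if v1 != v2:               # shared addresses must agree on values
                return None
            p = mult(p1, p2)           # and carry compatible permissions
            if p is None:
                return None
            out[a] = (v1, p)
    return out


def frac_mult(p, q):
    """Fractional permissions in (0, 1]: sum when it stays <= 1."""
    return p + q if p + q <= 1 else None
```

For instance, two read permissions 0.5 on the same value combine into the write permission 1, while two write permissions do not combine, which is exactly the no-race guarantee discussed above.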
The semantics of the formulas is expressed as the satisfaction predicate defined in Figure 1.
The proof system underlying concurrent separation logic is a sequent calculus on sequents defined as Hoare triples of the form

$$\Gamma \vdash \{P\}\; C \;\{Q\}$$
where $P$, $Q$ are predicates, $C$ is a command, and $\Gamma$ is a context, defined as a partial function with finite domain from the set of resource variables to predicates. Intuitively, the context $\Gamma$ describes the invariant $\Gamma(r)$ satisfied by each resource variable $r$ in its domain. The purpose of these resources is to provide the fragments of memory shared between the various threads during the execution. The inference rules are given in Figure 2. The inference rule Res associated to the resource declaration moves a piece of memory which is owned by the Code into the shared context $\Gamma$, which means it can be accessed concurrently inside the declared command. However, the access to said piece of memory is mediated by the $\mathtt{with}$ construct, which grants temporary access under the condition that one must give it back (rule With). Notice that in the rule Conj, the context $\Gamma$ is required to be precise, in the sense that each of the predicates $\Gamma(r)$ is precise.
Definition 15 (Precise predicate)
A predicate $P$ is precise when, for every logical state $\sigma$, there exists at most one decomposition $\sigma = \sigma_1 * \sigma_2$ such that $\sigma_1$ satisfies the predicate $P$.
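On small finite states (ignoring permissions), precision can be tested by enumerating all decompositions. For instance, the predicate $x \mapsto 3$ is precise, while the disjunction $\mathtt{emp} \vee x \mapsto 3$ is not, since both disjuncts can be carved out of the same state. A sketch with names of our own choosing:

```python
from itertools import combinations


def decompositions(sigma):
    """All splittings sigma = sigma1 * sigma2 along a partition of its
    domain (a simple model without permissions)."""
    addrs = list(sigma)
    for k in range(len(addrs) + 1):
        for dom1 in combinations(addrs, k):
            s1 = {a: sigma[a] for a in dom1}
            s2 = {a: sigma[a] for a in sigma if a not in dom1}
            yield s1, s2


def is_precise(pred, states):
    """pred is precise on `states` when every state admits at most one
    sub-state satisfying pred."""
    for sigma in states:
        witnesses = {tuple(sorted(s1.items()))
                     for s1, _ in decompositions(sigma) if pred(s1)}
        if len(witnesses) > 1:
            return False
    return True


def points_to_x3(s):
    return s == {"x": 3}


def emp_or_x3(s):
    return s == {} or s == {"x": 3}


states = [{}, {"x": 3}, {"x": 3, "y": 1}]
```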
8 Separated states
We now introduce our third notion of state, which displays which region of (logical) memory belongs to the Code, which region belongs to the Frame, and which region is shared. We suppose given a finite set of resource variables.
Definition 16
The separated states are the triples $(\sigma_C, \mu, \sigma_F)$ consisting of a logical state $\sigma_C$ for the Code, a logical state $\sigma_F$ for the Frame, and a function $\mu$ which assigns to every resource variable either a tag recording that it is locked and owned by the Code, a tag recording that it is locked and owned by the Frame, or a logical state when it is available, such that the logical state below is defined:

$$\sigma_C \;*\; \sigma_F \;*\; \mathop{\circledast}_{r \,\in\, \mathrm{Avail}(\mu)} \mu(r)$$

where $\mathrm{Avail}(\mu)$ denotes the set of resource variables $r$ such that $\mu(r)$ is a logical state.
We say that a separated state $(\sigma_C, \mu, \sigma_F)$ combines into a machine state precisely when both the lock state of the machine state is the set of resources locked by the Code or the Frame according to $\mu$, and the memory state is equal to the image of

$$\sigma_C \;*\; \sigma_F \;*\; \mathop{\circledast}_{r \,\in\, \mathrm{Avail}(\mu)} \mu(r) \qquad (3)$$

under the function which forgets the permissions. Note that by definition, every separated state combines into a unique machine state, which we simply call its combination.
9 The graphs of machine and separated states
In this section, we introduce the two labeled graphs and of machine states and of separated states, and construct a graph homomorphism
(4) 
which maps every separated state to its combined machine state , in the way described in the introduction.
Definition 17
The graph of machine states is the graph whose vertices are the machine states and whose edges are either Code or Environment transitions of the following kind:

a Code transition $s \xrightarrow{m} s'$ for every machine step $s \xrightarrow{m} s'$,

an Environment transition $s \xrightarrow{E} s'$ for every pair $(s, s')$ of machine states, where $E$ is just a tag indicating that the transition has been fired by the Environment.
Note that a trace (see Def. 3) is the same thing as an alternating path starting and ending with an Environment edge in the graph .
Definition 18
The graph of separated states is the graph whose vertices are the separated states and whose edges are either Eve moves or Adam moves of the following kind:

Eve moves of the form
labeled by an instruction such that
between machine states, and such that the following conditions on locked resources are moreover satisfied:

Adam moves of the form
where the label is just a tag indicating a move by Adam, and moreover
The definition of the vertices and of the edges of the graph of separated states is designed to ensure that there exists a graph homomorphism (4) which maps every Eve move to a Code transition, and every Adam move to an Environment transition. The graph homomorphism (4) enables us to study how an execution trace, defined as a path in the graph of machine states, may be “refined” into a separated execution trace living in the graph of separated states which combines into it. In that situation, we use the following terminology:
Definition 19
We say that a path in the labeled graph of separated states combines into a trace in the labeled graph of machine states when its image under the graph homomorphism (4) is equal to that trace.
Note that a path which combines into a trace is alternated between Eve and Adam moves, and that it starts and stops with an Adam move.
10 Separation games
In this section, we explain how to associate to every trace a separation game on which Eve and Adam interact and try to “justify” every transition played in the execution trace by the Code or by the Environment, by lifting it to a separated execution trace which combines into the original trace.
Definition 20 (Game)
A game is a triple consisting of a graph with source and target functions, whose edges are called moves; of a function which assigns a polarity to every move, positive when played by Eve (Player) and negative when played by Adam (Opponent); and of a prefix-closed set of finite paths, called the plays of the game. One requires moreover that every play of the game is alternating, in the sense that any two consecutive moves carry opposite polarities.
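The alternation condition can be checked mechanically, encoding Eve's polarity as $+1$ and Adam's as $-1$ (our own convention):

```python
def alternating(polarities):
    """A play is alternating when any two consecutive moves carry
    opposite polarities (+1 for Eve, -1 for Adam)."""
    return all(p + q == 0 for p, q in zip(polarities, polarities[1:]))
```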