1 Introduction
Programming efficient multithreaded programs generally involves carefully organizing shared memory accesses to facilitate inter-thread communication while avoiding synchronization bottlenecks. Modern software platforms like Java include reusable abstractions which encapsulate low-level shared memory accesses and synchronization into familiar high-level abstract data types (ADTs). These so-called concurrent objects typically include mutual-exclusion primitives like locks, numeric data types like atomic integers, as well as collections like sets, key-value maps, and queues; Java’s standard-edition platform contains many implementations of each. Such objects typically provide strong consistency guarantees like linearizability [18], ensuring that each operation appears to happen atomically, witnessing the atomic effects of predecessors according to some linearization order among concurrently-executing operations.
While such strong consistency guarantees are ideal for logical reasoning about programs which use concurrent objects, these guarantees are too strong for many operations, since they preclude simple and/or efficient implementation — over half of Java’s concurrent collection methods forego atomicity for weak consistency [13]. On the one hand, basic operations like the get and put methods of key-value maps typically admit relatively-simple atomic implementations, since their behaviors essentially depend upon individual memory cells, e.g., where the relevant key-value mapping is stored. On the other hand, making aggregate operations like size and contains(value) atomic would impose synchronization bottlenecks, or otherwise-complex control structures, since their atomic behavior depends simultaneously upon the values stored across many memory cells. Interestingly, such implementations are not linearizable even when their underlying memory operations are sequentially consistent, e.g., as is generally the case with Java’s concurrent collections, whose memory accesses are data-race free.
For instance, the contains(value) method of Java’s concurrent hash map iterates through key-value entries without blocking concurrent updates, in order to avoid unreasonable performance bottlenecks. Consequently, in a given execution, a contains-value operation can overlook a concurrent insertion of the sought value at a key it has already traversed. This oversight makes it possible for the operation to conclude that the value is not present, which can only be explained by linearizing it before the insertion. If another operation concurrently removes a second mapping of the same value before the traversal reaches it, but only after the insertion completes, then atomicity is violated, since in every possible linearization one of the two mappings is always present. Nevertheless, such weakly-consistent operations still offer guarantees, e.g., that values never present are never observed, and that initially-present values which are never removed are observed.
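To make the anomaly concrete, the following Python sketch replays one problematic interleaving deterministically: a two-slot table modeling the hash map, a writer that moves the value 1 from one key to another, and a non-atomic contains-value scan interleaved between the two writes. The table layout and schedule are illustrative assumptions, not Java’s actual implementation.

```python
# Model: a map stored as a two-slot array, where table[k] == 0 means
# "no value mapped at key k". Initially the value 1 is mapped at key 1.
table = [0, 1]

def scan_step(k, seen):
    # One loop iteration of a non-atomic contains-value scan.
    seen.append(table[k])

# Interleaving: the scanner reads slot 0 before the writer moves the
# value there, and slot 1 only after the writer has overwritten it.
seen = []
scan_step(0, seen)   # scanner reads table[0] == 0 (value 1 not here yet)
table[0] = 1         # writer: put(0, 1)
table[1] = 0         # writer: put(1, 0), overwriting the old mapping
scan_step(1, seen)   # scanner reads table[1] == 0 (value 1 already moved)

has_result = 1 in seen
print(has_result)    # False: the scan misses value 1 ...
print(1 in table)    # ... even though 1 was present at every step
```

Between any two steps of this schedule, some slot holds the value 1, so no linearization of the scan can explain the false result.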
In this work we develop a methodology for proving that concurrent-object implementations adhere to the guarantees prescribed by their weak-consistency specifications. The salient aspects of our approach are the lifting of existing sequential ADT specifications via visibility relaxation [13], and the harnessing of simple and mechanizable reasoning based on forward simulation [25] by relaxed-visibility ADTs. Effectively, our methodology extends the predominant forward-simulation-based linearizability-proof methodology to concurrent objects with weakly-consistent operations, and enables automation for proving weak-consistency guarantees.
To enable the harnessing of existing sequential ADT specifications, we adopt the recent methodology of visibility relaxation [13]. As in linearizability [18], the return value of each operation is dictated by the atomic effects of its predecessors in some (i.e., existentially quantified) linearization order. To allow consistency weakening, operations are allowed, to a certain extent, to overlook some of their linearization-order predecessors, behaving as if they had not occurred. Intuitively, this (also existentially quantified) visibility captures the inability or unwillingness to atomically observe the values stored across many memory cells. To provide guarantees, the extent of visibility relaxation is bounded to varying degrees. Notably, the visibility of an absolute operation must include all of its linearization-order predecessors, while the visibility of a monotonic operation must include all happens-before predecessors, along with all operations visible to them. The majority of Java’s concurrent collection methods are absolute or monotonic [13]. For instance, in the contains-value example described above, by considering that the concurrent insertion is not visible to the contains-value operation, the conclusion that the value is not present can be justified by a linearization in which the contains-value operation sees the removal of one mapping yet not the insertion of the other. Ascribing the monotonic visibility to the contains-value method amounts to a guarantee that initially-present values are observed unless removed (i.e., concurrently).
While relaxed-visibility specifications provide a means of describing the guarantees provided by weakly-consistent concurrent-object operations, systematically establishing implementations’ adherence requires a strategy for demonstrating simulation [25], i.e., that each step of the implementation is simulated by some step of (an operational representation of) the specification. The crux of our contribution is thus threefold: first, to identify the relevant specification-level actions with which to relate implementation-level transitions; second, to identify implementation-level annotations relating transitions to specification-level actions; and third, to develop strategies for devising such annotations systematically. For instance, the existing methodology based on linearization points [18] essentially amounts to annotating implementation-level transitions with the points at which their specification-level actions, i.e., their atomic effects, occur. Relaxed-visibility specifications require not only a witness for the existentially-quantified linearization order, but also an existentially-quantified visibility relation, and thus require a second kind of annotation to signal operations’ visibilities. We propose a notion of visibility actions which enable operations to assert their visibility of others, e.g., of the writers of the memory cells they have read.
The remainder of our approach amounts to devising a systematic means for constructing simulation proofs to enable automated verification. Essentially, we identify a strategy for systematically annotating implementations with visibility actions, given linearization-point annotations and visibility bounds (i.e., absolute or monotonic), and then encode the corresponding simulation check using an off-the-shelf verification tool. For the latter, we leverage civl [16], a language and verifier for Owicki-Gries style modular proofs of concurrent programs with arbitrarily-many threads. In principle, since our approach reduces simulation to safety verification, any safety verifier could be used, though civl facilitates reasoning for multithreaded programs by capturing interference at arbitrary program points. Using civl, we have verified monotonicity of the contains-value and size methods of Java’s concurrent hash map and concurrent linked queue, respectively — and absolute consistency of add and remove operations. Although our models are written in civl and assume sequentially-consistent memory accesses, they capture the difficult aspects of weak consistency in Java, including heap-based memory access; furthermore, our models are also sound with respect to Java’s weak memory model, since their actual Java implementations guarantee data-race freedom by accessing individual shared-memory cells with atomic operations via volatile variables and compare-and-swap instructions.
In summary, we present the first methodology for verifying weakly-consistent operations using sequential specifications and forward simulation. Contributions include:

the formalization of our methodology over a general notion of transition systems, agnostic to any particular programming language or memory model (§3);

the application of our methodology to verifying a weakly-consistent contains-value method of a key-value map (§4); and

a mechanization of our methodology used for verifying models of weakly-consistent Java methods using automated theorem provers (§5).
Aside from the outline above, this article summarizes an existing weak-consistency specification methodology via visibility relaxation (§2), summarizes related work (§6), and concludes (§7). Proofs of all theorems and lemmas are listed in Appendix 0.A.
2 Weak Consistency
Our methodology for verifying weakly-consistent concurrent objects relies both on a precise characterization of weak-consistency specifications and on a proof technique for establishing adherence to those specifications. In this section we recall and outline a characterization called visibility relaxation [13], an extension of sequential abstract data type (ADT) specifications in which the return values of some operations may not reflect the effects of previously-effectuated operations.
Notationally, in the remainder of this article, ε denotes the empty sequence, ∅ denotes the empty set, _ denotes an unused binding, and true and false denote the Boolean values. We write r(x, y) to denote the inclusion ⟨x, y⟩ ∈ r of a tuple in a relation r; r ∪ ⟨x, y⟩ to denote the extension of r to include ⟨x, y⟩; r|S to denote the projection of r to a set S; r̄ to denote the complement of r; r(X) to denote the image of r on a set X; and r⁻¹(X) to denote the preimage of r on X; whether r(·) refers to inclusion or an image will be clear from its context. Finally, we write xᵢ to refer to the ith element of a tuple x.
2.1 Weak-Visibility Specifications
For a general notion of ADT specifications, we consider a fixed set of method names and a fixed set of argument and return values. An operation label consists of a method name along with argument and return values. A read-only predicate is a unary relation on operation labels, an operation sequence is a sequence of operation labels, and a sequential specification is a set of operation sequences. We say that a read-only predicate is compatible with a sequential specification when the specification is closed under deletion of read-only operations.
Example 1.
The key-value map ADT sequential specification is the prefix-closed set containing all sequences in which each operation label is either:

put(k,v) ⇒ b, and b = true iff some put(k,_) follows any prior rem(k);

rem(k) ⇒ b, and b = true iff no other rem(k) follows some prior put(k,_);

get(k) ⇒ v, and no rem(k) nor put(k,_) follows some prior put(k,v), and v = ⊥ if no such put(k,v) exists; or

has(v) ⇒ b, and b = true iff, for some key k, no rem(k) nor put(k,_) follows some prior put(k,v).
The read-only predicate holds for the get(k) ⇒ v and has(v) ⇒ b labels, as well as for rem(k) ⇒ false labels, which do not modify the map.
This is a simplification of Java’s Map ADT, i.e., with fewer methods. (For brevity, we abbreviate Java’s remove and containsValue methods by rem and has.)
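To make this semantics concrete, the following Python sketch interprets an operation sequence over a reference map, computing the return value each label must carry to be admitted by the specification; the tuple encoding of labels and the use of None for absent mappings are our own assumptions.

```python
ABSENT = None  # stand-in for the "no mapping" default value

def returns(seq):
    """Compute the return value of each operation in a sequence,
    per the key-value map ADT semantics sketched above."""
    state, out = {}, []
    for op in seq:
        if op[0] == "put":      # ("put", k, v) => was k already mapped?
            _, k, v = op
            out.append(k in state)
            state[k] = v
        elif op[0] == "rem":    # ("rem", k) => was k mapped?
            _, k = op
            out.append(state.pop(k, ABSENT) is not ABSENT)
        elif op[0] == "get":    # ("get", k) => current value, or ABSENT
            _, k = op
            out.append(state.get(k, ABSENT))
        elif op[0] == "has":    # ("has", v) => is v some key's value?
            _, v = op
            out.append(v in state.values())
    return out

print(returns([("put", 0, 1), ("has", 1), ("rem", 0), ("has", 1)]))
# [False, True, True, False]
```

A sequence of labels belongs to the specification exactly when each label’s return value matches the one computed here.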
To derive weak specifications from sequential ones, we consider a set of exactly two visibility labels from prior work [13]: absolute and monotonic. (Previous work refers to absolute visibility as complete, and includes additional visibility labels.) Intuitively, absolute visibility requires operations to observe the effects of all of their linearization-order predecessors, while monotonic visibility allows them to ignore effects which have been ignored by happens-before (i.e., program- and synchronization-order) predecessors. A visibility annotation maps each method to a visibility label.
Definition 1.
A weak-visibility specification is a sequential specification with a compatible read-only predicate and a visibility annotation.
Example 2.
The weakly-consistent contains-value map ascribes the monotonic visibility to the has method, and the absolute visibility to all other methods.
We ascribe semantics to specifications by characterizing the values returned by concurrent method invocations, given constraints on invocation order. In practice, the happens-before order among invocations is determined by a program order, i.e., among invocations of the same thread, and a synchronization order, i.e., among invocations of distinct threads accessing the same atomic objects, e.g., locks. A history is a set of numeric operation identifiers, along with an invocation function mapping operations to method names and argument values, a partial return function mapping operations to return values, and a (strict) partial happens-before relation; the empty history contains no operations. An operation is complete when its return value is defined, and is otherwise incomplete; a history is complete when each of its operations is. The label of a complete operation is its method name along with its argument and return values.
To relate operations’ return values in a given history back to sequential specifications, we consider certain sequencings of those operations. A linearization of a history is a total order over its operations which includes its happens-before relation, and a visibility projection maps each operation to a subset of the operations preceding it in the linearization; an operation observes those in its visibility. For a given read-only predicate, we say an operation’s visibility is monotonic when it includes every happens-before predecessor which is not read-only, along with every non-read-only operation visible to a happens-before predecessor. (For convenience, we rephrase Emmi and Enea [13]’s notion to ignore read-only predecessors.) We say an operation’s visibility is absolute when it includes all of its linearization-order predecessors, and a visibility projection is itself absolute when each operation’s visibility is. An abstract execution is a history along with a linearization and a visibility projection of it. An abstract execution is sequential when its happens-before order is total, complete when its history is, and absolute when its visibility projection is.
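These two visibility conditions can be stated as executable checks over a finite abstract execution; the following Python sketch uses our own encoding of a linearization as a list of operation identifiers, happens-before as a set of pairs, and visibility as a mapping to sets.

```python
def absolute(o, lin, vis):
    # o's visibility must contain every linearization-order predecessor.
    return vis[o] >= set(lin[:lin.index(o)])

def monotonic(o, hb, vis, readonly):
    # o's visibility must contain every non-read-only happens-before
    # predecessor, along with every non-read-only operation visible
    # to such a predecessor.
    preds = {p for (p, q) in hb if q == o}
    required = {p for p in preds if not readonly(p)}
    for p in preds:
        required |= {q for q in vis[p] if not readonly(q)}
    return vis[o] >= required

# A small execution: two puts and a read-only has that misses one put.
lin = ["p1", "p2", "h"]
hb = {("p1", "h")}
vis = {"p1": set(), "p2": {"p1"}, "h": {"p1"}}
readonly = lambda o: o == "h"
print(absolute("h", lin, vis))            # False: h misses p2
print(monotonic("h", hb, vis, readonly))  # True: h sees its hb predecessor
```

The example mirrors the contains-value scenario: the has operation is not absolute, yet satisfies the weaker monotonic condition.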
Example 3.
An abstract execution can be defined using a linearization in which the put and get operations precede the has operation (for readability, we list linearization sequences with operation labels in place of identifiers), along with a happens-before order that, compared to the linearization order, keeps the has operation unordered w.r.t. the put operations, and a visibility projection where the visibility of every put and get includes all of its linearization predecessors, while the visibility of the has operation consists only of the operations whose effects it observed.
To determine the consistency of individual histories against weak-visibility specifications, we consider adherence of their corresponding abstract executions. A complete abstract execution is consistent with a visibility annotation and read-only predicate if the visibility of each operation is absolute or monotonic, respectively, as the annotation of its method prescribes. The labeling of a total order of complete operations is its sequence of operation labels. Then an abstract execution is consistent with a sequential specification when, for each operation, the labeling of the linearization restricted to the operation’s visibility, followed by the operation itself, is included in the specification. (As is standard, adequate labelings of incomplete executions are obtained by completing each linearized yet pending operation with some arbitrarily-chosen return value [18]; it is sufficient that one of these completions be included in the sequential specification. We also consider a simplification from prior work [13]: rather than allowing the observers of a given operation to pretend they see distinct return values, we suppose that all observers agree on return values. While this is more restrictive in principle, it is equivalent for the simple specifications studied in this article.) Finally, we say an abstract execution is consistent with a weak-visibility specification when it is consistent with its sequential specification, read-only predicate, and visibility annotation.
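The per-operation check can be rendered directly in Python; the sketch below replays each operation’s visible linearization prefix against a hand-written membership test for the simplified map specification of §4 (put returns no value, and absent keys read as 0, as in Figure 2). The identifiers and encodings are our own.

```python
def consistent_with_spec(lin, vis, label, in_spec):
    # Each operation's return value must be explained by the labeling
    # of its visible linearization-order predecessors, followed by itself.
    for o in lin:
        visible = [p for p in lin[:lin.index(o)] if p in vis[o]] + [o]
        if not in_spec([label[p] for p in visible]):
            return False
    return True

def map_spec(labels):
    # Membership in the simplified map spec: replay and compare returns.
    state = {}
    for (m, arg, ret) in labels:
        if m == "put":
            k, v = arg
            state[k] = v                   # simplified: no return value
        elif m == "get":
            if state.get(arg, 0) != ret:   # 0 encodes "absent"
                return False
        elif m == "has":
            if (arg in state.values()) != ret:
                return False
    return True

label = {"p11": ("put", (1, 1), None), "p01": ("put", (0, 1), None),
         "p10": ("put", (1, 0), None), "h": ("has", 1, False)}
lin = ["p11", "p01", "p10", "h"]
weak = {"p11": set(), "p01": {"p11"}, "p10": {"p11", "p01"},
        "h": {"p11", "p10"}}              # h overlooks p01
print(consistent_with_spec(lin, weak, label, map_spec))   # True
```

With an absolute visibility for h (seeing all three puts), the same execution would be inconsistent, since has(1) would have to return true.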
Example 4.
Remark 1.
Consistency models suited for modern software platforms like Java are based on happens-before relations which abstract away from real-time execution order. Since happens-before, unlike real-time, is not necessarily an interval order, the composition of linearizations of two distinct objects in the same execution may be cyclic, i.e., not linearizable. Recovering compositionality in this setting is orthogonal to our work of proving consistency against a given model, and is explored elsewhere [11].
The abstract executions of a weak-visibility specification include those complete, sequential, and absolute abstract executions derived from the sequences of its sequential specification, i.e., in which each operation is labeled by the corresponding element of the sequence, and both the happens-before and linearization orders follow the sequence order. In addition, when a weak-visibility specification includes an abstract execution, then it also includes any:

execution obtained by weakening its happens-before order; and

consistent execution obtained by weakening the visibilities of its operations.
Note that while happens-before weakening always yields consistent executions, unguarded visibility weakening generally breaks consistency with visibility annotations and sequential specifications: visibilities can become non-monotonic, and return values can change when operations observe fewer operations’ effects.
Lemma 1.
The abstract executions of a weak-visibility specification are consistent with that specification.
Example 5.
The abstract executions of the weakly-consistent contains-value map include a complete, sequential, and absolute abstract execution over the operations of Example 3, in which the happens-before order is total, which implies that they also include one in which just the happens-before order is modified such that the has operation becomes unordered w.r.t. the two put operations. Since they include the latter, they also include the execution in Example 3, where the visibility of has is weakened, which also modifies its return value from true to false.
Definition 2.
The histories of a weak-visibility specification are the projections of its abstract executions.
2.2 Consistency against Weak-Visibility Specifications
To define the consistency of implementations against specifications, we leverage a general model of computation to capture the behavior of typical concurrent systems, e.g., including multiprocess and multithreaded systems. A sequence-labeled transition system (SLTS) is a set of states, along with a set of actions, an initial state, and a transition relation whose transitions are labeled by action sequences. An execution is an alternating sequence of states and action sequences, starting with the initial state, in which each consecutive pair of states is related by a transition over the intervening action sequence. The trace of the execution is its projection to individual actions.
To capture the histories admitted by a given implementation, we consider sequence-labeled transition systems (SLTSs) which expose actions corresponding to method call, return, and happens-before constraints. We refer to the call, ret, and hb actions as the history actions, and a history transition system is an SLTS whose actions include the history actions. We say that an action over a given operation identifier is an action of that operation, and assume that executions are well formed in the sense that for a given operation identifier: at most one call action occurs, at most one ret action occurs, and no ret nor hb actions occur prior to a call action. Furthermore, we assume call actions are enabled so long as no prior call action has occurred for the same identifier. The history of a trace is defined inductively, starting from the empty history: each call action adds an operation with its invocation, each ret action completes an operation with its return value, each hb action extends the happens-before relation, and all other actions leave the history unchanged. An implementation is a history transition system, and the histories of an implementation are those of its traces. Finally, we define consistency against specifications via history containment.
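The inductive definition can be rendered as a small fold over a trace; the following Python sketch assumes a tuple encoding of the history actions and returns the history’s four components, ignoring all other actions.

```python
def history(trace):
    """Fold a trace of actions into a history (ops, inv, ret, hb)."""
    ops, inv, ret, hb = set(), {}, {}, set()
    for act in trace:
        if act[0] == "call":              # ("call", o, method, args)
            _, o, m, args = act
            ops.add(o)
            inv[o] = (m, args)
        elif act[0] == "ret":             # ("ret", o, value)
            _, o, v = act
            ret[o] = v
        elif act[0] == "hb":              # ("hb", o1, o2)
            _, o1, o2 = act
            hb.add((o1, o2))
        # all other actions (e.g., lin, vis) leave the history unchanged
    return ops, inv, ret, hb

trace = [("call", 1, "put", (0, 1)), ("call", 2, "get", 0),
         ("hb", 1, 2), ("lin", 1), ("ret", 1, None), ("ret", 2, 1)]
print(history(trace))
```

Operation 2 here is complete (its return value is defined), and the lin action is transparent to the history, as the definition requires.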
Definition 3.
An implementation is consistent with a specification iff the histories of the implementation are included in those of the specification.
3 Establishing Consistency with Forward Simulation
To obtain a consistency proof strategy, we more closely relate implementations to specifications via their admitted abstract executions. To capture the abstract executions admitted by a given implementation, we consider SLTSs which expose not only history-related actions, but also actions witnessing linearization and visibility. We refer to the lin and vis actions, along with the history actions, as the abstract-execution actions, and an abstract-execution transition system (AETS) is an SLTS whose actions include the abstract-execution actions. Extending the corresponding notion from history transition systems, we assume that executions are well formed in the sense that for a given operation identifier: at most one lin action occurs, and no lin or vis actions occur prior to a call action. The abstract execution of a trace is defined inductively, starting from the empty execution: call, ret, and hb actions update the underlying history as before, each lin action appends its operation to the linearization, each vis action extends the visibility projection, and all other actions leave the execution unchanged. A witnessing implementation is an abstract-execution transition system, and the abstract executions of a witnessing implementation are those of its traces.
We adopt forward simulation [25] for proving consistency against weak-visibility specifications. Formally, a simulation relation from one system to another is a binary relation on their states such that the initial states are related, and for any pair of related states and source-system transition from the first state, there exists a target-system transition from the second state, over the same action sequence, such that the successor states are again related. We say the target system simulates the source system when a simulation relation from the source to the target exists.
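For finite-state systems, the simulation condition can be checked exhaustively; the following Python sketch encodes each transition system as a dictionary from states to labeled successors and tests a candidate relation (single actions stand in for action sequences).

```python
def simulates(target, source, init_t, init_s, relation):
    """Check that `relation` is a forward simulation from `source`
    to `target`: initial states related, and every source transition
    matched by a target transition over the same action, with the
    successor states again related."""
    if (init_s, init_t) not in relation:
        return False
    for (s, t) in relation:
        for (a, s2) in source.get(s, []):
            if not any((s2, t2) in relation
                       for (b, t2) in target.get(t, []) if b == a):
                return False
    return True

# The target can take extra actions; it need only match the source's.
source = {0: [("a", 1)], 1: []}
target = {10: [("a", 11), ("b", 12)], 11: [], 12: []}
print(simulates(target, source, 10, 0, {(0, 10), (1, 11)}))  # True
```

Dropping the pair (1, 11) from the relation makes the check fail, since the source’s a-transition is no longer matched into a related state.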
We derive transition systems from weak-visibility specifications to serve as simulation targets. The following lemma establishes the soundness and completeness of this substitution, and the subsequent theorem asserts the soundness of the simulation-based proof strategy.
Definition 4.
The transition system of a weak-visibility specification is the AETS whose actions are the abstract-execution actions, whose states are abstract executions, whose initial state is the empty execution, and whose transitions extend the current abstract execution with the effects of the given actions exactly when the resulting execution is consistent with the specification.
Lemma 2.
A weak-visibility specification and its transition system have identical histories.
Theorem 3.1.
A witnessing implementation is consistent with a weak-visibility specification if the transition system of the specification simulates the implementation.
Our notion of simulation is in some sense complete when the sequential specification of a weak-visibility specification is return-value deterministic, i.e., for any method, argument value, and admitted sequence, there is a single label with which the sequence can be extended for that method and argument value. In particular, the specification’s transition system simulates any witnessing implementation whose abstract executions are included in the specification’s. (This is a consequence of a generic result stating that the set of traces of one LTS is included in the set of traces of another iff the latter simulates the former, provided that the latter is deterministic [25].) This completeness, however, extends only to inclusion of abstract executions, and not all the way to consistency, since consistency is defined on histories, and any given operation’s return value is not completely determined by the other operation labels and happens-before relation of a given history: return values generally depend on linearization order and visibility as well. Nevertheless, sequential specifications typically are return-value deterministic, and we have used simulation to prove consistency of Java-inspired weakly-consistent objects.
Establishing simulation for an implementation is also helpful when reasoning about clients of a concurrent object. One can use the specification in place of the implementation and encode the client invariants using the abstract execution of the specification in order to prove client properties, following Sergey et al.’s approach [35].
3.1 Reducing Consistency to Safety Verification
Proving simulation between an implementation and its specification can generally be achieved via product construction: complete the transition system of the specification, replacing non-enabled transitions with error-state transitions; then ensure the synchronized product of implementation and completed-specification transition systems is safe, i.e., no error state is reachable. Assuming that the individual transition systems are safe, the product system is safe iff the specification simulates the implementation. This reduction to safety verification is also generally applicable to implementation and specification programs, though we limit our formalization to their underlying transition systems for simplicity. By Corollary 1, such reductions enable consistency verification with existing safety verification tools.
3.2 Verifying Implementation Programs
While Theorem 3.1 establishes forward simulation as a strategy for proving the consistency of implementations against weak-visibility specifications, its application to real-world implementations requires program-level mechanisms to signal the underlying AETS lin and vis actions. To apply forward simulation, we thus develop a notion of programs whose commands include such mechanisms.
This section illustrates a toy programming language with AETS semantics which provides these mechanisms. The key features are the lin and vis program commands, which emit linearization and visibility actions for the currently-executing operation, along with load, store, and cas (compare-and-swap) commands, which record and return the set of operation identifiers having written to each memory cell. Such augmented memory commands allow programs to obtain handles to the operations whose effects they have observed, in order to signal the corresponding vis actions.
While one can develop similar mechanisms for languages with any underlying memory model, the toy language presented here assumes a sequentially-consistent memory. Note that the assumption of sequentially-consistent memory operations is practically without loss of generality for Java’s concurrent collections since they are designed to be data-race free — their weak consistencies arise not from weak-memory semantics, but from non-atomic operations spanning several memory cells.
For generality, we assume abstract notions of program commands, memory commands, local states, and global memories. So that operations can assert their visibilities, we consider memory which stores, and returns upon access, the identifier(s) of operations which previously accessed a given cell. A program consists of an init function mapping method names and argument values to local states, along with a step function mapping local states to program commands, and idle and done predicates on local states. Intuitively, identifying local states with threads, the idle predicate indicates whether a thread is outside of atomic sections, and subject to interference from other threads; meanwhile the done predicate indicates whether a thread has terminated.
The denotation of a memory command is a function from a global memory, argument value, and operation identifier to a tuple consisting of an updated global memory along with a return value.
Example 6.
A sequentially-consistent memory system which records the set of operations to access each location can be captured by mapping addresses to pairs of a value and an operation set, along with three memory commands — load, store, and cas — where the compare-and-swap (cas) command stores a given value at an address and returns true when a given expected value was previously stored, and otherwise returns false.
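A minimal executable rendering of this memory system follows; the Python class below is our own sketch, assuming writer identifiers accumulate at each cell and that load returns the value together with that set.

```python
class Memory:
    """Sequentially-consistent memory mapping each address to a value
    and the set of operations that wrote it (a sketch of the memory
    system above; accumulation of writer identifiers is an assumption)."""
    def __init__(self, default=0):
        self.cells = {}       # addr -> (value, frozenset of writer ids)
        self.default = default

    def load(self, addr, op):
        # Return the value and the identifiers of the operations
        # whose writes the reader is thereby observing.
        return self.cells.get(addr, (self.default, frozenset()))

    def store(self, addr, value, op):
        _, writers = self.cells.get(addr, (self.default, frozenset()))
        self.cells[addr] = (value, writers | {op})

    def cas(self, addr, expected, value, op):
        cur, writers = self.cells.get(addr, (self.default, frozenset()))
        if cur != expected:
            return False      # value unchanged; the write is not recorded
        self.cells[addr] = (value, writers | {op})
        return True

m = Memory()
m.store(0, 1, "o1")
print(m.load(0, "o2"))        # (1, frozenset({'o1'}))
```

An instrumented reader can pass the returned writer set directly to a vis command, as §4 describes.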
The denotation of a program command is a function from a local state to a tuple consisting of a memory command with its argument value, along with an update continuation mapping the memory command’s return value to an updated local state and a function from operation identifiers to LTS actions. We assume the denotation of the ret command yields a local state satisfying the done predicate without executing memory commands, and outputs a corresponding LTS ret action.
Example 7.
A simple goto language over local variables for the memory system of Example 6 would include commands for loading, storing, and compare-and-swapping shared variables, where goto-style control commands update a program counter, and the load command stores the operation identifiers returned from the corresponding memory command. Linearization and visibility actions are captured as dedicated lin and vis program commands.
Atomic sections can be captured with a lock variable and a pair of acquire and release program commands, such that idle states are identified by not holding the lock, as in the initial state.
Figure 1 lists the semantics of a program as an abstract-execution transition system. The states of this system include a global memory, along with a partial function from operation identifiers to local states; the initial state pairs an initial memory with the empty function. The transitions for call and hb actions are enabled independently of implementation state, since they are dictated by implementations’ environments. Although we do not explicitly model client programs and platforms here, in reality, client programs dictate call actions, and platforms, driven by client programs, dictate hb actions; for example, a client which, before invoking one operation, acquires a lock released after another operation, is generally ensured by its platform that the releasing operation happens before the invoked one. The transitions for all other actions are dictated by implementation commands. While the ret, lin, and vis commands generate their corresponding LTS actions, all other commands generate internal transitions.
Each atomic step of the AETS underlying a given program is built from a sequence of steps for the individual program commands in an atomic section. Individual program commands essentially execute one small step from a shared memory and local state, invoking a memory command with its argument, and emitting an action. Besides its effect on shared memory, each step uses the result of the memory command to update the local state and emit an action via the update continuation. Commands which do not access memory are modeled by a no-op memory command. We define the consistency of programs by reduction to their transition systems.
Definition 5.
A program is consistent with a specification iff its semantics is.
Thus the consistency of a program with a specification amounts to the inclusion of the program’s histories in the specification’s. The following corollary of Theorem 3.1 follows directly by Definition 5, and immediately yields a program verification strategy: validate a simulation relation from the states of the program to the states of the specification’s transition system such that each command of the program is simulated by a step of the specification.
Corollary 1.
A program is consistent with a specification if the transition system of the specification simulates the semantics of the program.
4 Proof Methodology
In this section we develop a systematic means of annotating concurrent objects for relaxed-visibility simulation proofs. Besides leveraging an auxiliary memory system which tags memory accesses with the operation identifiers which wrote the values read (see §3.2), annotations signal linearization points with lin commands, and indicate visibility of other operations with vis commands. As in previous works [3, 37, 2, 18] we assume linearization points are given, and focus on visibility-related annotations.
As we focus on data-race free implementations (e.g., Java’s concurrent collections) for which sequential consistency is sound, it can be assumed without loss of generality that the happens-before order is exactly the returns-before order between operations, which orders one operation before another iff the return action of the first occurs in real time before the call action of the second. This assumption allows us to guarantee that linearizations are consistent with happens-before just by ensuring that the linearization point of each operation occurs between its call and return actions (as in standard linearizability). It is without loss of generality because the clients of such implementations can use auxiliary variables to impose synchronization-order constraints between every two operations ordered by returns-before, e.g., writing a variable after each operation returns which is read before each other operation is called (under sequential consistency, every write happens-before every read which reads the written value).
We illustrate our methodology with the key-value map implementation of Figure 2, which models Java’s concurrent hash map. The lines marked in blue and red represent linearization and visibility commands added by the instrumentation described below. Key-value pairs are stored in an array table indexed by keys. The implementations of put and get are straightforward, while the implementation of has, which returns true iff the input value is associated with some key, consists of a while loop traversing the array and searching for the input value. To simplify the exposition, the shared-memory reads and writes are already adapted to the memory system described in Section 3.2 (essentially, this consists of adding new variables storing the set of operation identifiers returned by a shared-memory read). While put and get are obviously linearizable, has is weakly consistent, with monotonic visibility. For instance, in a two-thread program where one thread updates the table while another queries it, it is possible that get(1) returns 1 while has(1) returns false. This is possible in an interleaving where has reads table[0] before put(0,1) writes into it (observing the initial value 0), and table[1] after put(1,0) writes into it (observing value 0 as well). The only abstract execution consistent with the weakly-consistent contains-value map (Example 2) which justifies these return values is given in Example 3. We show that this implementation is consistent with a simplification of the contains-value map, without key-removal operations, and where put operations return no value.
Given an implementation, we consider an instrumentation of it with program commands lin() emitting linearization actions. The execution of lin() in the context of an operation emits a linearization action carrying that operation’s identifier. We assume the instrumentation leads to well-formed executions (e.g., at most one linearization action per operation).
Example 8.
For the implementation in Figure 2, the linearization commands of put and get are executed atomically with the store to table[k] in put and the load of table[k] in get, respectively. The linearization command of has is executed at any point after observing the input value v or after exiting the loop, but before the return. The two choices correspond to different return values and only one of them will be executed during an invocation.
Given an instrumentation, a visibility annotation for the object’s methods, and a read-only predicate on methods, we define a witnessing implementation according to a generic heuristic that depends only on the visibility annotation and the read-only predicate. This definition uses a program command getLin() which returns the set of operations in the current linearization sequence. (We rely on retrieving the identifiers of currently-linearized operations; more complex proofs may also require inspecting, e.g., operation labels and happens-before relationships.) The current linearization sequence is stored in a history variable which is updated with every linearization action by appending the corresponding operation identifier. For readability, we leave this history variable implicit and omit the corresponding updates. As syntactic sugar, we use a command getModLin() which returns the set of modifiers (non-read-only operations) in the current linearization sequence. To represent visibility actions, we use program commands vis(A) where A is a set of operation identifiers. The execution of vis(A) in the context of an operation with identifier o emits a visibility action for every operation in A. The witnessing implementation thus extends the instrumentation with commands generating visibility actions as follows:

for absolute methods, each linearization command is preceded by vis(getLin()) which ensures that the visibility of an invocation includes all the predecessors in linearization order. This is executed atomically with lin().

for monotonic methods, the call action is immediately followed by vis(getModLin()) (executed atomically with the call), which ensures that the visibility of each invocation is monotonic, and every read of a shared variable which has been written by a set O of operations is preceded by vis(O ∩ getModLin()) (executed atomically with the read). The latter is needed so that the visibility of such an invocation contains enough operations to explain its return value (the visibility command attached to call actions is enough to ensure monotonic visibilities).
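The two instrumentation rules above can be sketched as follows. This is a minimal model in Java, not the paper’s code: the command names lin, vis, getLin, and getModLin follow the text, while the data representation and the use of integer operation identifiers are our assumptions.

```java
import java.util.*;

// Sketch of the generic witnessing instrumentation. The history
// variable `lin` records the linearization sequence; vis(id, A)
// records visibility actions for the operation with identifier id.
class Witnessing {
    static List<Integer> lin = new ArrayList<>();            // linearization sequence
    static Set<Integer> modifiers = new HashSet<>();         // non-read-only operations
    static Map<Integer, Set<Integer>> vis = new HashMap<>(); // op id -> visible ops

    static void lin(int id, boolean readOnly) {
        lin.add(id);
        if (!readOnly) modifiers.add(id);
    }

    static Set<Integer> getLin() { return new LinkedHashSet<>(lin); }

    static Set<Integer> getModLin() {
        Set<Integer> s = new LinkedHashSet<>(lin);
        s.retainAll(modifiers);
        return s;
    }

    static void vis(int id, Set<Integer> a) {
        vis.computeIfAbsent(id, k -> new HashSet<>()).addAll(a);
    }

    // Absolute method: vis(getLin()) atomically with lin().
    static void absoluteLinPoint(int id, boolean readOnly) {
        vis(id, getLin());
        lin(id, readOnly);
    }

    // Monotonic method: vis(getModLin()) at the call action, and
    // vis(O ∩ getModLin()) at each read whose cell carries tag set O.
    static void monotonicCall(int id) { vis(id, getModLin()); }
    static void monotonicRead(int id, Set<Integer> tags) {
        Set<Integer> a = new HashSet<>(tags);
        a.retainAll(getModLin());
        vis(id, a);
    }
}
```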
Example 9.
The blue lines in Figure 2 show the visibility commands added by the instrumentation to the key-value map (in this case, the modifiers are put operations). The first visibility command in has precedes the procedure body to emphasize that it is executed atomically with the procedure call. Also, note that the read of the array table is the only shared memory read in has.
Theorem 4.1.
The abstract executions of the witnessing implementation are consistent with the given visibility annotation and read-only predicate.
Proof.
Let e be the abstract execution of a trace of the witnessing implementation, and let o be an invocation in e of a monotonic method. By the definition of the instrumentation, the call action of o is immediately followed by a visibility action for every modifier which has already been linearized. Therefore, any operation which returned before o (i.e., happens-before o) has already been linearized, and it will necessarily have a smaller visibility (w.r.t. set inclusion) because the linearization sequence is modified only by appending new operations. The instrumentation of shared memory reads may add more visibility actions, but this preserves the monotonicity of o’s visibility. The case of absolute methods is obvious. ∎
The consistency of the abstract executions of the witnessing implementation with a given sequential specification, which completes the proof of consistency with a weak-visibility specification, can be proved by showing that the specification’s transition system simulates the witnessing implementation (Theorem 3.1). Defining a simulation relation between the two systems is in part implementation specific; in the following we demonstrate it for the key-value map implementation.
We show that the specification simulates the implementation in Figure 2. A state of this implementation is a valuation of table and of the history variable lin storing the current linearization sequence, together with a valuation of the local variables of each active operation. Also, for a has operation o, we will refer to the maximal index of the array table that o has already read; this index is undefined if o did not yet read any array cell.
Definition 6.
We consider a relation which associates every implementation state s with a specification state, i.e., an abstract execution consistent with the specification, such that:

(1) the operations of the abstract execution are the identifiers occurring in s or in the history variable lin,

(2) for each active operation, its label is defined according to its local state, its return value is undefined, and it is maximal in the happens-before order,

(3) the value of the history variable lin in s equals the linearization sequence of the abstract execution,

(4) every invocation of an absolute method (put or get) has absolute visibility if linearized; otherwise, its visibility is empty,

(5) table is the array obtained by executing the sequence of linearized operations,

(6) for every linearized get(k) operation, the last put(k,_) operation in its visibility w.r.t. the linearization order writes v to key k, where v is the local variable of the get operation,

(7) for every has(v) operation o, the visibility of o consists of:

- all the put operations which returned before o was invoked,

- for each index k up to the maximal index that o has already read, all the put(k,_) operations from a prefix of the linearization that wrote a value different from v,

- all the put(k,_) operations from a prefix of the linearization that ends with a put(k,v) operation, provided that k is at most the maximal index that o has already read.

Above, the linearization prefix associated to an index k should be a prefix of the one associated to k+1.

A large part of this definition is applicable to any implementation; only points (5), (6), and (7) are specific to the implementation we consider. Points (6) and (7) ensure that the return values of operations are consistent with the sequential specification and mimic the effect of the vis commands from Figure 2.
Theorem 4.2.
The relation of Definition 6 is a simulation relation from the implementation in Figure 2 to its specification.
5 Implementation and Evaluation
In this section we put our methodology into practice by verifying two weakly-consistent concurrent objects: Java’s ConcurrentHashMap and ConcurrentLinkedQueue. (Our verified implementations are open source, and available at https://github.com/siddharthkrishna/weakconsistencyproofs.) We use an off-the-shelf deductive verification tool called civl [16], though any concurrent program verifier could suffice. We chose civl because comparable verifiers either require a manual encoding of the concurrency reasoning (e.g., Dafny or Viper), which can be error-prone, or require cumbersome reasoning about interleavings of thread-local histories (e.g., VerCors). An additional benefit of civl is that it directly proves simulation, thereby tying the mechanized proofs to our theoretical development. Our proofs assume no bound on the number of threads or the size of the memory.
Our use of civl imposes two restrictions on the implementations we can verify. First, civl uses the Owicki-Gries method [29] to verify concurrent programs. This method is unsound for weak memory models [22], so civl, and hence our proofs, assume a sequentially-consistent memory model. Second, civl’s strategy for building the simulation relation requires implementations to have statically-known linearization points, because it checks that there is exactly one atomic section in each code path where the global state is modified, and that this modification is simulated by the specification.
Given these restrictions, we can simplify our proof strategy of forward refinement by factoring the simulations we construct through an atomic version of the specification transition system. This atomic specification is obtained from the specification AETS by restricting the interleavings between its transitions.
Definition 7.
The atomic transition system of a specification is the AETS with the same states as the specification’s AETS, whose transitions are those of the specification’s AETS restricted so that the visibility and linearization actions of each operation are executed together in one atomic step.
Note that the language of the atomic transition system is included in the language of the original specification, so simulation proofs towards the atomic system apply to the original specification as well.
Our civl proofs show that there is a simulation from an implementation to its atomic specification, which is encoded as a program whose state consists of the components of an abstract execution: the operation labels, the linearization sequence, and the visibility relation. These were encoded as maps from operation identifiers to values, sequences of operation identifiers, and maps from operation identifiers to sets of operation identifiers, respectively. Our axiomatizations of sequences and sets were adapted from those used by the Dafny verifier [23]. For each method of the ADT, we defined atomic procedures corresponding to call actions, return actions, and combined visibility and linearization actions in order to obtain exactly the transitions of the atomic specification.
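A minimal sketch of this encoding (in Java rather than civl’s input language; class, field, and method names are illustrative, not the actual civl code):

```java
import java.util.*;

// Sketch of the atomic specification's state as first-order data:
// a map from operation identifiers to labels, a linearization
// sequence, and a map from identifiers to visibility sets.
class AbstractExecutionState {
    Map<Integer, String> label = new HashMap<>();     // operation labels
    List<Integer> lin = new ArrayList<>();            // linearization sequence
    Map<Integer, Set<Integer>> vis = new HashMap<>(); // visibility sets

    // Atomic call action: register a fresh operation.
    void call(int id, String l) {
        label.put(id, l);
        vis.put(id, new HashSet<>());
    }

    // Combined visibility + linearization action: one atomic step
    // of the atomic transition system.
    void visAndLin(int id, Set<Integer> visible) {
        vis.get(id).addAll(visible);
        lin.add(id);
    }

    // Return actions must produce values consistent with the
    // operation's visibility; here we only expose the recorded sets.
    Set<Integer> visibilityOf(int id) { return vis.get(id); }
}
```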
It is challenging to encode Java implementations faithfully in civl, as the latter’s input programming language is a basic imperative language lacking many Java features. Most notable among these is dynamic memory allocation on the heap, used by almost all concurrent data structure implementations. As civl is a first-order prover, we needed an encoding of the heap that admits reachability reasoning. We adapted the first-order theory of reachability and footprint sets from the GRASShopper verifier [30] for dynamically allocated data structures. This fragment is decidable, but relies on local theory extensions [36], which we implemented by using the trigger mechanism of the underlying SMT solver [27, 15] to ensure that quantified axioms are only instantiated for program expressions. For instance, the “cycle” axiom says that if a node x has a field f[x] that points to itself, then any y that it can reach via that field (encoded using the between predicate Btwn(f, x, y, y)) must be equal to x.
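In first-order form, this cycle axiom can be sketched as follows (a reconstruction from the description above; the exact Boogie syntax and trigger annotations used in the development may differ):

```latex
\forall x,\, y.\;\; f[x] = x \;\land\; \mathit{Btwn}(f, x, y, y) \;\Longrightarrow\; x = y
```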
We use the trigger known(x), known(y) (known is a dummy function that maps every reference to true) and introduce known(t) terms in our programs for every term t of type Ref (for instance, by adding assert known(t) at the point of the program where t is introduced). This ensures that the cycle axiom is only instantiated for terms that appear in the program, and not for terms that are generated by instantiations of axioms (like f[x] in the cycle axiom). This process was key to keeping the verification time manageable.
Since we consider fine-grained concurrent implementations, we also needed to reason about interference from other threads and show thread safety. civl provides Owicki-Gries [29] style thread-modular reasoning, by means of demarcating atomic blocks and providing preconditions for each block that are checked for stability under all possible modifications by other threads. One consequence is that these annotations can only talk about the local state of a thread and the shared global state, but not about other threads. To encode facts such as the distinctness of operation identifiers and the ownership of unreachable nodes (e.g., newly allocated nodes) in the shared heap, we use civl’s linear type system [40].
For instance, the proof of the push method needs to make assertions about the value of the newly allocated node x. These assertions would not be stable under interference from other threads if we did not have a way of specifying that the address of the new node is known only by the push thread. We encode this knowledge by marking the type of the variable x as linear: this tells civl that the values of x across all threads are distinct, which is sufficient for the proof. civl ensures soundness by making sure that linear variables are not duplicated (for instance, they cannot be passed to another method and then used afterwards).
Module                      | Code | Proof | Total | Time (s)
----------------------------|------|-------|-------|---------
Sets and Sequences          |  -   |   85  |   85  |    -
Executions and Consistency  |  -   |   30  |   30  |    -
Heap and Reachability       |  -   |   35  |   35  |    -
Map ADT                     |  51  |   34  |   85  |    -
Array-map implementation    | 138  |  175  |  313  |    6
Queue ADT                   |  50  |   22  |   72  |    -
Linked Queue implementation | 280  |  325  |  605  |   13
We evaluate our proof methodology by considering models of two of Java’s weakly-consistent concurrent objects.
5.0.1 Concurrent Hash Map
The first is the ConcurrentHashMap implementation of the Map ADT, consisting of absolute put and get methods and a monotonic has method that follows the algorithm given in Figure 2. For simplicity, we assume keys are integers and the hash function is the identity, but note that the proof of monotonicity of has is not affected by these assumptions.
civl can construct a simulation relation equivalent to the one defined in Definition 6 automatically, given an inductive invariant that relates the state of the implementation to the abstract execution. A first attempt at an invariant might be that, for every key k, the value stored at table[k] is the same as the value returned by a get operation on k executed by the specification AETS. This invariant is sufficient for civl to prove that the return values of the absolute methods (put and get) are consistent with the specification.
However, it is not enough to show that the return value of the monotonic has method is consistent with its visibility. This is because our proof technique constructs a visibility set for has by taking the union of the memory tags (the set of operations that wrote to each memory location) of each table entry it reads, but without additional invariants this visibility set could entail a different return value. We thus strengthen the invariant to say that tableTags[k], the set of memory tags associated with hash table entry k, is exactly the set of linearized put operations with key k. A consequence is that the abstract state encoded by tableTags[k] has the same value for key k as the value stored at table[k]. civl can then prove, given a suitable loop invariant, that the value returned by has is consistent with its visibility set.
This loop invariant says that among the entries scanned thus far, the abstract map given by the projection of lin to the current operation’s visibility my_vis does not include value v.
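The strengthened invariant can be sketched in code as follows. The names tableTags and my_vis follow the text; the representation and the single-threaded model are our assumptions: each put tags the entry it writes, and has accumulates its visibility set as the union of the tags of the entries it reads, so the result of the scan is justified by that visibility by construction.

```java
import java.util.*;

// Sketch: tableTags.get(k) is exactly the set of linearized put
// identifiers that wrote entry k; myVis is the visibility set
// accumulated by the most recent has scan.
class TaggedHas {
    int[] table;
    List<Set<Integer>> tableTags;
    Set<Integer> myVis = new HashSet<>(); // visibility of the last has

    TaggedHas(int n) {
        table = new int[n];
        tableTags = new ArrayList<>();
        for (int i = 0; i < n; i++) tableTags.add(new HashSet<>());
    }

    void put(int id, int k, int v) {
        table[k] = v;
        tableTags.get(k).add(id); // tag the entry with the put's id
    }

    boolean has(int v) {
        myVis = new HashSet<>();
        for (int k = 0; k < table.length; k++) {
            myVis.addAll(tableTags.get(k)); // vis at each read
            if (table[k] == v) return true;
        }
        return false;
    }
}
```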
5.0.2 Concurrent Linked Queue
Our second case study is the ConcurrentLinkedQueue implementation of the Queue ADT, consisting of absolute push and pop methods and a monotonic size method that traverses the queue from head to tail without any locks and returns the number of nodes it sees (see Figure 4 for the full code). We again model the core algorithm (the Michael-Scott queue [26]) and omit some of Java’s optimizations, for instance setting the next field of popped nodes to themselves to speed up garbage collection, or setting the values of nodes to null when popping.
The invariants needed to verify the absolute methods are a straightforward combination of structural invariants (e.g. that the queue is composed of a linked list from the head to null, with the tail being a member of this list) and a relation between the abstract and concrete states. Once again, we need to strengthen this invariant in order to verify the monotonic size method, because otherwise we cannot prove that the visibility set we construct (by taking the union of the memory tags of nodes in the list during traversal) justifies the return value.
The key additional invariant is that the memory tags for the next field of each node x in the queue (denoted x.nextTags) contain the operation label of the operation that pushed the next node into the queue (if it exists). Further, the operations in lin are exactly the operations in the nextTags fields of nodes in the queue, in the order in which the nodes occur in the queue. Given this invariant, one can show that the return value computed by size is consistent with the visibility set it constructs by picking up the memory tags from each node it traverses (the loop invariant is more involved since, due to concurrent updates, size could be traversing nodes that have already been popped from the queue).
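The tag discipline above can be sketched as follows. This is a single-threaded model, not the lock-free Java code: each push tags the link it creates with its own identifier, and size collects those tags along its traversal as its visibility set, which then justifies the count it returns.

```java
import java.util.*;

// Sketch of the size traversal with memory tags on next fields:
// x.nextTags holds the identifier of the push that linked x's
// successor, mirroring the invariant in the text.
class TaggedQueue {
    static class Node {
        int value;
        Node next;
        Set<Integer> nextTags = new HashSet<>();
    }

    Node head = new Node(); // sentinel node

    void push(int id, int v) {
        Node tail = head;
        while (tail.next != null) tail = tail.next;
        Node n = new Node();
        n.value = v;
        tail.next = n;
        tail.nextTags.add(id); // tag the new link with the push's id
    }

    // Traverse head to null, counting nodes and collecting the tags
    // that justify the count as this operation's visibility set.
    int size(Set<Integer> myVis) {
        int s = 0;
        for (Node x = head; x.next != null; x = x.next) {
            myVis.addAll(x.nextTags);
            s++;
        }
        return s;
    }
}
```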
5.0.3 Results
Figure 3 provides a summary of our case studies. We separate the table into sections, one for each case study, and a common section at the top that contains the common theories of sets and sequences and our encoding of the heap. In each case study section, we separate the definitions of the atomic specification of the ADT (which can be reused for other implementations) from the code and proof of the implementation we consider. For each resulting module, we list the number of lines of code, lines of proof, total lines, and civl’s verification time in seconds. Experiments were conducted on an Intel Core i7-4470 3.4 GHz 8-core machine with 16 GB RAM.
Our two case studies are representative of the weakly-consistent behaviors exhibited by all the Java concurrent objects studied in [13], both those using fixed-size arrays and those using dynamic memory. As civl does not directly support dynamic memory and other Java language features, we were forced to make certain simplifications to the algorithms in our verification effort. However, the assumptions we make are orthogonal to the reasoning and proof of weak consistency of the monotonic methods. The underlying algorithm used by, and hence the proof argument for monotonicity of, the hash map’s has method is the same as that of other monotonic hash map operations such as elements, entrySet, and toString. Similarly, the argument used for the queue’s size can be adapted to other monotonic ConcurrentLinkedQueue and LinkedTransferQueue operations like toArray and toString. Thus, our proofs carry over to the full versions of the implementations, as the key invariants linking the memory tags and visibility sets to the specification state are the same.
In addition, civl does not currently have any support for inferring the preconditions of each atomic block, which currently accounts for most of the lines of proof in our case studies. However, these problems have been studied and solved in other tools [30, 39], and in theory can be integrated with civl in order to simplify these kinds of proofs.
In conclusion, our case studies show that verifying weakly-consistent operations introduces little overhead compared to the proofs of the core absolute operations. The additional invariants needed to prove monotonicity were natural and easy to construct. We also see that our methodology brings weak-consistency proofs within the scope of what is provable by off-the-shelf automated concurrent program verifiers in reasonable time.
6 Related Work
Though linearizability [18] has reigned as the de facto concurrent-object consistency criterion, several recent works have proposed weaker criteria, including quantitative relaxation [17], quiescent consistency [10], and local linearizability [14]; these works effectively permit externally-visible interference among threads by altering objects’ sequential specifications, each in their own way. Motivated by the diversity of these proposals, Sergey et al. [35] proposed the use of Hoare logic to describe a custom consistency specification for each concurrent object. Raad et al. [31] continued in this direction by proposing declarative consistency models for concurrent objects atop weak-memory platforms. One common feature between our paper and this line of work (see also [21, 9]) is encoding and reasoning directly about the concurrent history. The notion of visibility relaxation [13] originates from Burckhardt et al.’s axiomatic specifications [7], and leverages traditional sequential specifications by allowing certain operations to behave as if they are unaware of concurrently-executed linearization-order predecessors. The linearization (and visibility) actions of our simulation-proof methodology are unique to visibility-relaxation-based weak consistency, since they refer to a global linearization order linking executions with sequential specifications.
Typical methodologies for proving linearizability are based on reductions to safety verification [8, 5] and forward simulation [3, 37, 2], the latter generally requiring the annotation of per-operation linearization points, each typically associated with a single program statement in the given operation, e.g., a shared memory access. Extensions to this methodology include cooperation [38, 12, 41], i.e., allowing operations’ linearization points to coincide with other operations’ statements, and prophecy [33, 24], i.e., allowing operations’ linearization points to depend on future events. Such extensions enable linearizability proofs of objects like the Herlihy-Wing Queue (HWQ). While prophecy [1], alternatively backward simulation [25], is generally more powerful than forward simulation alone, Bouajjani et al. [6] described a methodology based on forward simulation capable of proving seemingly future-dependent objects like HWQ by considering fixed linearization points only for value removal, and an additional kind of specification-simulated action, commit points, corresponding to operations’ final shared-memory accesses. Our consideration of specification-simulated visibility actions follows this line of thinking, enabling forward-simulation-based proofs of weakly-consistent concurrent objects.
7 Conclusion and Future Work
This work develops the first verification methodology for weakly-consistent operations using sequential specifications and forward simulation, thus reusing existing sequential ADT specifications and enabling simple reasoning, i.e., without prophecy [1] or backward simulation [25]. While we have already demonstrated the application to absolute and monotonic methods on sequentially-consistent memory, our formalization is general, and also applicable to the other visibility relaxations, e.g., the peer and weak visibilities [13], and weaker memory models, e.g., the Java memory model. These extensions amount to devising additional visibility-action inference strategies (§4) and alternate memory-command denotations (§3.2).
As with systematic or automated linearizability-proof methodologies, our proof methodology is susceptible to two potential sources of incompleteness. First, as mentioned in Section 3, methodologies like ours based on forward simulation are only complete when specifications are return-value deterministic. However, data types are typically designed to be return-value deterministic, and this source of incompleteness does not manifest in practice.
Second, methodologies like ours based on annotating program commands, e.g., with linearization points, are generally incomplete, since the consistency mechanism employed by a given implementation may not admit characterization according to a given static annotation scheme; the Herlihy-Wing Queue, whose linearization points depend on the results of future actions, is a prototypical example [18]. Likewise, our systematic strategy for annotating implementations with lin and vis commands (§3) can fail to prove the consistency of future-dependent operations. However, we have yet to observe any practical occurrence of such exotic objects; our strategy is sufficient for verifying the weakly-consistent algorithms implemented in the Java development kit. As a theoretical curiosity, it would be interesting future work to investigate the potential for complete annotation strategies, e.g., for restricted classes of data types and/or implementations.
Finally, while civl’s high degree of automation facilitated rapid prototyping of our simulation proofs, its underlying foundation of Owicki-Gries style proof rules limits the potential for modular reasoning. In particular, while our weak-consistency proofs are thread-modular, our invariants and intermediate assertions necessarily talk about state shared among multiple threads. Since our simulation-based methodology and annotations are completely orthogonal to the underlying program logic, it would be interesting future work to apply our methodology using expressive logics like Rely-Guarantee, e.g., [19, 38], or variations of Concurrent Separation Logic, e.g., [28, 32, 34, 35, 4, 20]. It remains to be seen to what degree increased modularity may sacrifice automation in the application of our weak-consistency proof methodology.
References
 [1] Abadi, M., Lamport, L.: The existence of refinement mappings. Theor. Comput. Sci. 82(2), 253–284 (1991)
 [2] Abdulla, P.A., Haziza, F., Holík, L., Jonsson, B., Rezine, A.: An integrated specification and verification technique for highly concurrent data structures. STTT 19(5), 549–563 (2017)
 [3] Amit, D., Rinetzky, N., Reps, T.W., Sagiv, M., Yahav, E.: Comparison under abstraction for verifying linearizability. In: CAV. Lecture Notes in Computer Science, vol. 4590, pp. 477–490. Springer (2007)
 [4] Blom, S., Darabi, S., Huisman, M., Oortwijn, W.: The vercors tool set: Verification of parallel and concurrent software. In: IFM. Lecture Notes in Computer Science, vol. 10510, pp. 102–110. Springer (2017)
 [5] Bouajjani, A., Emmi, M., Enea, C., Hamza, J.: On reducing linearizability to state reachability. Inf. Comput. 261(Part), 383–400 (2018)
 [6] Bouajjani, A., Emmi, M., Enea, C., Mutluergil, S.O.: Proving linearizability using forward simulations. In: CAV (2). Lecture Notes in Computer Science, vol. 10427, pp. 542–563. Springer (2017)
 [7] Burckhardt, S., Gotsman, A., Yang, H., Zawirski, M.: Replicated data types: specification, verification, optimality. In: POPL. pp. 271–284. ACM (2014)
 [8] Chakraborty, S., Henzinger, T.A., Sezgin, A., Vafeiadis, V.: Aspect-oriented linearizability proofs. Logical Methods in Computer Science 11(1) (2015)
 [9] Delbianco, G.A., Sergey, I., Nanevski, A., Banerjee, A.: Concurrent data structures linked in time. In: ECOOP. LIPIcs, vol. 74, pp. 8:1–8:30. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2017)
 [10] Derrick, J., Dongol, B., Schellhorn, G., Tofan, B., Travkin, O., Wehrheim, H.: Quiescent consistency: Defining and verifying relaxed linearizability. In: FM. Lecture Notes in Computer Science, vol. 8442, pp. 200–214. Springer (2014)
 [11] Dongol, B., Jagadeesan, R., Riely, J., Armstrong, A.: On abstraction and compositionality for weakmemory linearisability. In: VMCAI. Lecture Notes in Computer Science, vol. 10747, pp. 183–204. Springer (2018)
 [12] Dragoi, C., Gupta, A., Henzinger, T.A.: Automatic linearizability proofs of concurrent objects with cooperating updates. In: CAV. Lecture Notes in Computer Science, vol. 8044, pp. 174–190. Springer (2013)
 [13] Emmi, M., Enea, C.: Weak-consistency specification via visibility relaxation. PACMPL 3(POPL), 60:1–60:28 (2019)
 [14] Haas, A., Henzinger, T.A., Holzer, A., Kirsch, C.M., Lippautz, M., Payer, H., Sezgin, A., Sokolova, A., Veith, H.: Local linearizability for concurrent container-type data structures. In: CONCUR. LIPIcs, vol. 59, pp. 6:1–6:15. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2016)
 [15] Hawblitzel, C., Petrank, E.: Automated verification of practical garbage collectors. Logical Methods in Computer Science 6(3) (2010)
 [16] Hawblitzel, C., Petrank, E., Qadeer, S., Tasiran, S.: Automated and modular refinement reasoning for concurrent programs. In: CAV (2). Lecture Notes in Computer Science, vol. 9207, pp. 449–465. Springer (2015)
 [17] Henzinger, T.A., Kirsch, C.M., Payer, H., Sezgin, A., Sokolova, A.: Quantitative relaxation of concurrent data structures. In: POPL. pp. 317–328. ACM (2013)
 [18] Herlihy, M., Wing, J.M.: Linearizability: A correctness condition for concurrent objects. ACM Trans. Program. Lang. Syst. 12(3), 463–492 (1990)
 [19] Jones, C.B.: Specification and design of (parallel) programs. In: IFIP Congress. pp. 321–332. NorthHolland/IFIP (1983)
 [20] Jung, R., Krebbers, R., Jourdan, J., Bizjak, A., Birkedal, L., Dreyer, D.: Iris from the ground up: A modular foundation for higherorder concurrent separation logic. J. Funct. Program. 28, e20 (2018)
 [21] Khyzha, A., Dodds, M., Gotsman, A., Parkinson, M.J.: Proving linearizability using partial orders. In: ESOP. Lecture Notes in Computer Science, vol. 10201, pp. 639–667. Springer (2017)
 [22] Lahav, O., Vafeiadis, V.: Owicki-Gries reasoning for weak memory models. In: ICALP (2). Lecture Notes in Computer Science, vol. 9135, pp. 311–323. Springer (2015)
 [23] Leino, K.R.M.: Dafny: An automatic program verifier for functional correctness. In: LPAR (Dakar). Lecture Notes in Computer Science, vol. 6355, pp. 348–370. Springer (2010)
 [24] Liang, H., Feng, X.: Modular verification of linearizability with nonfixed linearization points. In: PLDI. pp. 459–470. ACM (2013)
 [25] Lynch, N.A., Vaandrager, F.W.: Forward and backward simulations: I. untimed systems. Inf. Comput. 121(2), 214–233 (1995)
 [26] Michael, M.M., Scott, M.L.: Simple, fast, and practical nonblocking and blocking concurrent queue algorithms. In: PODC. pp. 267–275. ACM (1996)
 [27] Moskal, M., Lopuszanski, J., Kiniry, J.R.: E-matching for fun and profit. Electr. Notes Theor. Comput. Sci. 198(2), 19–35 (2008)
 [28] O’Hearn, P.W.: Resources, concurrency and local reasoning. In: CONCUR. Lecture Notes in Computer Science, vol. 3170, pp. 49–67. Springer (2004)
 [29] Owicki, S.S., Gries, D.: Verifying properties of parallel programs: An axiomatic approach. Commun. ACM 19(5), 279–285 (1976)
 [30] Piskac, R., Wies, T., Zufferey, D.: GRASShopper: complete heap verification with mixed specifications. In: TACAS. Lecture Notes in Computer Science, vol. 8413, pp. 124–139. Springer (2014)
 [31] Raad, A., Doko, M., Rozic, L., Lahav, O., Vafeiadis, V.: On library correctness under weak memory consistency: specifying and verifying concurrent libraries under declarative consistency models. PACMPL 3(POPL), 68:1–68:31 (2019)
 [32] Reynolds, J.C.: Separation logic: A logic for shared mutable data structures. In: LICS. pp. 55–74. IEEE Computer Society (2002)
 [33] Schellhorn, G., Wehrheim, H., Derrick, J.: How to prove algorithms linearisable. In: CAV. Lecture Notes in Computer Science, vol. 7358, pp. 243–259. Springer (2012)
 [34] Sergey, I., Nanevski, A., Banerjee, A.: Mechanized verification of finegrained concurrent programs. In: PLDI. pp. 77–87. ACM (2015)
 [35] Sergey, I., Nanevski, A., Banerjee, A., Delbianco, G.A.: Hoarestyle specifications as correctness conditions for nonlinearizable concurrent objects. In: OOPSLA. pp. 92–110. ACM (2016)
 [36] SofronieStokkermans, V.: Hierarchic reasoning in local theory extensions. In: CADE. Lecture Notes in Computer Science, vol. 3632, pp. 219–234. Springer (2005)
 [37] Vafeiadis, V.: Shapevalue abstraction for verifying linearizability. In: VMCAI. Lecture Notes in Computer Science, vol. 5403, pp. 335–348. Springer (2009)
 [38] Vafeiadis, V.: Automatically proving linearizability. In: CAV. Lecture Notes in Computer Science, vol. 6174, pp. 450–464. Springer (2010)
 [39] Vafeiadis, V.: RGSep action inference. In: VMCAI. Lecture Notes in Computer Science, vol. 5944, pp. 345–361. Springer (2010)
 [40] Wadler, P.: Linear types can change the world! In: Programming Concepts and Methods. p. 561. NorthHolland (1990)
 [41] Zhu, H., Petri, G., Jagannathan, S.: Poling: SMT aided linearizability proofs. In: CAV (2). Lecture Notes in Computer Science, vol. 9207, pp. 3–19. Springer (2015)
Appendix 0.A Proofs to Theorems and Lemmas
Lemma 1.
The abstract executions of a weak-visibility specification are consistent with it.
Proof.
Any complete, sequential, and absolute execution is consistent by definition, since the labeling of its linearization is taken from the sequential specification. Then, any happensbefore weakening is consistent for exactly the same reason as its source execution, since its linearization and visibility projection are both identical. Finally, any visibility weakening is consistent by the condition of consistency in its definition. ∎
Lemma 2.
A weak-visibility specification and its transition system have identical histories.
Proof.
It follows almost immediately that the abstract executions of a specification are identical to those of its transition system, since the transition system’s state effectively records the abstract execution of a given AETS execution, and only enables those returns that are consistent with the specification. Since histories are the projections of abstract executions, the corresponding history sets are also identical. ∎
Theorem 3.1.
A witnessing implementation is consistent with a weak-visibility specification if the specification's transition system simulates the implementation.
Proof.
This follows from standard arguments, given that the corresponding SLTSs include transitions to ensure that every move of one system can be matched by stuttering from the other: since both systems synchronize on the call, ret, hb, lin, and vis actions, the simulation guarantees that every abstract execution, and thus history, of the implementation is matched by one of the specification's transition system. Then by Lemma 2, the histories of the implementation are included in those of the specification. ∎
Theorem 4.2.
The given relation is a simulation relation from the implementation to the specification's transition system.
Sketch.
We show that every step of the implementation, i.e., an atomic section or a program command, is simulated by the specification's transition system. Given a pair of states related by the simulation relation, we consider the different implementation steps which are possible from the implementation's state.
The case of commands corresponding to procedure calls of put and get is trivial. Executing a procedure call in the implementation leads to a new state which differs only by having a new active operation. The simulation relation is preserved: the matching abstract execution is obtained from the previous one by adding the new operation with an appropriate label and an empty visibility.
The transition corresponding to the atomic section of put is labeled by a sequence of visibility actions (one for each linearized operation) followed by a linearization action. This transition leads to a state where the array table may have changed (unless the same value is written), and the history variable lin is extended with the put operation executing this step. We define a new abstract execution from the previous one by updating its linearization to the new value of lin, and defining an absolute visibility for the put operation. This transition is enabled in the specification's transition system because the new abstract execution is consistent with the specification. The simulation relation is preserved because the validity of (3), (4), and (5) follows directly from its definition. The atomic section of get can be handled in a similar way. The simulation of return actions of get operations is a direct consequence of point (6), which ensures consistency of the return value with the specification.
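The instrumented atomic section of put described here can be sketched as follows; the class name, the lock standing in for the atomic section, and the field names are our illustrative assumptions:

```python
# Illustrative sketch of put's atomic section: the table update, the
# extension of the history variable lin, and the absolute visibility
# assigned to the put operation all happen in one atomic step.
import threading

class InstrumentedMap:
    def __init__(self, size=16):
        self.table = [None] * size
        self.lin = []                 # history variable: linearization order
        self.vis = {}                 # visibility chosen for each operation
        self.lock = threading.Lock()  # stands in for the atomic section

    def put(self, op_id, k, v):
        with self.lock:               # atomic section of put
            self.table[k] = v         # table may change (unless same value)
            # absolute visibility: put sees every operation linearized so far
            self.vis[op_id] = set(self.lin)
            self.lin.append(op_id)    # extend the history variable lin
```

In this sketch the visibility action for each previously linearized operation and the linearization action of the put itself are collapsed into the two assignments inside the lock, matching the labeled transition sequence described in the proof.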
For has, we focus on the atomic sections containing vis commands and on the linearization commands (the other internal steps are simulated by corresponding steps of the specification's transition system, and the simulation of the return step follows directly from (7), which justifies the consistency of the return value). The atomic section around the procedure call corresponds to a transition labeled by a sequence of visibility actions (one for each linearized modifier) and leads to a state with a new active has operation. This transition is enabled because the resulting abstract execution is consistent with the specification: the visibility of the has operation is not constrained, since it has not yet been linearized, and the consistency of the remaining operations follows from the consistency of the previous abstract execution. The simulation relation is preserved because (7) is clearly valid. The atomic section around the read of table[k] is simulated in a similar way, noticing that (7) models precisely the effect of the visibility commands inside this atomic section. For the simulation of the linearization commands, it is important to notice that any active has operation has a visibility that contains all modifiers which returned before it was called and, as explained above, this visibility is monotonic. ∎
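The shape of the has operation argued about here, with visibility fixed at the call and only growing at each cell read, can be sketched in a self-contained way; all names and the lock-based rendering of the atomic sections are our illustrative assumptions:

```python
# Illustrative sketch of has(v): visibility is set at the call to all
# modifiers linearized so far, and only grows (monotonically) as each
# table cell is read inside its own atomic section.
import threading

class SketchMap:
    def __init__(self, size=4):
        self.table = [None] * size
        self.lin = []                 # linearized modifiers, in order
        self.vis = {}                 # visibility chosen for each operation
        self.lock = threading.Lock()  # stands in for the atomic sections

    def put(self, op, k, v):
        with self.lock:               # modifier's atomic section
            self.table[k] = v
            self.vis[op] = set(self.lin)
            self.lin.append(op)

    def has(self, op, v):
        with self.lock:               # atomic section around the call
            self.vis[op] = set(self.lin)
        found = False
        for k in range(len(self.table)):
            with self.lock:           # atomic section around read of table[k]
                self.vis[op] |= set(self.lin)  # visibility only grows
                if self.table[k] == v:
                    found = True
        return found
```

The union assignment inside the loop is what makes the visibility monotonic: a modifier that returned before the has was called is linearized by then and stays in the visibility at every subsequent read, which is the property the proof relies on for the linearization commands.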