Order out of Chaos: Proving Linearizability Using Local Views

by Yotam M. Y. Feldman et al.

Proving the linearizability of highly concurrent data structures, such as those using optimistic concurrency control, is a challenging task. The main difficulty is in reasoning about the view of the memory obtained by the threads, because as they execute, threads observe different fragments of the data structure from different points in time. Until today, every linearizability proof has tackled this challenge from scratch. We present a unifying proof argument capable of proving the linearizability of several highly concurrent data structures, including an optimistic self-balancing binary search tree and the Lazy List algorithm. Our framework facilitates sequential reasoning about the view of a thread, as if it traverses the data structure without interference from other operations. Our key contribution is showing that properties of reachability along search paths can be deduced for concurrent traversals from interference-free traversals. This greatly simplifies linearizability proofs. At the heart of our proof method lies a notion of order on the memory, corresponding to the order in which locations in memory are read by the threads, which guarantees a certain notion of consistency between the view of the thread and the actual memory. To apply our framework, the user proves that the data structure satisfies certain conditions, relating to acyclicity of the data structure and the preservation of search paths to locations affected by interfering writes. Establishing the conditions, as well as the full linearizability proof, reduces to simple concurrent reasoning. The result is a clear and comprehensible correctness proof. Our framework elucidates common patterns underlying several existing data structures, and could pave the way to design new data structures based on these principles.






1 Introduction

Concurrent data structures must minimize synchronization to obtain high performance [16, 28]. Many concurrent search data structures therefore use optimistic designs, which search the data structure without locking or otherwise writing to memory, and write to shared memory only when modifying the data structure. Thus, in these designs, operations that do not modify the same nodes do not synchronize with each other; in particular, searches can run in parallel, allowing for high performance and scalability. Optimistic designs are now common in concurrent search trees [3, 10, 11, 14, 17, 19, 29, 37, 42], skip lists [13, 21, 27], and lists/hash tables [23, 24, 36, 46].

A major challenge in developing an optimistic search data structure is proving linearizability [26], i.e., that every operation appears to take effect atomically at some point in time during its execution. Usually, the key difficulty is proving properties of unsynchronized searches [38, 33, 49, 28], as they can observe an inconsistent state of the data structure—for example, due to observing only some of the writes performed by an update operation, or only some update operations but not others. Arguing about such searches requires tricky concurrent reasoning about the possible interleaving of reads and writes of the operations. Today, every new linearizability proof tackles these problems from scratch, leading to long and complex proofs.

Our approach: local view arguments. This paper presents a unifying proof argument for proving linearizability of concurrent data structures with unsynchronized searches that replaces the difficult concurrent reasoning described above with sequential reasoning about a search, which does not consider interference from other operations. Our main contribution is a framework for establishing properties of an unsynchronized search in a concurrent execution by reasoning only about its local view—the (potentially inconsistent) picture of memory it observes as it traverses the data structure. We refer to such proofs as local view arguments. We show that under the two (widely applicable) conditions listed below, the existence of a path to the searched node in the local view, deduced with sequential reasoning, also holds at some point during the actual (concurrent) execution of the traversal. (This includes the case of non-existence of a key, indicated by a path to null.) Such reachability properties are typically key to the linearizability proofs of many prominent concurrent search data structures with unsynchronized searches [16]. Once these properties are established, the rest of the linearizability proof requires only simple concurrent reasoning.

Applying a local view argument requires establishing the following two conditions: (i) temporal acyclicity, which states that the search follows an order on the memory that is acyclic across intermediate states throughout the concurrent execution; and (ii) preservation, which states that whenever a node is changed, if it was on a search path for some key in the past, then it is also on such a search path at the time of the change. Although these conditions refer to concurrent executions, proving them for the data structures we consider is straightforward.

More generally, these conditions can be established with inductive proofs that are simplified by relying on the very same traversal properties obtained with the local view argument. This seemingly circular reasoning is sound because our framework is also proven inductively: for executions of length n, both (1) the proof that the data structure satisfies the conditions and (2) the proof that the traversal properties follow from the local view argument can rely on the correctness of the other proof for shorter executions.

Simplifying linearizability proofs with local view arguments. To harness local view arguments, our approach uses assertions in the code as a way to divide the proof between (1) the linearizability proof, which relies on the assertions, and (2) the proof of the assertions, where the challenge of establishing properties of unsynchronized searches in concurrent executions is overcome by local view arguments.
We illustrate this approach with a proof of the linearizability of a self-balancing binary search tree (Sec. 2), using local view arguments (Sec. 3).

Overall, our proof argument yields clear and comprehensible linearizability proofs, whose whole is (in some sense) greater than the sum of the parts, since each of the parts requires a simpler form of reasoning compared to contemporary linearizability proofs. We use local view arguments to devise simple linearizability proofs of a variant of the contention-friendly tree [14] (a self-balancing search tree), lists with lazy [24] or non-blocking [28] synchronization, and a lock-free skip list.

Our framework’s acyclicity and preservation conditions can provide insight into algorithm design, in that their proofs can reveal unnecessary protections against interference. Indeed, our proof attempts exposed (small) parts of the search tree algorithm that were not needed to guarantee linearizability, leading us to consider a simpler variant of its search operation (see Remark 1).

Contributions. To summarize, we make the following contributions:

  1. We provide a set of conditions under which reachability properties of local views, established using sequential reasoning, hold also for concurrent executions,

  2. We show that these conditions hold for non-trivial concurrent data structures that use unsynchronized searches, and

  3. We demonstrate that the properties established using local view arguments enable simple linearizability proofs, alleviating the need to consider interleavings of reads and writes during searches.

2 Motivating Example

type N
  int key
  N left, right
  bool del, rem

N root ← new N()

N×N locate(int k)
  x, y ← root
  while (y ≠ null ∧ y.key ≠ k)
    x ← y
    if (k < x.key) y ← x.left else y ← x.right
  { past(reach(x, k)) ∧ past(reach(y, k))
    ∧ x.key ≠ k ∧ (y ≠ null ⟹ y.key = k) }
  return (x, y)

bool contains(int k)
  (_, y) ← locate(k)
  if (y = null)
    return false
  if (y.del)
    return false
  return true

bool delete(int k)
  (_, y) ← locate(k)
  if (y = null)
    return false
  lock(y)
  if (y.rem) restart
  ret ← ¬y.del
  { reach(y, k) ∧ y.key = k ∧ ¬y.rem }
  y.del ← true
  return ret

bool insert(int k)
  (x, y) ← locate(k)
  if (y ≠ null)
    lock(y)
    if (y.rem) restart
    ret ← y.del
    { reach(y, k) ∧ y.key = k ∧ ¬y.rem }
    y.del ← false
    return ret
  lock(x)
  if (x.rem) restart
  if (k < x.key ∧ x.left = null)
    { reach(x, k) ∧ ¬x.rem
      ∧ k < x.key ∧ x.left = null }
    x.left ← new N(k)
  else if (k > x.key ∧ x.right = null)
    { reach(x, k) ∧ ¬x.rem
      ∧ k > x.key ∧ x.right = null }
    x.right ← new N(k)
  else
    restart
  return true

removeRight()
  (z, _) ← locate(*)
  lock(z)
  y ← z.right
  if (y = null ∨ z.rem)
    return
  lock(y)
  if (¬y.del)
    return
  if (y.left = null) z.right ← y.right
  else if (y.right = null) z.right ← y.left
  else
    return
  y.rem ← true

rotateRight()
  (p, _) ← locate(*)
  lock(p)
  y ← p.left
  if (y = null ∨ p.rem)
    return
  lock(y)
  x ← y.left
  if (x = null)
    return
  lock(x)
  z ← duplicate(y)
  z.left ← x.right   // z takes y’s place
  x.right ← z        // z is linked before y is unlinked
  p.left ← x
  y.rem ← true
Figure 1: Running example. For brevity, unlock operations are omitted; a procedure releases all the locks it acquired when it terminates or restarts. The symbol * denotes an arbitrary key.

As a motivating example we consider a self-balancing binary search tree with optimistic, read-only searches. This is an example of a concurrent data structure for which it is challenging to prove linearizability “from scratch.” The algorithm is based on the contention-friendly (CF) tree [12, 14]. It is a fine-grained lock-based implementation of a set object with the standard insert, delete, and contains operations. For simplicity, we assume that keys are natural numbers. The algorithm maintains an internal binary tree that stores a key in every node. Similarly to the lazy list [24], the algorithm distinguishes between the logical deletion of a key, which removes it from the set represented by the tree, and the physical removal that unlinks the node containing the key from the tree.

We use this algorithm as a running example to illustrate how our framework allows one to lift sequential reasoning into assertions about concurrent executions, which are in turn used to prove linearizability. In this section, we present the algorithm and explain the linearizability proof based on the assertions, highlighting the significant role of local view arguments in the proof.

Fig. 1 shows the code of the algorithm. (The code is annotated with assertions written inside curly braces, which the reader should ignore for now; we explain them in Sec. 2.1.) Nodes contain two boolean fields, del and rem, which indicate whether the node is logically deleted and physically removed, respectively. Modifications of a node in the tree are synchronized with the node’s lock. Every operation starts with a call to locate, which performs a standard binary tree search—without acquiring any locks—to locate the node with the target key k. This method returns the last link it traverses, (x, y). Thus, if k is found, y.key = k; if k is not found, y = null and x is the node that would be the parent of a node with key k if it were inserted. A delete logically deletes y after verifying that y remained linked to the tree after its lock was acquired. An insert either revives a logically deleted node or, if k was not found, links a new node to the tree. A contains returns true if it locates a node with key k that is not logically deleted, and false otherwise.

Physical removal of nodes and balancing of the tree’s height are performed using auxiliary methods. (The reader should assume that these methods can be invoked at any time; the details of when the algorithm decides to invoke them are not material for correctness. For example, in [12, 14], these methods are invoked by a dedicated restructuring thread.)

(a) Right rotation of y. (The bold green link is the one written in each step. The node with a dashed border has its rem bit set.)
(b) A new node is added after the right rotation of y, when y is no longer in the tree.
Figure 2: A right rotation, and how it can lead a search to observe an inconsistent state of the tree. If a node is added below y’s replacement after the rotation, a search that starts before the rotation and pauses at y during the rotation will traverse a path containing both y and the new node, although the two never exist simultaneously in the tree.

The algorithm physically removes only nodes with at most one child. The removeRight method unlinks such a node y that is a right child, and sets its rem field to notify threads that have reached the node of its removal. (We omit the symmetric removeLeft.) Balancing is done using rotations. Fig. 2 depicts the operation of rotateRight, which needs to rotate node y down. (We omit the symmetric operations.) It creates a new node with the same key and del bit as y to take y’s place, leaving y unchanged except for having its rem bit set. A similar technique for rotations is used in lock-free search trees [10].
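To make the ordering of the rotation’s writes concrete, the following is a minimal sequential sketch of the rotation step described above (the class and function names are ours, not the paper’s); the point it illustrates is that the duplicate z is linked below x before y is unlinked, so a search path to y’s key exists throughout.

```python
# Illustrative sketch of the right rotation of Fig. 2; not the paper's code.
class N:
    def __init__(self, key, left=None, right=None, dele=False, rem=False):
        self.key, self.left, self.right = key, left, right
        self.del_, self.rem = dele, rem   # logical deletion / physical removal

def rotate_right(p):
    """Rotate y = p.left down: y's left child x moves up, a fresh copy z of y
    becomes x's right child, and y itself is only marked as removed."""
    y = p.left
    if y is None or p.rem:
        return
    x = y.left
    if x is None:
        return
    z = N(y.key, left=x.right, right=y.right, dele=y.del_)  # duplicate of y
    x.right = z    # link z below x *before* unlinking y ...
    p.left = x     # ... so some search path to y.key exists at every step
    y.rem = True   # y is left unchanged except for its rem bit
```

Note how y keeps its old children: a search paused at y can still continue into the (now partly shared) subtrees, which is exactly the scenario of Fig. 2.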

Remark 1.

The example of Fig. 1 differs from the original contention-friendly tree [12, 14] in a few points. The most notable difference is that our traversals do not consult the rem flag, and in particular we do not need to distinguish between a left and a right rotation, making the traversals’ logic simpler. Checking the rem flag is in fact unnecessary for obtaining linearizability, but it allows proving linearizability with a fixed linearization point, whereas proving the correctness of the algorithm without this check requires an unfixed linearization point. For our framework, the necessity to use an unfixed linearization point incurs no additional complexity. In fact, the simplicity of our proof method allowed us to spot this “optimization.” In addition, the original algorithm performs backtracking by setting pointers from child to parent when nodes are removed. Instead, we restart the operation; see Sec. 7 for a discussion of backtracking. Lastly, we fix a minor omission in the description of [14], where the del field was not copied from a rotated node.


2.1 Proving Linearizability

Proving linearizability of an algorithm like ours is challenging because searches are performed with no synchronization. This means that, due to interference from concurrent updates, searches may observe an inconsistent state of the tree that has not existed at any point in time. (See Fig. 2.) In our example, while it is easy to see that locate in Fig. 1 constructs a search path to a node in sequential executions, what this implies for concurrent traversals is not immediately apparent. Proving properties of the traversal—in particular, that a node reached in the traversal truly lies on a search path for key k—is instrumental for the linearizability proof [49, 38].

Generally, our linearizability proofs consist of two parts: (1) proving a set of assertions in the code of the concurrent data structure, and (2) a proof of linearizability based on those assertions. The most difficult part, and the main focus of our paper, is proving the assertions using local view arguments, discussed in Sec. 2.2. In the remainder of this section we demonstrate that having assertions about the actual state during the concurrent execution makes it a straightforward exercise to verify that the algorithm in Fig. 1 is a linearizable implementation of a set, assuming these assertions.

Consider the assertions in Fig. 1. An assertion {P} means that P holds now (i.e., in any state in which the next line of code executes). An assertion of the form {past(P)} means that P was true at some point between the invocation of the operation and now. The assertions contain predicates about the state of locked nodes, immutable fields, and predicates of the form reach(x, k), which means that x resides on a valid search path for key k that starts at root; if x = null, this indicates that k is not in the tree (because a valid search path for k never continues past a node whose key is k). Formally, a k-search path between objects (representing nodes in the tree) is a sequence of nodes in which each node is reached from its predecessor by following the left pointer when k is smaller than the predecessor’s key, and the right pointer when k is greater.

One can prove linearizability from these assertions by, for example, using an abstraction function A that maps a concrete memory state σ of the tree (modeled, as is standard, as a function from locations to values; see Sec. 3) to the abstract set represented by this state, and showing that insert, delete, and contains manipulate this abstraction according to their specification. We define A to map σ to the set of keys of the nodes that are on a valid search path for their own key and are not logically deleted in σ:

A(σ) = { x.key | x is on a valid k-search path for k = x.key in σ, and ¬x.del in σ }
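The abstraction function can be sketched in a few lines; this is our own illustrative Python (the Node class and helper names are assumptions, not the paper’s formalism). It keeps the keys of exactly those nodes that a standard BST search for their own key reaches and that are not logically deleted.

```python
# Toy sketch of the abstraction function A for a tree state.
class Node:
    def __init__(self, key, left=None, right=None, deleted=False):
        self.key, self.left, self.right, self.deleted = key, left, right, deleted

def on_search_path(root, k):
    """Return the node reached by a standard BST search for k, or None."""
    x = root
    while x is not None and x.key != k:
        x = x.left if k < x.key else x.right
    return x

def abstract_set(root):
    """A(state): keys of nodes on a search path for their own key, not deleted."""
    keys, stack = set(), [root]
    while stack:
        x = stack.pop()
        if x is None:
            continue
        if not x.deleted and on_search_path(root, x.key) is x:
            keys.add(x.key)
        stack += [x.left, x.right]
    return keys
```

A logically deleted node is still linked in the tree but its key is absent from the abstract set, which is how the proof separates logical deletion from physical removal.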


The assertions almost immediately imply that for every operation invocation op, there exists a state σ during op’s execution for which the abstract state A(σ) agrees with op’s return value, and so op can be linearized at σ. We need only make the following observations. First, contains and a failed insert or delete do not modify the memory, and so can be linearized at the point in time in which the assertions before their return statements hold. Second, in the state in which a successful delete (respectively, insert) performs its write, the assertions before the write imply that k ∈ A(σ) (respectively, k ∉ A(σ)). Therefore, these writes change the abstract set, making it agree with the operation’s return value of true. Finally, since these are the only memory modifications performed by the set operations, it only remains to verify that no write performed by an auxiliary operation in state σ modifies A(σ). Indeed, as an operation modifies a field of a node only when it has that node locked, it is easy to see that for any node x and key k, if reach(x, k) held before the write, then it also holds afterwards, with the exception of the removed node y. However, removeRight removes a logically deleted node, and thus does not change A(σ). Further, rotateRight links z (y’s replacement) to the tree before unlinking y, so the existence of a search path to k is retained (although the actual path changes), leaving the contents of the abstract set unchanged because the del bit in z has the same value as in y.

2.2 Proving the Assertions

To complete the linearizability proof, it remains to prove the validity of the assertions in concurrent executions. The most challenging assertions to prove are those concerning properties of unsynchronized traversals, which we target in this paper. In Sec. 3 we present our framework, which allows one to deduce assertions of the form past(reach(x, k)) at the end of (concurrent) traversals by considering only interference-free executions. We apply our framework to establish the past-reachability assertions for x and y at the end of locate. In fact, our framework allows one to deduce slightly stronger properties, namely, of the form past(reach(x, k) ∧ φ(x)), where φ is a property of a single field of x (see Remark 2). This is used to prove the remaining conjuncts of the assertion at the end of locate in Fig. 1. For completeness, we now show how the proof of the remaining assertions in Fig. 1 is attained, assuming the assertions deduced by the framework. This concludes the linearizability proof.

Reachability related assertions. In Fig. 1, the past-reachability facts used by the operations follow from the assertion at the end of locate.

The writes in delete and insert require that a search path to the written node exists now. This follows from the past reachability (known from the local view argument) and the fact that the node’s rem bit is unset, using an invariant similar to preservation (see Sec. 3.3.2): for every location x and key k, if reach(x, k) holds, then every write retains this unless it sets x.rem before releasing the lock on x (this happens in removeRight, removeLeft, and rotateRight). Thus, when insert and delete lock a node and see that it is not marked as removed, reach(y, k) follows from past(reach(y, k)). Note that the fact that the remaining writes do not invalidate reach(x, k) follows easily from their annotations.

Additional assertions. The invariant that keys are immutable justifies assertions referring to keys of objects that are read earlier, e.g., y.key = k in insert (y.key is read earlier in locate). The rest of the assertions can be attributed to reading a location under the protection of a lock. An example of this is the assertion ¬y.rem in delete.

3 The Framework: Correctness of Traversals Using Local Views

In this section we present the key technical contribution of our framework, which targets proving properties of traversals. We address properties of reachability along search paths (formally defined in Sec. 3.1). Roughly speaking, our approach considers the traversal in concurrent executions as operating without interference on a local view: the thread’s potentially inconsistent picture of memory obtained by performing reads concurrently with writes by other threads. For a property P of reachability along a search path, we introduce conditions under which one can deduce that P held in an actual global state of the concurrent data structure from the fact that P holds in the local view of a single thread, where the latter is established using sequential reasoning (see Sec. 3.2). This alleviates the need to reason about intermediate states of the traversal in the concurrent proof.

This section is organized as follows: We start with some preliminary definitions. Sec. 3.1 defines the abstract, general notion of search paths our framework treats. Sec. 3.2 defines the notion of a local view which is at the basis of local view arguments. Sec. 3.3 formally defines the conditions under which local view arguments hold, and states our main technical result. In Sec. 3.4 we sketch the ideas behind the proof of this result.

Programming model. A global state (state, for short) is a mapping from memory locations (locations) to values. A value is either a natural number, a location, or null. Without loss of generality, we assume that threads share access to a global state; memory locations are used to store the values of fields of objects. A concurrent execution (execution) is a sequence of states produced by an interleaving of atomic actions issued by threads. We assume that each atomic action is either a read or a write operation. (We treat synchronization actions, e.g., lock and unlock, as writes.) A read r consists of a location ℓ and a value v, with the meaning that r reads v from ℓ. Similarly, a write w consists of a location ℓ and a value v, with the meaning that w sets ℓ to v. We denote by w(σ) the state resulting from the execution of w on state σ.

3.1 Reachability Along Search Paths

The properties we consider are given by predicates of the form reach(x, k), denoting reachability of x from root by a k-search path, where root is the entry point to the data structure. A k-search path in state σ is a sequence of locations that is traversed when searching for a certain element, parametrized by k, in the data structure. Reachability of an object x along a k-search path from root is understood as the existence of a k-search path between a designated location of root, e.g., its key field, and x.

Search paths may be defined differently in different data structures (e.g., list, tree, or array). For example, k-search paths in the tree of Fig. 1 consist of sequences of locations ℓ0, …, ℓn where ℓi+1 is the address pointed to by ℓi.left (meaning, the location that is the value stored in ℓi.left) and k < ℓi.key, or ℓi+1 is the address pointed to by ℓi.right and k > ℓi.key. This definition of k-search paths reproduces the definition of reachability along search paths from Sec. 2.1.
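The tree definition above can be written as a small executable check; this is our own illustrative sketch (the dict-of-dicts state representation is an assumption). Note that it inspects only the locations along the sequence itself, except the last, which is exactly the locality property assumed of search paths below.

```python
# Sketch: check that `path` (a list of node ids) is a k-search path in `state`,
# where state maps node id -> {'key': int, 'left': id|None, 'right': id|None}.
def is_k_search_path(state, path, k):
    for a, b in zip(path, path[1:]):
        node = state[a]
        if k < node['key']:
            if node['left'] != b:      # must follow the left pointer
                return False
        elif k > node['key']:
            if node['right'] != b:     # must follow the right pointer
                return False
        else:                          # k == node['key']: the path stops here
            return False
    return True
```

Closure under concatenation and truncation, assumed below, is immediate for this definition, since the condition is checked link by link.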

Our framework is oblivious to the specific definition of search paths, and only assumes the following properties of search paths (which are satisfied, for example, by the definition above):

  • If ℓ0, …, ℓn is a k-search path in σ, and σ′ satisfies σ′(ℓi) = σ(ℓi) for all 0 ≤ i < n, then ℓ0, …, ℓn is a k-search path in σ′ as well, i.e., the search path depends on the values in σ only of the locations along the sequence itself (except the last).

  • If ℓ0, …, ℓm and ℓm, …, ℓn are both k-search paths in σ, then so is ℓ0, …, ℓn, i.e., search paths are closed under concatenation.

  • If ℓ0, …, ℓn is a k-search path in σ, then so is ℓi, …, ℓj for every 0 ≤ i ≤ j ≤ n, i.e., search paths are closed under truncation.

Remark 2.

It is simple to extend our framework to deduce properties of the form reach(x, k) ∧ φ(x), where φ is a property of a single field of x. For example, φ(x) = x.del states that the del field of x is true. As another example, such a predicate can state that the link from x to one of its children is reachable. See Sec. A.3.2 for details.

3.2 Local Views and Their Properties

We now formalize the notion of local view and explain how properties of local views can be established using sequential reasoning.

Local view. Let R = r1, …, rn be a sequence of read actions executed by some thread. As opposed to the global state, the local view of the reading thread refers to the possibly inconsistent picture of the memory state that the thread obtains after issuing R (concurrently with writes). Formally, the sequence of reads induces a state lv(R), which is constructed by assigning to every location that R reads the last value R reads in it. Namely, when R starts, its local view is empty, and if its ith read reads value v from location ℓ, the produced local view is lvi = lvi−1[ℓ ↦ v]. We refer to lv(R) = lvn as the local view produced by R (local view for short). We emphasize that while technically lv(R) is a state, it is not necessarily an actual intermediate global state, and may have never existed in memory during the execution.
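The construction of the local view is mechanical, which the following sketch makes explicit (our own illustrative code; locations are represented as strings): the local view records, for each location, only the last value read there, regardless of which global state each read came from.

```python
# Sketch: build the local view induced by a sequence of reads.
def local_view(reads):
    """reads: iterable of (location, value) pairs in program order.
    Returns the induced local view as a dict location -> value."""
    lv = {}
    for loc, val in reads:
        lv[loc] = val   # a later read of the same location overwrites
    return lv
```

Because different reads may have observed different intermediate global states, the resulting dict can describe a memory state that never existed; that discrepancy is precisely what the conditions of Sec. 3.3 bridge.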

Sequential reasoning for establishing properties of local views. Properties of the local view, which are the starting point for applying our framework, are established using sequential reasoning. Namely, proving that a predicate such as reach(x, k) holds in the local view at the end of the traversal amounts to proving that it holds in any sequential execution of the traversal, i.e., an execution without interference that starts at an arbitrary program state. This is because the concurrent traversal constructing the local view can be understood as a sequential execution that starts with the local view as the program state.

In the running example, straightforward sequential reasoning shows that the reachability assertion at the end of locate indeed holds in sequential executions (i.e., executions without interference), no matter at which program state the execution starts. This ensures that it holds, in particular, in the local view.


3.3 Local View Argument: Conditions & Guarantees


The main theorem underlying our framework bridges the discrepancy between the local view of a thread as it performs a sequence of read actions, and the actual global state during the traversal.

In the sequel, we fix a sequence R of read actions executed by some thread, and denote the sequence of write actions executed concurrently with R by W = w1, …, wm. We denote the global state when R starts its execution by σ0, and the intermediate global states obtained after each prefix of these writes by σi = wi(σi−1).

Using the above terminology, our framework devises conditions for showing, for a reachability property P, that if P holds in the local view lv(R), then there exists an intermediate state σi in which P holds; this means that past(P) holds in the actual global state reached at the end of the traversal. We formalize these conditions below.

3.3.1 Condition I: Temporal Acyclicity

The first requirement of our framework concerns the order on the memory locations representing the data structure, according to which readers perform their traversals. We require that writers maintain this order acyclic across intermediate states of the execution. For example, when the order is based on following pointers in the heap, then, if it is possible to reach location ℓ′ from location ℓ by following a path in which every pointer was present at some point in time (not necessarily the same point), it must not be possible to reach ℓ from ℓ′ in the same manner. This requirement is needed to ensure that the order is robust even from the perspective of a concurrent reading operation, whose local view is obtained from a fusion of fragments of different states.

We begin formalizing this requirement with the notion of search order on memory.

Search order. The acyclicity requirement is based on a mapping from a state σ to a partial order on memory locations that σ induces, denoted ≤σ, which captures the order in which operations read the different memory locations. Formally, we require the mapping σ ↦ ≤σ to be a search order, i.e., to satisfy the following conditions:

  1. It is locally determined: if ℓ′ is an immediate successor of ℓ in ≤σ, then for every σ′ such that σ′(ℓ) = σ(ℓ), it holds that ℓ ≤σ′ ℓ′.

  2. Search paths follow the order: if there is a k-search path between ℓ and ℓ′ in σ, then ℓ ≤σ ℓ′.

  3. Readers follow the order: reads in R always read a location further in the order in the current global state. Namely, if ℓ is the last location read, the next read reads a location ℓ′ from the current state σ such that ℓ ≤σ ℓ′.

Note that the locality of the order is helpful for the ability of readers to follow the order: the next location can be known to come forward in the order solely from the last value the thread reads.

In the example of Fig. 1, the order is defined by following pointers from parent to children, i.e., all the fields of the left and right children are ordered after the fields of their parent, and within an object the key field is ordered before left and right. It is easy to see that this is a search order. Locality follows immediately, and so does the property that search paths follow the order. The fact that the read-in-order property holds for all the methods in Fig. 1 follows from a very simple syntactic analysis: children are always read after their parents, and the field key is always accessed before left or right.
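For instance, a sequential membership test over such a tree reads key before left or right at every node, so each read moves forward in the order. The following is a minimal sketch of ours (field names follow the running example; this is not the paper's code):

```python
class Node:
    """Hypothetical BST node; fields mirror the running example's layout."""
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def contains(root, k):
    """Traverse following the search order: at each node, read `key`
    first, then exactly one of `left`/`right`. Children are read after
    their parent, so every read moves forward in the order."""
    x = root
    while x is not None:
        if k == x.key:
            return True
        # the next location read (a child) comes after x in the order
        x = x.left if k < x.key else x.right
    return False
```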

Remark 3.

Different search orders may be used for different traversals and different keys k when establishing reach_k at the end of the traversal. In Sec. 3.3.1, condition (iii) considers (just) the reads performed by the traversal of interest, and condition (ii) considers the possible search paths it constructs in the local view (just) for the k of interest.

Accumulated order and acyclicity. The accumulated order captures the order as it may be observed by concurrent traversals across different intermediate states. Formally, we define the accumulated order w.r.t. a sequence of writes w_1, …, w_n producing states σ_0, …, σ_n, denoted ≤_{σ_0…σ_n}, as the transitive closure of the union of the orders ≤_{σ_0}, …, ≤_{σ_n}. In our example, the accumulated order consists of all parent-children links created during an execution. We require:

[Acyclicity] We say that ≤ satisfies acyclicity of accumulated order w.r.t. a sequence of writes w_1, …, w_n if the accumulated order ≤_{σ_0…σ_n} is a partial order.

In our running example, acyclicity holds because the insert, delete, and rotation operations modify the pointers from a node only to point to new nodes, or to nodes that have already been reachable from that node. Modifications to other fields have no effect on the order. Note that the rotation is not performed in place: it allocates a new object. Therefore, the accumulated order, which consists of all parent-children links created during an execution, is acyclic, and hence remains a partial order.
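To make the accumulated order concrete, the following sketch (ours, purely illustrative) collects the per-state order edges across a sequence of states, takes their transitive closure, and checks acyclicity:

```python
from itertools import product

def accumulated_order(per_state_edges):
    """Union of the per-state orders, then transitive closure
    (Floyd-Warshall style over the finite set of locations)."""
    edges = set().union(*per_state_edges)
    nodes = {n for e in edges for n in e}
    closure = set(edges)
    # product iterates `mid` outermost, matching the Warshall k-loop
    for mid, a, b in product(nodes, nodes, nodes):
        if (a, mid) in closure and (mid, b) in closure:
            closure.add((a, b))
    return closure

def is_acyclic(closure):
    """A transitive relation is a strict partial order iff it is
    irreflexive: no location is ordered before itself."""
    return all(a != b for (a, b) in closure)
```

A link created in a later state may close a cycle with links of earlier states, which is exactly what the accumulated order detects even though each individual state is acyclic.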

3.3.2 Condition II: Preservation of Search Paths

The second requirement of our framework is that for every write action w which happens concurrently with the sequence of reads and modifies location x, if x was k-reachable (i.e., reach_k(x) was true) at some point in time after the traversal started and before w occurred, then it also holds right before w is performed. We note that this must hold in the presence of all possible interferences, including writes that operate on behalf of other keys k′ ≠ k. Formally, we require:

[Preservation] We say that w_1, …, w_n ensures preservation of k-reachability by search paths if for every write w_i to location x, if reach_k(x) holds in σ_j for some j < i, then reach_k(x) also holds in σ_{i−1}. Note that reach_k(x) holds in σ_{i−1} iff it holds in σ_i, since the search path to x is not affected by w_i itself (by the basic properties of search paths, see Sec. 3.1).

In our running example, preservation holds because a write w either modifies a location that has never been reachable (such as the initialization of a newly allocated node in Fig. 1), in which case preservation holds vacuously, or the writer holds the lock on the modified node when w is performed, without modifying its predecessor earlier under this lock (in Fig. 1, this relies on the fact that a node whose removal flag is set loses its single parent beforehand). In the latter case preservation holds because every previous write retains the search path to the locked node unchanged, unless it sets that node's removal flag to true before releasing the lock. Therefore, the search path is still retained when w is performed. Preservation follows.


We emphasize that the preservation condition only requires that k-reachability is retained to modified locations, and only at the point in time when the write to them is performed; k-reachability may be lost at later points in time. In particular, locations whose reachability has been reduced may be accessed, as long as they are not modified after the reachability loss. For example, consider a rotation as in Fig. 2. The rotation breaks the k-reachability of the rotated node y: reach_k(y) holds before the rotation but not afterwards. Indeed, our framework does not establish that reach_k(y) holds in the current state, but infers that reach_k(y) held in some state σ, which does hold. In this example, the preservation condition requires that the left and right pointers of y are not modified after this rotation is performed (modification of fields that do not affect search paths is allowed; see Sec. 3.1). On the other hand, concurrent traversals may access y. In the example, this happens when (1) the traversal continues beyond y in the search for k, and when (2) the traversal searches for y's key and terminates at y.
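The temporal pattern of the condition can be illustrated with a toy checker of our own, over an abstract event trace rather than the paper's formalism: a write violates preservation only if its target was k-reachable at some earlier point but is not k-reachable right before the write; reachability lost after a location's last write is harmless.

```python
def preservation_holds(events):
    """events: chronological list of (kind, loc, reachable) tuples with
    kind in {'read', 'write'}, where `reachable` says whether loc is
    k-reachable in the global state at that moment (for writes: right
    before the write takes effect). Checks that every write to a
    location that was ever k-reachable finds it k-reachable again."""
    was_reachable = set()
    for kind, loc, reachable in events:
        if kind == "write" and loc in was_reachable and not reachable:
            return False  # k-reachability was lost before a later write
        if reachable:
            was_reachable.add(loc)
    return True
```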

3.3.3 Local View Arguments’ Guarantee

We are now ready to formalize our main theorem, relating reachability in the local view (Sec. 3.2) to reachability in the global state, provided that the conditions from Sec. 3.3.1 and Sec. 3.3.2 are satisfied.

If (i) ≤ is a search order satisfying acyclicity of the accumulated order w.r.t. w_1, …, w_n, and (ii) w_1, …, w_n ensures preservation of k-reachability by search paths, then for every k and location x, if reach_k(x) holds in the local view, then there exists 0 ≤ i ≤ n s.t. reach_k(x) holds in σ_i.

In Appendix B we illustrate how violating these conditions could lead to incorrectness of traversals. Sec. 3.4 discusses the main ideas behind the proof.

3.4 Proof Idea

We now sketch the correctness proof of the theorem of Sec. 3.3.3. (The full details appear in Appendix A.) The theorem transfers reachability from the local view to the global state. Recall that the local view is a fusion of the fractions of states observed by the thread at different times. To relate the two, we study the local view through the lens of a fabricated state: a state resulting from a subsequence of the interfering writes, which includes the observed local view. We exploit the cooperation between the readers and the writers that is guaranteed by the order (which readers and writers maintain) to construct a fabricated state which is closely related to the global state, in the sense that it simulates the global state; simulation depends both on the acyclicity requirement and on the preservation requirement. Deducing the existence of a search path in an intermediate global state from its existence in the local view is a corollary of this connection.

Fabricated state. The fabricated state provides a means of analyzing the local view and its relation to the global (true) state. A fabricated state is a state consistent with the local view (i.e., it agrees with the value of every location present in the local view) that is constructed by a subsequence w̄′ of the writes w_1, …, w_n. One possible choice for w̄′ is the subsequence of writes whose effect was observed by the traversal (i.e., the writes it read from). For relating the local view to the global state, which is constructed from the entire sequence of writes, it is beneficial to include in w̄′ additional writes beyond those directly observed. In what follows, we choose the subsequence so that the fabricated state satisfies a consistency property of forward-agreement with the global state. This means that although not all writes are included in w̄′ (as the thread misses some), the writes that are included have the same picture of the “continuation” of the data structure as it really was in the global state.

Construction of fabricated state based on order. Our construction of the fabricated state includes in w̄′, for every location read, all the writes that occurred backward in time and wrote to locations forward in the order relative to that location. (In particular, it includes all the writes that the traversal reads from directly.) Formally, let loc(w) denote the location modified by write w. Then for every read in the traversal that reads location x from global state σ_i, we include in w̄′ all the writes w_j with j ≤ i and x ≤ loc(w_j), ordered as in w_1, …, w_n. We write σ^f_0, σ^f_1, … for the intermediate fabricated states. This choice of w̄′ ensures forward-agreement between the fabricated state and the global state: for every write w in w̄′, the state on which it is applied in the fabricated sequence and the state on which it is applied in the global sequence agree on all locations y such that loc(w) ≤ y.
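Under a simplified model of our own (writes and reads tagged with timestamps, and a plain comparison standing in for the search order; none of this is the paper's formal machinery), the choice of the subsequence can be sketched as:

```python
def fabricated_subsequence(writes, reads, leq):
    """writes: list of (time, loc) pairs, in execution order.
    reads:  list of (time, loc) pairs performed by the traversal.
    leq:    the search order on locations (a binary predicate).
    Includes every write that happened no later than some read and
    modified a location forward (in the order) of the location read."""
    chosen = []
    for wt, wloc in writes:  # keep the order of the original sequence
        if any(wt <= rt and leq(rloc, wloc) for rt, rloc in reads):
            chosen.append((wt, wloc))
    return chosen
```

In this toy model, a write to a location that lies backward of everything the traversal read is dropped, while writes forward of some read location are kept even if the traversal never observed their values directly.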

In what follows, we fix the fabricated state to be the state resulting at the end of this particular choice of w̄′. It satisfies forward-agreement by construction, and it extends the local view, relying on the acyclicity requirement.

Simulation. As we show next, the construction of w̄′ ensures that the effect of every write in w̄′ on the fabricated state is guaranteed to concur with its effect on the real state with respect to changing reach_k(x) from false to true. We refer to this property as simulation.

[Simulation] For a predicate P, we say that the subsequence of writes w̄′ = w′_1, …, w′_m P-simulates the sequence w_1, …, w_n if for every i, if P holds in σ^f_i but not in σ^f_{i−1}, then P holds in the global state σ_j produced by the same write, where w′_i = w_j.

Simulation implies that the write in w̄′ that changed reach_k(x) to true on the local view would also change it on the corresponding global state (unless it was already true there). This provides us with the desired global state where reach_k(x) holds. Using also the fact that reach_k(x) is upward-absolute [45] (namely, preserved under extensions of the state), we obtain: Let w̄′ be the subsequence of w_1, …, w_n defined above. If reach_k(x) holds in the local view and w̄′ reach_k(x)-simulates w_1, …, w_n, then there exists some 0 ≤ i ≤ n s.t. reach_k(x) holds in σ_i.

Finally, we show that the fabricated state satisfies the simulation property. Owing to the specific construction of w̄′, the proof needs to relate the effect of writes on states which have a rather strong similarity: they agree on the contents of locations which come forward of the modified location. Preservation complements this by guaranteeing the existence of a path to the modified location: If w_1, …, w_n satisfies preservation of reach_k for all k, then w̄′ reach_k(x)-simulates w_1, …, w_n for all x. To prove the lemma, we show that preservation, together with forward-agreement, implies the simulation property. To show simulation, consider a write w that creates a k-search path to x in the fabricated state. We construct such a path in the corresponding global state. The idea is to divide the path into two parts: the prefix until loc(w), and the rest of the path. Relying on forward-agreement, the latter is exactly the same in the corresponding global state, and preservation lets us prove that there is also an appropriate prefix: necessarily there has been a k-search path to loc(w) in the fabricated state before w, so by induction, exploiting the fact that simulation up to w yields such a path in an earlier global state, there has been a k-search path to loc(w) in some intermediate global state that occurred earlier than the time of w. Since w writes to loc(w), the preservation property ensures that there is a k-search path to loc(w) in the global state also at the time of the write w, and the claim follows.

4 Putting It All Together: Proving Linearizability Using Local Views

Recall that our overarching objective in developing the local view argument (Sec. 3) is to prove the correctness of assertions used in linearizability proofs (e.g., in Sec. 2.1). We now summarize the steps in the proof of the assertions. Overall, it is composed of the following steps:

  1. Establishing properties of traversals on the local view using sequential reasoning,

  2. Establishing the acyclicity and preservation conditions by simple concurrent reasoning, and

  3. Proving the assertions, relying on local view arguments, augmented with some concurrent reasoning.

For the running example, step 1 is presented in Sec. 3.2, and step 2 consists of Sec. 3.3.1 and 3.3.2 (see Appendix C for a full formal treatment). Step 3 concludes the proof as discussed in Sec. 2.2.

Remark 4.

While the local view argument, relying in particular on step 2, was developed to simplify the proofs of the assertions in step 3, this goes also in the other direction. Namely, the concurrent reasoning required for proving the conditions of the framework (e.g., preservation) can be greatly simplified by relying on the correctness of the assertions (as they constrain possible interfering writes). Indeed, the proofs may mutually rely on each other. This is justified by a proof by induction: we prove that the current write satisfies the condition in the assertion, assuming that all previous writes did. This is also allowed in proofs of the conditions in Sec. 3.3, because they refer to the effect of interfering writes, which are known to conform to their respective assertions by the induction hypothesis. Hence, carrying out these proofs together avoids circular reasoning and ensures the validity of the proof.

5 Additional Case Studies

5.1 Lazy and Optimistic Lists

We successfully applied our framework to prove the linearizability of sorted-list-based concurrent set implementations with unsynchronized reads. Our framework is capable of verifying various versions of the algorithm, in which insert and delete validate that the nodes they locked are reachable using a boolean field, as done in the lazy list algorithm [24], or by rescanning the list, as done in the optimistic list algorithm [28, Chap. 9.8]. Our framework is also applicable for verifying implementations of the lazy list algorithm in which the logical deletion and the physical removal are done by the same operation or by different ones. We give a taste of these proofs here.

Fig. 3 shows an annotated pseudo-code of the lazy list algorithm. Every operation starts with a call to locate(k), which performs a standard search in a sorted list—without acquiring any locks—to locate the node with the target key k. This method returns the last link it traverses, (x, y). Fig. 3 includes two variants of contains: In one variant, it returns true only if it finds a node with key k that is not logically deleted, while in the second variant it returns true even if that node is logically deleted (the commented return true in Fig. 3). Interestingly, the same annotations allow verifying both variants, and the proof differs only in the abstraction function mapping states of the list to abstract sets. Modifications of a node in the list are synchronized with the node's lock. insert calls locate, and then links a new node to the list if k was not found. delete logically deletes the located node (after validating that it remained linked to the list after its lock was acquired), and then physically removes it.

As in Sec. 2, the assertions contain predicates of the form reach_k(x), which means that x resides on a valid search path for key k that starts at root; the formal definition of a search path in the lazy list appears below. Note that reach_k(null) indicates that k is not in the list.
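For the sorted list, the search-path predicate has a direct sequential reading; the following is a sketch of ours (node layout as in Fig. 3), not the paper's formal definition:

```python
class N:
    """Node layout as in Fig. 3: key, next pointer, mark bit."""
    def __init__(self, key, nxt=None, mark=False):
        self.key = key
        self.next = nxt
        self.mark = mark

def reach(root, k, target):
    """reach_k(target): target lies on the valid search path for k,
    i.e., the path from root that follows `next` while keys are < k
    (including the first node with key >= k, if any).
    `target is None` encodes reach_k(null): the search falls off the
    end of the list without finding a node with key >= k."""
    x = root
    while True:
        if x is target:
            return True
        if x is None or x.key >= k:
            return False
        x = x.next
```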

We prove the linearizability of the algorithm using an abstraction function. One abstraction function we may use, denoted A, maps σ to the set of keys of the nodes that are on a valid search path for their key and are not logically deleted in σ:

A(σ) = { y.key | reach_{y.key}(y) holds in σ ∧ ¬y.mark }

type N
  int key
  N next
  bool mark

N root ← new N()

N×N locate(int k)
  x, y ← root
  while (y ≠ null ∧ y.key < k)
    x ← y
    y ← x.next
  {∃σ. reach^σ_k(x) ∧ x.next = y}
  {x.key < k ∧ (y ≠ null ⟹ y.key ≥ k)}
  return (x, y)

bool insert(int k)
  (x, y) ← locate(k)
  if (y ≠ null ∧ y.key = k)
    {∃σ. reach^σ_k(y) ∧ y.key = k}
    return false
  lock(x)
  if (x.mark ∨ x.next ≠ y)
    restart
  z ← new N(k)
  z.next ← y
  x.next ← z
  return true

bool contains(int k)
  (_, y) ← locate(k)
  if (y = null
      {∃σ. reach^σ_k(null)}
    ∨ y.key ≠ k)
    return false
  if (¬y.mark)
    return true
  return false // return true

bool delete(int k)
  (x, y) ← locate(k)
  if (y = null
      {∃σ. reach^σ_k(null)}
    ∨ y.key ≠ k)
    return false
  lock(x)
  lock(y)
  if (x.mark ∨ y.mark ∨ x.next ≠ y)
    restart
  y.mark ← true
  x.next ← y.next
  return true
Figure 3: Lazy List [24]. The code is annotated with assertions written inside curly braces. For brevity, unlock operations are omitted; a procedure releases all the locks it acquired when it terminates or restarts.

Another possibility is to define the abstract set to be the keys of all the reachable nodes:

A′(σ) = { y.key | reach_{y.key}(y) holds in σ }

We note that A can be used to verify the code of contains as written, while A′ allows changing the algorithm to use the commented return true in Fig. 3. In both cases, the proof of linearizability is carried out using the same assertions currently annotating the code. In the rest of this section, we discuss the verification of the code in Fig. 3 as written, and thus use A as the abstraction function. The assertions almost immediately imply that for every operation invocation op, there exists a state σ during op's execution for which the abstract state A(σ) agrees with op's return value, and so op can be linearized at σ; we need only make the following observations. First, contains and a failed insert or delete do not modify the memory, and so can be linearized at the point in time in which the assertions before their return statements hold. Second, in the state in which a successful insert (respectively, delete) performs its write, the assertions in Fig. 3 imply that k is not in A(σ) (respectively, k is in A(σ)). Therefore, these writes change the abstract set, making it agree with the operation's return value of true. Finally, it only remains to verify that the physical removal performed by delete in state σ does not modify A(σ). Indeed, as an operation modifies a field of a node y only when it has y locked, it is easy to see that for any node x and key k, if reach_k(x) held before the write, then it also holds afterwards, with the exception of the removed node y. However, delete removes a logically deleted node, and thus does not change A(σ).
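The two abstraction functions can be computed directly over a concrete list state. The following sketch is ours; it relies on the fact that in a sorted list every reachable node lies on the valid search path for its own key, and contrasts the two functions on a list with one marked node:

```python
class N:
    """Node layout as in Fig. 3: key, next pointer, mark bit."""
    def __init__(self, key, nxt=None, mark=False):
        self.key = key
        self.next = nxt
        self.mark = mark

def abstract_sets(root):
    """Walk the sorted list from the sentinel root.
    A       = keys of reachable, unmarked nodes (first abstraction)
    A_prime = keys of all reachable nodes (second abstraction)"""
    A, A_prime = set(), set()
    x = root.next
    while x is not None:
        A_prime.add(x.key)
        if not x.mark:
            A.add(x.key)
        x = x.next
    return A, A_prime
```

Under A, a logically deleted (marked) node is already absent from the abstract set, which is why the subsequent physical unlink leaves the abstraction unchanged.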

The proof of the assertions in Fig. 3 utilizes a local view argument for the assertion at the end of locate, for the predicate ∃σ. reach^σ_k(x) ∧ x.next = y, using the extension with a single field discussed in Sec. 2. The conditions of the local view argument are easy to prove: The acyclicity requirement is evident, as writes modify the pointers from a node only to point to new nodes, or to nodes that have already been reachable from that node. Preservation holds because a write either (i) marks a node, which does not affect the search paths; (ii) modifies a location that has never been reachable (such as the initialization of the new node in insert), in which case preservation holds vacuously; or (iii) physically removes a marked node in delete, which removes all the search paths that go through it. However, as the removed node is marked, its fields are not going to be modified later on, and thus this removal cannot be the cause of violating preservation. Furth