A Simple Way to Verify Linearizability of Concurrent Stacks

10/12/2021
by Tangliu Wen, et al.
NetEase, Inc

Linearizability is a commonly accepted correctness criterion for concurrent data structures. However, verifying linearizability of highly concurrent data structures is still a challenging task. In this paper, we present a simple and complete proof technique for verifying linearizability of concurrent stacks. Our proof technique reduces linearizability of concurrent stacks to establishing a set of conditions. These conditions are based on the happened-before order of operations, intuitively express the LIFO semantics and can be proved by simple arguments. Designers of concurrent data structures can easily and quickly learn to use the proof technique. We have successfully applied the method to several challenging concurrent stacks: the TS stack, the HSY stack, and the FA stack, etc.



1 Introduction

Linearizability[1-3] is a commonly accepted correctness criterion for concurrent data structures. Intuitively, linearizability requires that (1) each concurrent execution of a concurrent data structure is equivalent to a legal sequential execution, and (2) the sequential execution preserves the order of non-overlapping operations (called the happened-before order: for two operations o_1 and o_2, o_1 precedes o_2, denoted by o_1 ≺ o_2, if o_1 returns before o_2 begins to execute). To achieve high performance, concurrent data structures often employ sophisticated fine-grained synchronization techniques[4-8]. This makes it all the more difficult to verify the linearizability of concurrent data structures.

Henzinger et al.[9] propose a simple method for the linearizability verification of concurrent queues, which reduces the problem of verifying linearizability of concurrent queues to checking four simple properties. The key property is based on the happened-before order of operations: if, for two non-overlapping enqueue operations enq_1 and enq_2 in a concurrent execution, enq_1 precedes enq_2, then the value inserted by enq_2 cannot be removed earlier than the one inserted by enq_1; i.e., deq_2 cannot precede deq_1, where deq_i denotes the dequeue operation that removes the value inserted by enq_i. Their method does not need to find the entire linearization order to show that an execution is linearizable. To the best of our knowledge, they do not extend the method to concurrent stacks. An important reason for this is that concurrent stacks have no similar property: for two non-overlapping push operations, either of the two pushed values may be popped first.

Intuitively, a pop operation must return the value pushed by the latest operation among the push operations whose effects it can observe. If a push operation precedes a pop operation in a concurrent execution, then the pop operation can observe the effect of the push operation. However, if a pop operation is interleaved with a push operation, the pop operation may or may not observe the effect of the push operation. For example, consider the execution of a stack shown in Figure 1. The pop operation can return either x or y, depending on whether it observes the effect of the operation push(y). Whichever value the pop operation returns, the execution is linearizable.

Figure 1. Which value does the pop operation return?

If a push operation is interleaved with a pop operation and the pop operation returns the value pushed by the push operation, then we call them an elimination pair. In this paper, we prove that elimination pairs do not violate linearizability of concurrent stacks: for a history h of a concurrent stack, let h′ be the subsequence of h obtained by deleting the elimination pairs of h; if h′ is linearizable, then h is also linearizable. This enables verifiers to ignore the elimination pairs and focus on the “common” operations when they verify the linearizability of concurrent stacks.

Any “common” pop operation pops the value pushed by a push operation which precedes the pop operation. Thus these pop operations do not suffer from the nondeterminism described above. In this paper, we present a stack theorem for verifying concurrent stacks; the theorem gives a set of conditions, based on the happened-before order, which are sufficient and necessary to establish linearizability of the “common” operations. Because it avoids the above nondeterminism, the theorem intuitively characterizes the “Last In First Out” (“LIFO”) semantics. Informally, the stack theorem says: a concurrent execution of a stack is linearizable iff there exists a linearization of the pop operations such that the pop operations always remove the latest values in the stack, in the linear order of the pop operations.

Although our proof technique requires a linear order of the pop operations, we were surprised to discover that (1) this linearization of the pop operations is possibly different from the final linearization of the pop operations extracted from the linearization of the execution, and (2) for all concurrent stacks we have met, the atomic write actions of the pop methods which logically or physically remove the values of the stacks can be chosen as linearization points to construct the initial linearization. Thus it is not a difficult task to construct the initial linearization of the pop operations. The conditions stated in the stack theorem can be verified by simple arguments about the happened-before order of operations; verifiers do not need to know other proof techniques and can easily and quickly learn to use ours. We have successfully applied the method to several challenging concurrent stacks: the TS stack, the HSY stack, the FA stack, etc.

2 Linearizability

A history of a concurrent data structure is a sequence of invocation and response events[1,9]. We refer to a method call as an operation. An invocation event represents the start of an operation: a method invoked with an argument value, performed by some thread, and identified by an operation identifier. A response event represents the completion of an operation with a return value. An invocation event matches a response event if they are associated with the same operation. A history is sequential if every invocation event, except possibly the last, is immediately followed by its matching response event. A sequential history of a concurrent data structure is legal if it satisfies the sequential specification of the concurrent data structure.

A history is complete if every invocation event has a matching response event. An invocation event is pending in a history if there is no matching response event to it. For an incomplete history h, a completion of h is a complete history gained by adding some matching response events to the end of h and removing some pending invocation events within h. Let Complete(h) be the set of all completions of the history h. For a history h and a thread t, let h|t denote the maximal subsequence of h consisting of the events performed by the thread t. Let ≺_h denote the happened-before order of operations in the history h: for any two operations o_1 and o_2 of h, o_1 ≺_h o_2 if the response event of o_1 precedes the invocation event of o_2. We say the operation o_1 precedes the operation o_2 in the history h if o_1 ≺_h o_2. Two operations o_1 and o_2 of h are interleaved, denoted by o_1 ∥_h o_2, if neither o_1 ≺_h o_2 nor o_2 ≺_h o_1. We omit the subscripts when the histories are clear from the context.

A history h is linearizable with respect to a sequential specification[1,11] if there exist a completion hc ∈ Complete(h) and a legal sequential history S such that (1) for every thread t, hc|t = S|t, and (2) for any two operations o_1, o_2, if o_1 ≺_hc o_2, then o_1 ≺_S o_2. S is called a linearization of h. A concurrent data structure is linearizable with respect to its sequential specification if every history of the concurrent data structure is linearizable with respect to the sequential specification.

Generally, the standard sequential “LIFO” stack is used to characterize the sequential specification of a concurrent stack. For a stack that is linearizable with respect to the standard specification, we sometimes omit the standard specification for simplicity.
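
To make the definitions above concrete, the following minimal brute-force checker (our sketch; the encoding and all names are ours, not the paper’s) enumerates the linear extensions of the happened-before order of a complete history and tests each against the standard sequential “LIFO” specification. The example in main encodes the scenario of Figure 1.

import java.util.*;

final class Op {
    final String kind;   // "push" or "pop"
    final Integer value; // pushed value, popped value, or null for pop-empty
    final long inv, res; // invocation and response times
    Op(String kind, Integer value, long inv, long res) {
        this.kind = kind; this.value = value; this.inv = inv; this.res = res;
    }
    boolean before(Op o) { return this.res < o.inv; } // happened-before
}

public class LifoCheck {
    // Is the sequential history 'seq' legal for the standard LIFO stack?
    static boolean legal(List<Op> seq) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (Op o : seq) {
            if (o.kind.equals("push")) stack.push(o.value);
            else if (o.value == null) { if (!stack.isEmpty()) return false; } // pop-empty
            else if (stack.isEmpty() || !stack.pop().equals(o.value)) return false;
        }
        return true;
    }

    // h is linearizable iff some linear extension of happened-before is legal.
    static boolean linearizable(List<Op> h) { return extend(h, new ArrayList<>()); }

    static boolean extend(List<Op> rest, List<Op> seq) {
        if (rest.isEmpty()) return legal(seq);
        for (Op o : rest) {
            boolean minimal = true;                 // no remaining op precedes o
            for (Op p : rest) if (p != o && p.before(o)) { minimal = false; break; }
            if (!minimal) continue;
            List<Op> rest2 = new ArrayList<>(rest);
            rest2.remove(o);
            seq.add(o);
            if (extend(rest2, seq)) return true;    // backtracking search
            seq.remove(seq.size() - 1);
        }
        return false;
    }

    public static void main(String[] args) {
        // Figure 1: push(x) completes, then push(y) overlaps a pop returning y.
        Op pushX = new Op("push", 1, 0, 1);
        Op pushY = new Op("push", 2, 2, 6);
        Op pop   = new Op("pop",  2, 3, 5);
        System.out.println(linearizable(Arrays.asList(pushX, pushY, pop))); // true
    }
}

Such a checker is exponential and is only meant to make the definition executable on small histories.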

In this paper, we only consider complete histories. As Henzinger et al.[9] have shown, a purely-blocking data structure is linearizable iff every complete history of the data structure is linearizable. The notion of purely blocking is a very weak liveness property, and most concurrent data structures satisfy it. In particular, all concurrent data structures verified in this paper are purely blocking.

3 Partially Ordered Sets

A strict partial order ≺ on a set U is an irreflexive, antisymmetric, and transitive relation. Obviously, the happened-before order of operations is a strict partial order on the set of operations. An element u of a set U with a strict partial order ≺ is called maximal if there is no v ∈ U with u ≺ v, and is called minimal if there is no v ∈ U with v ≺ u. We say v is bigger than u with respect to a strict partial order ≺ if u ≺ v. Let ≺_1 and ≺_2 be two partial orders on a set U. The partial order ≺_2 is called an extension of the partial order ≺_1 if, whenever u ≺_1 v, then u ≺_2 v.

Let ≺ be a strict partial order on a set U, and let u_1 u_2 ⋯ u_n be a sequence over U which preserves the partial order (i.e., if u_i ≺ u_j, then i < j). Then any element e ∈ U can be inserted into the sequence such that the result still preserves the partial order ≺.

Obviously, the following algorithm does the job; it is also used in the proofs of Lemma 4.1 and Theorem 5.1.

if u_n ≺ e then
      e is inserted to the right of u_n;
else if u_{n-1} ≺ e then
      e is inserted between u_{n-1} and u_n;
      ⋯
else if u_1 ≺ e then
      e is inserted between u_1 and u_2;
else
      e is inserted to the left of u_1;
Algorithm 1. A linear extension
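
For reference, the following is a runnable rendering of Algorithm 1 (our sketch; encoding the strict partial order as a predicate is an assumption, not the paper’s notation). It scans the sequence from the right for the last element that precedes e and inserts e immediately after it; if no element precedes e, it inserts e at the front.

import java.util.*;
import java.util.function.BiPredicate;

public class LinearExtension {
    // Insert e into a sequence that already preserves the strict partial
    // order 'prec', so that the result still preserves 'prec'.
    static <T> void insert(List<T> seq, T e, BiPredicate<T, T> prec) {
        for (int i = seq.size() - 1; i >= 0; i--) {
            if (prec.test(seq.get(i), e)) {  // rightmost u_i with u_i ≺ e
                seq.add(i + 1, e);           // insert e just after u_i
                return;
            }
        }
        seq.add(0, e);                       // no element precedes e
    }

    public static void main(String[] args) {
        // Proper divisibility as a strict partial order on positive integers.
        BiPredicate<Integer, Integer> divides = (a, b) -> !a.equals(b) && b % a == 0;
        List<Integer> seq = new ArrayList<>(Arrays.asList(2, 3, 12));
        insert(seq, 6, divides);             // 2 ≺ 6, 3 ≺ 6, and 6 ≺ 12
        System.out.println(seq);             // [2, 3, 6, 12]
    }
}

Everything to the right of the insertion point does not precede e, and e cannot precede anything to its left without contradicting the order of the original sequence; this is exactly the argument used in the lemma.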

4 Elimination Mechanism

Elimination is an essential optimization technique for concurrent stacks[12-14]; for example, both the HSY stack and the TS stack apply it. The elimination mechanism is based on the fact that if a push operation immediately followed by a pop operation is performed on a stack, the stack’s state remains unchanged. In a concurrent execution, if a push operation is interleaved with a pop operation and the pop operation returns the value inserted by the push operation, then they are called an elimination pair. To reduce the frequency of shared-data accesses and increase the degree of parallelism of the stack, the elimination mechanism allows an elimination pair to exchange their values without accessing the shared stack structure.
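
The following single-slot sketch (ours, with hypothetical names; it is deliberately much simpler than the HSY elimination array) illustrates the idea: a push that fails on the shared stack publishes an offer, a concurrent pop that fails on the shared stack takes it, and the pair exchange their values without touching the shared stack structure.

import java.util.concurrent.atomic.AtomicReference;

public class EliminationSlot {
    // A fresh Offer object per attempt makes the identity-based CAS safe.
    private static final class Offer { final int val; Offer(int v) { val = v; } }
    private final AtomicReference<Offer> slot = new AtomicReference<>(null);
    private static final int SPIN = 1000;

    // Called by a push whose CAS on the shared stack failed.
    // Returns true iff a concurrent pop took the value (the pair eliminated).
    boolean tryEliminatePush(int v) {
        Offer o = new Offer(v);
        if (!slot.compareAndSet(null, o)) return false;     // slot busy
        for (int i = 0; i < SPIN; i++) Thread.onSpinWait(); // wait for a pop
        // Withdraw the offer; if this CAS fails, a pop has already taken it.
        return !slot.compareAndSet(o, null);
    }

    // Called by a pop whose CAS on the shared stack failed.
    // Returns the eliminated value, or null if no offer was available.
    Integer tryEliminatePop() {
        Offer o = slot.get();
        if (o != null && slot.compareAndSet(o, null)) return o.val;
        return null;
    }
}

Exactly one of the withdrawing push and the taking pop wins the CAS on the offer, so a value is handed over at most once, and only between two overlapping operations.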

We now prove that the elimination mechanism does not violate linearizability of concurrent stacks. In other words, we can ignore the elimination pairs when proving linearizability of concurrent stacks.

Lemma 4.1. For a history h of a concurrent stack, let h′ be a subsequence of h obtained by deleting an elimination pair of h. If h′ is linearizable with respect to the standard “LIFO” specification, then h is also linearizable with respect to the specification.

Proof.

Let push(x) denote the push operation with input value x, and pop(x) the pop operation with return value x. Assume that push(x) and pop(x) are an elimination pair of h, and that h′ is obtained from h by deleting the elimination pair. Let S′ be a linearization of h′. Consider two cases: in the first case, push(x) begins to execute earlier than pop(x), as shown in Figure 2; in the second case, pop(x) begins to execute earlier than push(x). In the following, we prove that the lemma holds in the first case; the proof for the second case is similar.
In the first case, the following properties hold:

  • Property 1. .

  • Property 2. .

  • Property 3. .

Figure 2. push(x) begins to execute earlier than pop(x)

Using Algorithm 1, we insert pop(x) into S′. If some operations precede pop(x) in S′, then by Algorithm 1 the operation immediately to the left of pop(x) precedes pop(x); call this left operation o.

Since o ≺ pop(x) and push(x) begins to execute before pop(x), the operation o must either be interleaved with push(x) or precede push(x); the former case is shown in Figure 3. In either case, the following properties hold:

  • Property 4. .

  • Property 5. .

Figure 3. The operation o is interleaved with push(x)

After inserting pop(x), we insert push(x) between o and pop(x), as shown below.

By Property 1 and Property 2, no operation to the right of pop(x) precedes pop(x). By Property 3, pop(x) does not precede push(x). By Property 4 and Property 5, push(x) does not precede any operation to the left of it.

Thus, after inserting push(x), the new sequence preserves the happened-before order. Obviously, the new sequence also satisfies the “LIFO” semantics, since push(x) is immediately followed by its matching pop(x). Thus the new sequence is a linearization of h.

If no operation precedes pop(x) in S′, then by Algorithm 1 we insert pop(x) at the front of S′, and then insert push(x) to the left of pop(x), as shown below.

By Property 1 and Property 2, the final sequence preserves the happened-before order. Obviously, the sequence is a linearization of h. ∎

By the above lemma, we obtain the following theorem. Theorem 4.1. For a history h of a concurrent stack, let h′ be the subsequence of h obtained by deleting all elimination pairs of h. If h′ is linearizable with respect to the standard “LIFO” specification, then h is also linearizable with respect to the specification.

5 Conditions for Stack Linearizability

For a history h of a stack, let Push(h) and Pop(h) denote the sets of all push and pop operations in h, respectively. We map each pop operation of the history to the push operation whose value is removed by the pop operation, or to empty if the pop operation returns empty. A safe mapping requires that (1) a pop operation always returns a value which is inserted by a push operation, or returns empty, and (2) the value inserted by a push operation is removed by a pop operation at most once. We formalize the notion as follows:

For a complete history h, a mapping Match from Pop(h) to Push(h) ∪ {empty} is safe if:

  1. if Match(pop) = push ∈ Push(h), then the value returned by the operation pop is the value inserted by the operation push;

  2. if Match(pop) = empty, then the pop operation returns empty;

  3. if Match(pop_1) = Match(pop_2) ∈ Push(h), then pop_1 = pop_2.

The conditions stated in the following theorem characterize the “LIFO” semantics of concurrent stacks. Theorem 5.1. Let h be a subsequence of a concurrent stack’s history obtained by deleting the elimination pairs of the history. h is linearizable with respect to the standard “LIFO” specification iff there exist a linearization pop_1 ⊏ pop_2 ⊏ ⋯ ⊏ pop_n of all pop operations in h (where ⊏ denotes the linear order of the pop operations) and a safe mapping Match such that:

  1. If Match(pop_i) ≠ empty, let PS_i = {push | push ≺ pop_i ∧ ∀j < i. Match(pop_j) ≠ push}; then Match(pop_i) ∈ PS_i and ¬∃push ∈ PS_i. Match(pop_i) ≺ push;

  2. If Match(pop_i) = empty, then (a) ∀push. push ≺ pop_i ⟹ ∃j < i. Match(pop_j) = push; (b) let IS_i = {push | push ∥ pop_i ∧ ∀j < i. Match(pop_j) ≠ push}; then ∀push ∈ IS_i. ∀j < i. ¬(push ≺ Match(pop_j)).

Informally, the first condition requires that each non-empty pop operation always pops the value pushed by the latest push operation among the push operations which precede the pop operation and are not mapped to its previous pop operations (with respect to the linear order of the pop operations); in other words, the pop operations always remove the latest values in the stack, in the linear order of the pop operations. The second condition requires that if a pop operation returns empty, then for any push operation which precedes the pop operation, the value it pushed has been removed by a previous pop operation, and any push operation which is interleaved with the pop operation and is not mapped to a previous pop operation does not precede the push operations mapped to the previous pop operations.
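
The two conditions can be checked directly from the happened-before order. The following sketch (ours; the encoding is hypothetical: pops lists the pop operations in the linear order ⊏, and match maps a pop operation to its push operation, or to null for a pop returning empty) is a direct transcription of the theorem.

import java.util.*;

public class StackTheoremCheck {
    static final class Op {
        final long inv, res;                 // invocation and response times
        Op(long inv, long res) { this.inv = inv; this.res = res; }
        boolean before(Op o) { return res < o.inv; }              // happened-before
        boolean interleaved(Op o) { return !before(o) && !o.before(this); }
    }

    static boolean check(List<Op> pushes, List<Op> pops, Map<Op, Op> match) {
        for (int i = 0; i < pops.size(); i++) {
            Op pop = pops.get(i);
            Set<Op> matchedEarlier = new HashSet<>();             // pushes matched to pop_1..pop_{i-1}
            for (int j = 0; j < i; j++) matchedEarlier.add(match.get(pops.get(j)));
            Op m = match.get(pop);
            if (m != null) {
                // Condition 1: m is in PS_i, and no push in PS_i is bigger than m.
                if (!m.before(pop) || matchedEarlier.contains(m)) return false;
                for (Op push : pushes)
                    if (push.before(pop) && !matchedEarlier.contains(push)
                            && m.before(push)) return false;
            } else {
                for (Op push : pushes) {
                    // Condition 2(a): every preceding push is matched earlier.
                    if (push.before(pop) && !matchedEarlier.contains(push)) return false;
                    // Condition 2(b): an unmatched interleaved push must not
                    // precede any push matched to an earlier pop.
                    if (push.interleaved(pop) && !matchedEarlier.contains(push))
                        for (Op e : matchedEarlier)
                            if (e != null && push.before(e)) return false;
                }
            }
        }
        return true;
    }
}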

Proof.

(⇐) We first prove that the theorem holds when h does not contain pop operations returning empty, and then extend the result to the case where h does.

1. h is linearizable when it does not contain pop operations returning empty.

The proof is done in two stages. First, we construct a linearization of the push operations. Second, we insert pop_1, …, pop_n, one after another, into the linearization sequence.

Step 1: Construct a linearization of push operations.

Assume the number of push operations in h is m. We construct a linearization of the push operations by the following rules. First, among the maximal push operations with respect to the partial order ≺, choose the one whose matching pop operation is the smallest with respect to the linear order ⊏ among the matching pop operations of the maximal push operations; if none of the maximal elements has a matching pop operation, choose any one of them. Let u_m denote the chosen element. Second, delete u_m from Push(h); choose an element from the rest of Push(h) by the same rule and insert it before u_m (i.e., obtaining u_{m-1} u_m). Third, continue to construct the sequence in the same way until all push operations of h have been chosen.

By construction, the final sequence is u_1 u_2 ⋯ u_m. The sequence is a linearization of all push operations and has the following property: for any push operation u_i, u_i is a maximal element (w.r.t. ≺) among u_1, …, u_i, and if u_i has a matching pop operation, its matching pop operation is the smallest (w.r.t. ⊏) among the matching pop operations of those maximal elements.
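
Step 1 can be read as the following greedy procedure (our sketch, with a hypothetical encoding: prec[a][b] encodes a ≺ b for pushes, and popIndex[a] is the position of a’s matching pop in the pop order, or a maximal value if there is none). The sequence is built from right to left, always taking a maximal unchosen push, tie-broken by the smallest matching pop.

import java.util.*;

public class PushLinearization {
    static List<Integer> build(boolean[][] prec, int[] popIndex) {
        int m = popIndex.length;
        Deque<Integer> seq = new ArrayDeque<>();       // built right to left
        Set<Integer> rest = new HashSet<>();
        for (int i = 0; i < m; i++) rest.add(i);
        while (!rest.isEmpty()) {
            Integer best = null;
            for (int a : rest) {
                boolean maximal = true;                // no unchosen b with a ≺ b
                for (int b : rest) if (b != a && prec[a][b]) { maximal = false; break; }
                if (maximal && (best == null || popIndex[a] < popIndex[best])) best = a;
            }
            seq.addFirst(best);                        // insert before previous picks
            rest.remove(best);
        }
        return new ArrayList<>(seq);
    }
}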

Step 2: We insert pop_1, …, pop_n, one after another, into the sequence u_1 u_2 ⋯ u_m.

Step 2.1: Using Algorithm 1, we insert pop_1 into the linearization sequence of the push operations. By the property of Algorithm 1, (1) the push operation to the left of pop_1 precedes pop_1, and (2) after the insertion, the new sequence preserves the order ≺. Let u denote the element to the left of pop_1. By the property established in Step 1 and the first condition of the theorem, u = Match(pop_1).

Step 2.2: Similar to pop_1, for each i with 2 ≤ i ≤ n, we insert pop_i into the new sequence.

Step 2.2.1: Using Algorithm 1 and ignoring the previously inserted pop operations, we find the rightmost push operation which precedes pop_i; let u denote this push operation. If u is not followed by pop operations, then we insert pop_i immediately to the right of u. If u is followed by pop operations, then we insert pop_i at the end of those pop operations.

After the above insertion, if pop_i does not lie between a previously inserted pop operation and its matching push operation, then the following properties hold:

(1) The new sequence satisfies the “LIFO” semantics (this follows from the property established in Step 1 and the first condition of the theorem), and (2) the new sequence preserves the happened-before order. The reasons for (2) are as follows. By Algorithm 1, pop_i and the push operations in the sequence do not violate the happened-before order. By the order of inserting the pop operations, pop_i and the pop operations ahead of it do not violate the happened-before order. Since the new sequence satisfies the “LIFO” semantics, for the pop operations behind pop_i (if any), their matching push operations are also behind pop_i, and hence those push operations do not precede pop_i. Now assume some pop operation behind pop_i precedes pop_i; then its matching push operation also precedes pop_i (because the matching push operation precedes its pop operation), contradicting the above fact. Thus pop_i and the pop operations behind it do not violate the happened-before order.

Step 2.2.2: If pop_i lies between some previously inserted pop operations and their matching push operations (assume pop_j is the last such pop operation), then we move pop_i to the right of pop_j, as follows.

Let S and S′ denote the sequences before and after the move, respectively. Before the move, the following property holds.

P1: In S, pop_i does not precede the push operations between pop_i and pop_j. The reasons are as follows. Before inserting pop_i, the sequence satisfies the “LIFO” semantics, so the matching pop operations of the push operations between pop_i and pop_j are among the previously inserted pop operations. If there existed such a push operation p with pop_i ≺ p, then pop_i would precede p’s matching pop operation, which contradicts the linearization order of the pop operations.

P2: After the move, the new sequence preserves the happened-before order and satisfies the “LIFO” semantics. The reasons are as follows. Before inserting pop_i, the sequence satisfies the “LIFO” semantics; by this and the first condition of the theorem, the new sequence also satisfies the “LIFO” semantics. By P1 and Algorithm 1, in S′, pop_i and the push operations do not violate the happened-before order. Similar to the proof in the case above, in S′, pop_i and the pop operations do not violate the happened-before order.

2. h is also linearizable when it contains pop operations returning empty.

We construct the linearization of h by the following process.

Generally, if Match(pop_k) = empty, let S1 denote the linearization of pop_1, …, pop_{k-1} and their matching push operations (constructed by the method above), and let S2 denote the linearization of the other operations of h, constructed in the same way. Let S = S1 · pop_k · S2.

Obviously, any two pop operations do not violate the happened-before order in S (i.e., for any two pop operations q_1 and q_2, if q_1 ≺ q_2, then q_1 occurs before q_2 in S).

In the following, we show that in S, (1) any two push operations do not violate the happened-before order, and (2) any pop operation and any push operation do not violate the happened-before order.

Let push_1 and pop_1 be a push operation and a pop operation in S1, respectively, and let push_2 and pop_2 be a push operation and a pop operation in S2, respectively.

Since pop_1 ⊏ pop_k ⊏ pop_2, and the linear order ⊏ preserves the happened-before order, we can get ¬(pop_2 ≺ pop_1); since push_1 precedes its matching pop operation in S1, we also get ¬(pop_2 ≺ push_1).

By the first part of the second condition, we can get ¬(push_2 ≺ pop_k). Consider the following cases. (1) If pop_k ≺ push_2, then obviously ¬(push_2 ≺ push_1) and ¬(push_2 ≺ pop_1). (2) If push_2 is interleaved with pop_k, then by the second part of the second condition, we can get ¬(push_2 ≺ push_1) and ¬(push_2 ≺ pop_1).

(⇒) Since h is linearizable, there exists a safe mapping Match from Pop(h) to Push(h) ∪ {empty}. Assume S is a linearization of h, and let the maximal subsequence of S consisting of pop operations define the linear order of the pop operations; obviously, it is a linearization of the pop operations of h. It is not difficult to show that this order and Match satisfy the two conditions of Theorem 5.1. ∎

Note that, by Step 2.2.2 in the above proof, the initial linear order of the pop operations in Theorem 5.1 is possibly different from the final linear order of the pop operations extracted from the linearization of the execution constructed in the proof. In Section 7, we illustrate this point using an example. Next, we show that the initial linear order of the pop operations can be easily constructed.

6 Constructing the Linearizations of the Pop Operations

Our method requires an initial linearization of the pop operations which satisfies the two conditions of Theorem 5.1. A challenge in applying our method is how to construct such a linearization. Fortunately, for all concurrent stacks we have verified, the initial linearization can be constructed in terms of the atomic write actions of the pop methods which logically or physically remove the values of the stacks; i.e., the removing actions are viewed as “linearization points”, and in the initial linear order the pop operations appear in the order of their removing actions in the concurrent execution.

A physical removing action in a pop method physically removes a value from the stack, and the pop method finally returns the value. A logical removing action in a pop method only fixes a value in the stack, and the pop method finally returns the value; after the logical removing action, other pop operations can no longer logically or physically remove the value. For instance, the CAS statement in the FA stack[4] that logically removes an element from its cell is a logical removing action, and the atomic remove method of the TS stack is a physical removing action. Obviously, the initial linearization can be easily constructed in terms of such fixed “linearization points”.
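
In testing, such an initial order can be recorded directly. The following sketch (hypothetical instrumentation, not part of any stack algorithm) stamps a global counter immediately after the removing action succeeds; sorting the pop operations by their stamps yields the initial linear order required by Theorem 5.1.

import java.util.concurrent.atomic.AtomicLong;

public class RemoveOrderRecorder {
    private static final AtomicLong ticket = new AtomicLong();

    // Call right after the successful CAS/swap that removes a value;
    // the returned stamp is the pop operation's position in the initial order.
    public static long stampRemove() {
        return ticket.getAndIncrement();
    }
}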

An important question, then, is whether for any stack the (logical or physical) removing action of its pop method can be chosen as a linearization point to construct the initial linearization. Obviously, when the linearization constructed in terms of the removing actions of the pop operations cannot establish the first condition of Theorem 5.1, the removing actions cannot be chosen as such linearization points.

For simplicity, we consider executions containing only two pop operations whose removing actions cannot be chosen as such linearization points. Two basic example executions are shown in Figure 4 and Figure 5.

Figure 4. One pop operation begins to execute before the removing action of the other

In these figures, the black circles in the pop operations stand for the logical or physical removing actions. In Figure 4, one pop operation begins to execute after the removing action of the other. The execution has only one linearization, yet the linearization of the two pop operations constructed in terms of the two removing actions orders them differently; under that linearization of the two pop operations, the first condition of Theorem 5.1 cannot be established. Thus the two removing actions cannot be chosen as linearization points. If the pop operation of Thread 4 were absent, then the pop operation of Thread 3 would have to remove the other value to make the execution linearizable. Thus the actions before the removing action affect the execution of the other pop operation and prevent it from observing the effect of the corresponding push operation (or the value it pushed). Such pop algorithms are uncommon: generally, except for the logical or physical removing actions, a pop’s actions do not prevent the values in the stack from being removed by other pop operations. For all concurrent stacks we have met, the pop methods have such fixed linearization points. For most of the concurrent stacks we have verified, the actions before the removing action either read the shared state or access (read or write) the local state, and do not affect the executions of other operations.

In Figure 5, one pop operation begins to execute after the removing action of the other, and that removing action removes its value before the later pop operation begins to execute. Thus, in the absence of the third operation, the later pop operation would also remove the same value; in this case, the execution of the three operations is not linearizable. Such pop algorithms are nonexistent.

Figure 5. One pop operation begins to execute after the removing action of the other

7 Verifying the Time-Stamped Stack

We illustrate our proof technique on the Time-Stamped Stack (the TS stack, for short)[15]. The linearizability verification of this stack is challenging because neither its push method nor its pop method has a fixed linearization point (see [15]). Figure 6 shows the pseudocode of the TS stack.

Node {
  Val val;
  Timestamp timestamp;
  Node next; }

List {
  Node top;   int id;
  Node insert(Val v);
  (bool, Val) remove(Node top, Node n);
  (Node, Node) getYoungest(); }

TSStack {
  List[maxThreads] pools; }

void Push(Val v) {
  List CurList := pools[threadID];
  Node node := CurList.insert(v);
  Timestamp ts := newTimestamp();
  node.timestamp := ts; }

Val Pop() {
  Timestamp startTime;
  bool success;   Val v;
  startTime := newTimestamp();
  while (true) {
    (success, v) := tryRem(startTime);
    if (success)
      break;
  }
  return v; }

(bool, Val) tryRem(Timestamp startT) {
  List CandList;   Node CandTop;
  Node[maxThreads] empty;
  Node CandNode = NULL;
  Timestamp MaxTS = -1;
  for each (List CurList in pools) {
    Node CurNode, CurTop;
    Timestamp CurTS;
    (CurNode, CurTop) = CurList.getYoungest();
    // Emptiness check
    if (CurNode == NULL) {
      empty[CurList.id] = CurTop;
      continue; }
    CurTS = CurNode.timestamp;
    // Elimination
    if (startT < CurTS)
      return CurList.remove(CurTop, CurNode);

    if (MaxTS < CurTS) {
      CandNode = CurNode;
      MaxTS = CurTS;
      CandList = CurList;
      CandTop = CurTop; }
  }
  // recording empty lists
  if (CandNode == NULL) {
    for each (List CurList in pools) {
      if (CurList.top != empty[CurList.id])
        return (false, NULL); }
    return (true, EMPTY); }

  return CandList.remove(CandTop, CandNode);
}

Figure 6. The TS stack

We use the operator < for timestamp comparison. For two timestamps ts_1 and ts_2, ts_2 is bigger than ts_1 if ts_1 < ts_2; ts_1 and ts_2 are incomparable if neither ts_1 < ts_2 nor ts_2 < ts_1. For two operations o_1 and o_2, o_1 <_ts o_2 if the timestamp generated by o_2 is bigger than the one generated by o_1. Let ⊤ denote the maximal value of timestamps. There are a number of implementations of the timestamping algorithm. All these implementations guarantee that (1) of two sequential calls to the algorithm, the latter returns a bigger timestamp than the former, and (2) two concurrent, overlapping calls to the algorithm may generate two incomparable timestamps.
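
The following is a sketch of one interval-based scheme with guarantee (1) (an assumed variant for illustration; we do not claim it is any of the algorithms evaluated in [15]). A timestamp is an interval [start, end] over a global counter, with ts_1 < ts_2 iff ts_1.end ≤ ts_2.start; sequential calls always obtain comparable, increasing timestamps, while overlapping calls typically obtain overlapping, hence incomparable, intervals.

import java.util.concurrent.atomic.AtomicLong;

public class IntervalTimestamps {
    private static final AtomicLong counter = new AtomicLong();

    public static final class Timestamp {
        final long start, end;
        Timestamp(long start, long end) { this.start = start; this.end = end; }
        // ts1 < ts2 iff ts1's interval ends no later than ts2's begins.
        public boolean lessThan(Timestamp o) { return this.end <= o.start; }
    }

    public static Timestamp newTimestamp() {
        long start = counter.getAndIncrement(); // claim the interval's left end
        long end = counter.get();               // right end: counter value now
        return new Timestamp(start, end);
    }
}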

This stack maintains an array of singly-linked lists (pools), one for each thread. Each node in a list contains a data value (val), a timestamp (timestamp), and a next pointer (next). A push operation of a thread inserts elements only into the thread’s associated list. The top pointer of each list is annotated with an ABA counter to avoid the ABA problem.

The operations on the list are as follows:

  • insert(v) - inserts a node with the value v (initially carrying the maximal timestamp ⊤) at the head of the list and returns a reference to the new node.

  • getYoungest() - returns a reference to the node with the youngest timestamp (i.e., the head node) together with the current top pointer, or (NULL, top) if the list is empty.

  • remove(top, n) - tries to remove the given node n from the list. Returns (true, n.val) if it succeeds, or (false, NULL) otherwise.

The Push method first inserts an element into its associated list, then generates a timestamp and sets the timestamp field of the new node to it.

The Pop method first generates a timestamp startTime, then attempts to remove an element by repeatedly calling the method tryRem.

The tryRem method traverses every list, searching for the node with the youngest timestamp. If such a candidate node has been found, the method tries to remove it.

Elimination: during the traversal, if the method finds a node whose timestamp is bigger than the start timestamp of the pop operation, it tries to remove that node. In this case, the node must have been pushed during the current pop operation; the push operation is therefore interleaved with the current pop operation, and the two can be eliminated.

Emptiness checking: during the traversal, if the method finds that a list is empty, the list’s top pointer is recorded in the array empty. After the traversal, if no candidate node for removal has been found, the method checks whether the top pointers have changed. If not, every list must have been empty at the moment its top pointer was examined.

For a complete history of the TS stack, let Match map each pop operation to the push operation whose node it removes, and to empty if it returns EMPTY. A push operation always inserts a node with a value into a list; a pop operation either removes a node from a list and returns the node’s value, or returns empty; and a node is removed at most once. Thus Match is a safe mapping.

For any complete history of the TS stack, let h be the subsequence obtained by deleting the elimination pairs of the history. Then h is linearizable with respect to the standard “LIFO” specification.

Proof.

For a pop operation returning a non-empty value, we choose the successful node-removing action (in remove) as its linearization point; for a pop operation returning empty, we choose the check of the top pointers in the emptiness check as its linearization point. We use these linearization points to construct a linearization of the pop operations, and show that the TS stack satisfies the conditions of Theorem 5.1.
1. If Match(pop_i) ≠ empty, let PS_i = {push | push ≺ pop_i ∧ ∀j < i. Match(pop_j) ≠ push}; then no push operation in PS_i is bigger than Match(pop_i) with respect to ≺.
Proof. Assume there exists a push operation push′ ∈ PS_i such that Match(pop_i) ≺ push′; then the timestamp generated by push′ is bigger than the one generated by Match(pop_i). Because push′ ≺ pop_i, push′ has inserted a node into its associated list before pop_i traverses the lists. During the traversal, if pop_i does not access the node inserted by push′, then it accesses the head node of that list, whose timestamp is bigger still; in this case, pop_i will not remove the node inserted by Match(pop_i) (because tryRem removes the node with the youngest timestamp it has seen), contradicting the fact that it does. If pop_i accesses the node inserted by push′, then, since that node’s timestamp is bigger, pop_i again will not remove the node inserted by Match(pop_i), which also contradicts the fact.

2. If Match(pop_i) = empty, then (a) every push operation that precedes pop_i is matched to a pop operation in front of pop_i; (b) no push operation in IS_i precedes a push operation matched to a pop operation in front of pop_i.
Proof. If pop_i returns empty, all lists are empty at the time points when their top pointers are checked. Thus the nodes inserted by the push operations which precede pop_i have been removed by the pop operations in front of pop_i, so the first clause of the condition is valid. When the emptiness check of pop_i is executed, the push operations in the set IS_i have not completed their node-inserting actions, whereas the pop operations in front of pop_i and their matching push operations have completed their node-removing and node-inserting actions, respectively. Thus the second clause is valid. ∎

Figure 7. Example execution illustrating how to construct a linearization.

An example execution of the TS stack (adapted from [X]) is depicted in Figure 7. We construct a linearization of the execution history using the method from the proof of Theorem 5.1. Let push(v, ts) / pop(v, ts) denote the push/pop operation that pushes/pops the value v and generates the timestamp ts. The black circles in the pop operations in Figure 7 stand for the node-removing actions.

The above execution is feasible. For example, consider the execution of the pop operation of Thread 5: it first accesses one list and chooses the node found there as a candidate; it then accesses another list before the corresponding push operation has inserted its node into that list; finally, it accesses the remaining list after another pop operation has removed that list’s node. Thus the pop operation of Thread 5 removes the candidate node it chose first.

First, we construct the following linearization of the push operations, using the method shown in Step 1 of the proof of Theorem 5.1.

Second, we construct the following linearization of the pop operations, in terms of their node-removing actions.

Finally, we insert the pop operations, one after another, into the linearization sequence of the push operations. By Algorithm 1 and the first condition, we first insert the first pop operation at the end of the push sequence and get the following sequence.

By Algorithm 1 and the first condition, we insert the second pop operation to the right of its matching push operation and get the following sequence.

Because the matching push operation of the third pop operation is followed by pop operations, we insert the third pop operation to the right of those pop operations and get the final sequence.

Obviously, the final sequence does not violate the happened-before order, satisfies the “LIFO” semantics, and is a linearization of the above execution. Note that the final linearization of the pop operations is different from the original linearization of the pop operations.

8 Verifying Other Stacks

The Treiber stack[16] is a lock-free concurrent stack based on a singly-linked list with a top pointer. Its push and pop methods try to update the top pointer using CAS instructions to complete their operations. We choose the CAS instruction that successfully removes the head node as the linearization point of the pop method and construct the initial linearization of the pop operations in terms of these points. Obviously, the head node was inserted by the latest push operation among all nodes of the current list. Thus each pop operation always removes the node pushed by the latest push operation, in the initial order of the pop operations.
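
For concreteness, here is a standard Treiber stack sketch in Java (relying on garbage collection to sidestep the ABA problem that the original report handles with counters); the successful compareAndSet in pop is the removing action used as the linearization point above.

import java.util.concurrent.atomic.AtomicReference;

public class TreiberStack<T> {
    private static final class Node<T> {
        final T val; final Node<T> next;
        Node(T val, Node<T> next) { this.val = val; this.next = next; }
    }
    private final AtomicReference<Node<T>> top = new AtomicReference<>(null);

    public void push(T v) {
        Node<T> head;
        do {
            head = top.get();
        } while (!top.compareAndSet(head, new Node<>(v, head)));
    }

    public T pop() {                                    // returns null on empty
        Node<T> head;
        do {
            head = top.get();
            if (head == null) return null;
        } while (!top.compareAndSet(head, head.next));  // removing action
        return head.val;
    }
}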

The HSY stack[14] is also based on a singly-linked list. Similar to the Treiber stack, the HSY stack first tries to update the top pointer using CAS instructions to complete a push or pop operation. If the CAS instructions fail to update the top pointer, the HSY stack uses the elimination mechanism to complete the operations. By Theorem 4.1, we only need to consider the executions of the common push and pop operations (not the elimination pairs). The linearizability verification of the common operations is similar to that of the Treiber stack.

The FA-Stack[4] is a fast array-based concurrent stack. A fast path of a push operation attempts to store an element in a cell of the global array. If the fast path fails to store the element, the push operation switches to a slow path, in which it publishes a push request to enlist the help of pop operations; the pop operations then try to help it store the element. Similarly, a fast path of a pop operation attempts to find an element to pop. If the fast path fails to pop an element or to return empty, the pop operation switches to a slow path, in which it publishes a pop request to enlist help, and other pop operations help it pop an element or return empty. For a pop operation popping an element via the fast path, we choose the successful CAS action that logically removes an element from its cell as the linearization point of the pop operation. For a pop operation popping an element via the slow path, we choose the successful CAS action that reserves a cell for the pop request as the linearization point. Once a cell is reserved for a pop operation, other pop operations cannot remove the element from the cell; thus the reserving action is also a logical removing action. We show that the FA-Stack is linearizable by using the following two invariants.

Invariant 1. Assume that c[i] is the cell currently visited by a pop operation (or a pop helper); the pop operation continues to visit the predecessor cell of c[i] only after ensuring that c[i] is unusable, popped by another pop operation, or reserved for another pop request.

Invariant 2. Assume that two push operations push_1 and push_2 store their elements at the two cells c[i] and c[j], respectively. If push_1 ≺ push_2, then i < j.

Assume that a pop operation logically removes an element from the cell c[i]. By Invariant 1, when the pop operation logically removes the element from c[i], the elements in all cells c[j] with j > i have already been logically removed. By Invariant 2, the element in c[i] was therefore inserted by the latest push operation in the current stack.

Afek et al. propose a simple array-based stack[17]. The stack is represented as an infinite array and a marker, range, pointing to the end of the used part of the array. To push an element, a push operation first obtains a cell index by incrementing range and then stores the element at that cell. A pop operation first reads range and then searches downward from range for the first cell containing a non-NULL element. If it finds such an element, it removes and returns the element; otherwise, it returns empty. For a pop operation that does not return empty, we choose the successful swap action that returns a non-NULL value as the linearization point of the pop operation. We show that this concurrent stack is linearizable by using invariants similar to the two above.
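
A sketch of this stack, following the description above (ours; bounded by a fixed capacity for runnability, whereas the original is formulated over an infinite array). The getAndIncrement on range claims a cell for a push, and the getAndSet in pop is the swap action chosen as the linearization point.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class ArrayStack<T> {
    private static final int CAPACITY = 1 << 20;
    private final AtomicReferenceArray<T> cells = new AtomicReferenceArray<>(CAPACITY);
    private final AtomicInteger range = new AtomicInteger(0);

    public void push(T v) {
        int i = range.getAndIncrement();    // claim a cell index
        cells.set(i, v);                    // store the element there
    }

    public T pop() {                        // returns null on empty
        int r = range.get();
        for (int i = r - 1; i >= 0; i--) {  // search down from the range marker
            T v = cells.getAndSet(i, null); // swap: removing action
            if (v != null) return v;
        }
        return null;                        // no element found: empty
    }
}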

9 Related Work and Conclusion

There has been a great deal of work on linearizability verification[18-25]. Mainly, there are four kinds of verification techniques: refinement-based, simulation-based, reduction-based, and program-logic-based techniques. An interested reader may refer to the survey article[25]. However, as Khyzha et al. argue[18], it remains the case that all but the simplest algorithms are difficult to verify.

To the best of our knowledge, there exist only two earlier published full proofs of the TS stack: (1) the original proof by Dodds et al.[15], and (2) a forward-simulation proof by Bouajjani et al.[21].

In the original proof of the TS stack, Dodds et al. propose a set of conditions sufficient to ensure linearizability of concurrent stacks. In addition to the happened-before order, their conditions require an auxiliary insert-remove order, which relates pushes to pops and vice versa, and two helper orders over the push operations and the pop operations, respectively. However, it is difficult to show that there exists an insert-remove order satisfying the definition of order-correctness. To apply the method directly to the TS stack, they have to construct an intermediate structure called the TS buffer. This makes the linearizability proof of the TS stack complex and unintuitive. Although our method requires a linear order of the pop operations, the linear order can be easily constructed, as discussed above. The conditions in our theorem intuitively express the “LIFO” semantics of a concurrent stack and lead to simple and natural correctness proofs.

Bouajjani et al. propose a forward-simulation-based technique for verifying linearizability. They have successfully applied the method to verify the TS stack and the HW queue. In fact, for the TS stack there exists no forward simulation to the standard sequential stack, so they have to construct a deterministic atomic reference implementation (as an intermediate specification) for the TS stack; the linearizability proof is then reduced to showing that the TS stack is forward-simulated by the intermediate specification. In comparison, our proof technique is simpler and more intuitive.

Conclusion. We present a simple proof technique for verifying linearizability of concurrent stacks. Our technique reduces the problem of proving linearizability of concurrent stacks to establishing a set of conditions. The conditions can be verified just by reasoning about the happened-before order of operations; verifiers do not need to know other proof techniques and can easily and quickly learn to use ours. We have successfully applied the method to several challenging concurrent stacks: the TS stack, the HSY stack, and the FA stack. Our proof technique is suitable for automation, as it requires just checking the key invariant: when a common pop operation removes a value from the stack, the value is the latest value in the current stack. In the future, we would like to build a fully automated tool for the proof technique.

10 Funding

This work was supported by the National Natural Science Foundation of China [61020106009, 61272075].

References

  • [1] Herlihy, M., and Wing, J. (1990) Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12, 463-492.
  • [2] Herlihy, M., and Shavit, N. (2008) The Art of Multiprocessor Programming. Morgan Kaufmann, Massachusetts.
  • [3] Filipović, I., O’Hearn, P. W., Rinetzky, N., and Yang, H. (2010) Abstraction for concurrent objects. Theoretical Computer Science, 411(51-52), 4379-4398.
  • [4] Peng, Y., and Hao, Z. (2017) FA-Stack: A fast array-based stack with wait-free progress guarantee. IEEE Transactions on Parallel and Distributed Systems, 29(4), 843-857.
  • [5] Haas, A. (2015) Fast concurrent data structures through timestamping. PhD thesis, University of Salzburg. http://www.cs.uni-salzburg.at/~ahaas/papers/thesis.pdf.
  • [6] Yang, C., Mellor-Crummey, J. (2016, February). A wait-free queue as fast as fetch-and-add. In Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (pp. 1-13).
  • [7] Heller, S., Herlihy, M., Luchangco, V., Moir, M., Scherer III, W. N., and Shavit, N., [2005] A lazy concurrent list-based set algorithm. 9th International Conference on Principles of Distributed Systems, LNCS 3974, Pisa, Italy, 12-14 December, pp. 3-16. Springer-Verlag, Berlin.
  • [8] Hoffman, M., Shalev, O., and Shavit, N. (2007) The baskets queue. 11th International Conference on Principles of Distributed Systems, LNCS 4878, Guadeloupe, French West Indies, 17-20 December, pp. 401-414. Springer-Verlag, Berlin.
  • [9] Henzinger, T. A., Sezgin, A., and Vafeiadis, V. (2013) Aspect-oriented linearizability proofs. 24th International Conference On Concurrency Theory (CONCUR 2013), LNCS 8052, Buenos Aires, Argentina, 27-30 August, pp. 242-256. Springer-Verlag, Berlin.
  • [10] Wen, T., Song, L., You, Z. (2019). Proving Linearizability Using Reduction. The Computer Journal, 62(9), 1342-1364.
  • [11] Liang, H., and Feng, X. (2013) Modular verification of linearizability with non-fixed linearization points. Conference on Programming Language Design and Implementation (PLDI 2013), Seattle, WA, USA, 16-19 June, pp. 459-470. ACM, New York.
  • [12]

    N. Shavit and D. Touitou. Elimination trees and the construction of pools and stacks. Theory of Computing Systems, 30:645–670, 1997.

  • [13] Moir, M., Nussbaum, D., Shalev, O., Shavit, N. (2005, July). Using elimination to implement scalable and lock-free fifo queues. In Proceedings of the seventeenth annual ACM symposium on Parallelism in algorithms and architectures (pp. 253-262).
  • [14] Hendler, D., Shavit, N., and Yerushalmi, L. (2004) A scalable lock-free stack algorithm. 16th annual ACM symposium on Parallelism in algorithms and architectures (SPAA 2004), Barcelona, Spain, 27-30 June, pp. 206-215. ACM, New York.
  • [15] Dodds, M., Haas, A., and Kirsch, C. M. (2015) A scalable, correct time-stamped stack. 42nd ACM Symposium on Principles of Programming Languages (POPL 2015). ACM, New York.
  • [16] Treiber, R. K. (1986) System programming: coping with parallelism, Technical Report RJ 5118, IBM Almaden Research Center. http://opac.inria.fr/record=b1015261.
  • [17] Afek, Y., Gafni, E., and Morrison, A. (2006) Common2 extended to stacks and unbounded concurrency. ACM Symposium on Principles of Distributed Computing (PODC 2006), pp. 218-227. ACM, New York.
  • [18] Khyzha, A., Dodds, M., Gotsman, A., and Parkinson, M. (2017) Proving linearizability using partial orders. 26th European Symposium on Programming, LNCS 10201, Uppsala, Sweden, 22-29 April, pp. 639-667. Springer-Verlag, Berlin.
  • [19] Singh, V., Neamtiu, I., and Gupta, R. (2016) Proving concurrent data structures linearizable. 27th International Symposium on Software Reliability Engineering (ISSRE 2016), Ottawa, ON, Canada, 23-27 October, pp. 230-240. IEEE Computer Society, Los Alamitos, California.
  • [20] Schellhorn, G., Wehrheim, H., and Derrick, J. (2012). How to prove algorithms linearisable. International Conference on Computer Aided Verification (CAV 2012), LNCS 7358, Berkeley, CA, USA, 7-13 July, pp. 243-259. Springer-Verlag, Berlin.
  • [21] Bouajjani, A., Emmi, M., Enea, C., and Mutluergil, S. O. (2017) Proving linearizability using forward simulations. International Conference on Computer Aided Verification (CAV 2017), LNCS 10427, Heidelberg, Germany, 24-28 July, pp. 542-563. Springer-Verlag, Cham.
  • [22] Abdulla, P. A., Jonsson, B., and Trinh, C. Q. (2018) Fragment abstraction for concurrent shape analysis. European Symposium on Programming (ESOP 2018), LNCS 10801, Thessaloniki, Greece, 14-20 April, pp. 442-471. Springer-Verlag, Cham.
  • [23] Vindum, S. F., Frumin, D., Birkedal, L. (2021). Mechanized Verification of a Fine-Grained Concurrent Queue from Facebook’s Folly Library.
  • [24] Feldman, Y. M., Khyzha, A., Enea, C., Morrison, A., Nanevski, A., Rinetzky, N., Shoham, S. (2020). Proving highly-concurrent traversals correct. Proceedings of the ACM on Programming Languages, 4(OOPSLA), 1-29.
  • [25] Dongol, B., and Derrick, J. (2015) Verifying linearisability: A comparative survey. ACM Computing Surveys, 48(2), 1-43.