On the Cost of Concurrency in Hybrid Transactional Memory

07/05/2019 · by Trevor Brown, et al.

State-of-the-art software transactional memory (STM) implementations achieve good performance by carefully avoiding the overhead of incremental validation (i.e., re-reading previously read data items to avoid inconsistency) while still providing progressiveness (allowing transactional aborts only due to data conflicts). Hardware transactional memory (HTM) implementations promise even better performance, but offer no progress guarantees. Thus, they must be combined with STMs, leading to hybrid TMs (HyTMs) in which hardware transactions must be instrumented (i.e., access metadata) to detect contention with software transactions. We show that, unlike in progressive STMs, software transactions in progressive HyTMs cannot avoid incremental validation. In fact, this result holds even if hardware transactions can read metadata non-speculatively. We then present opaque HyTM algorithms providing progressiveness for a subset of transactions that are optimal in terms of hardware instrumentation. We explore the concurrency vs. hardware instrumentation vs. software validation trade-offs for these algorithms. Our experiments with Intel and IBM POWER8 HTMs seem to suggest that (i) the cost of concurrency also exists in practice, (ii) it is important to implement HyTMs that provide progressiveness for a maximal set of transactions without incurring high hardware instrumentation overhead or using global contending bottlenecks and (iii) there is no easy way to derive more efficient HyTMs by taking advantage of non-speculative accesses within hardware.


1 Introduction

The Transactional Memory (TM) abstraction is a synchronization mechanism that allows the programmer to optimistically execute sequences of shared-memory operations as atomic transactions. Several software TM designs [9, 26, 14, 12] have been introduced subsequent to the original hardware-based TM proposal [15]. The original dynamic STM implementation DSTM [14] ensures that a transaction aborts only if there is a read-write data conflict with a concurrent transaction (à la progressiveness [13]). However, to satisfy opacity [13], read operations in DSTM must incrementally validate the responses of all previous read operations to avoid inconsistent executions. This results in quadratic (in the size of the transaction's read set) step complexity for transactions. Subsequent STM implementations like NOrec [9] and TL2 [11] minimize the performance impact of incremental validation. NOrec uses a global sequence lock that is read at the start of a transaction and performs value-based validation during read operations only if the value of the global lock has been changed (by an updating transaction) since reading it. TL2, on the other hand, eliminates incremental validation completely. Like NOrec, it uses a global sequence lock, but each data item also has an associated sequence lock value that is updated alongside the data item. When a data item is read, if its associated sequence lock value differs from the value that was read from the global sequence lock at the start of the transaction, then the transaction aborts.
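To make the O(1)-validation claim concrete, here is a minimal C++ sketch of a TL2-style t-read; this is our illustration rather than TL2's actual code, and all names (gv, vlock, val, Txn) are assumptions.

    #include <atomic>
    #include <cstdint>

    extern std::atomic<uint64_t> gv;       // global version clock
    extern std::atomic<uint64_t> vlock[];  // per-item versioned locks (low bit = locked)
    extern std::atomic<uint64_t> val[];    // per-item values

    struct Txn { uint64_t rv; bool aborted; };  // rv: gv sampled at transaction start

    // A TL2-style read validates in O(1): it never re-reads earlier reads.
    uint64_t tl2_read(Txn& t, size_t j) {
        uint64_t v1 = vlock[j].load();
        uint64_t v  = val[j].load();
        uint64_t v2 = vlock[j].load();
        // Abort if the item is locked, changed underfoot, or is newer than rv.
        if ((v2 & 1) || v1 != v2 || (v2 >> 1) > t.rv) { t.aborted = true; return 0; }
        return v;
    }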

In fact, STMs like TL2 and NOrec ensure progress in the absence of data conflicts with invisible reads (read operations that do not modify shared memory), and TL2 additionally provides O(1) step-complexity read operations. Nonetheless, TM designs that are implemented entirely in software still incur significant performance overhead. Thus, current CPUs include instructions to mark a block of memory accesses as transactional [24, 1, 18], allowing them to be executed atomically in hardware. Hardware transactions promise better performance than STMs, but they offer no progress guarantees, since they may experience spurious aborts. This motivates the need for hybrid TMs, in which fast hardware transactions are complemented with slower software transactions that do not have spurious aborts.
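For concreteness, the following C++ sketch shows the standard retry-then-fallback structure around Intel's RTM intrinsics (_xbegin/_xend from <immintrin.h>); the retry budget and the fallback hook are illustrative assumptions, not part of any particular TM.

    #include <immintrin.h>

    const int MAX_RETRIES = 20;  // assumed policy; cf. Section 5

    template <class CriticalSection, class SoftwareFallback>
    void run_transaction(CriticalSection cs, SoftwareFallback fallback) {
        for (int i = 0; i < MAX_RETRIES; ++i) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                cs();      // speculative region, executed atomically in hardware
                _xend();   // commit
                return;
            }
            // status encodes the abort cause (conflict, capacity, explicit, ...);
            // hardware gives no guarantee that retrying ever succeeds.
        }
        fallback();        // software path with a progress guarantee
    }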

To allow hardware transactions in a HyTM to detect conflicts with software transactions, hardware transactions must be instrumented to perform additional metadata accesses, which introduces overhead. Hardware transactions typically provide automatic conflict detection at cacheline granularity, thus ensuring that a transaction will be aborted if it experiences memory contention on a cacheline. This is at least the case with Intel's Transactional Synchronization Extensions [27]. The IBM POWER8 architecture additionally allows hardware transactions to access metadata non-speculatively, thus bypassing automatic conflict detection. While this can reduce contention aborts in hardware, it also makes HyTM implementations potentially harder to prove correct.

In [3], it was shown that hardware transactions in opaque progressive HyTMs must perform at least one metadata access per transactional read and write. In this paper, we show that in opaque progressive HyTMs with invisible reads, software transactions cannot avoid incremental validation. Specifically, we prove that each read operation of a software transaction in a progressive HyTM must necessarily incur a validation cost that is linear in the size of the transaction's read set. This is in contrast to TL2, which is progressive and has constant-complexity read operations. Thus, in addition to the linear instrumentation cost in hardware transactions, there is a quadratic step-complexity cost in software transactions.

We then present opaque HyTM algorithms providing progressiveness for a subset of transactions that are optimal in terms of hardware instrumentation. Algorithm 1 is progressive for all transactions, but it incurs high instrumentation overhead in practice. Algorithm 2 avoids all instrumentation in fast-path read operations, but is progressive only for slow-path reading transactions. We also sketch how some hardware instrumentation can be performed non-speculatively without violating opacity.

Extensive experiments were performed to characterize the cost of concurrency in practice. We studied the instrumentation-optimal algorithms, as well as TL2, Transactional Lock Elision (TLE) [23] and Hybrid NOrec [25] on both Intel and IBM POWER architectures. Each of the algorithms we studied contributes to an improved understanding of the concurrency vs. hardware instrumentation vs. software validation trade-offs for HyTMs. Comparing results between the very different Intel and IBM POWER architectures also led to new insights. Collectively, our results suggest the following. (i) The cost of concurrency is significant in practice; high hardware instrumentation impacts performance negatively on Intel and much more so on POWER8 due to its limited transactional cache capacity. (ii) It is important to implement HyTMs that provide progressiveness for a maximal set of transactions without incurring high hardware instrumentation overhead or using global contending bottlenecks. (iii) There is no easy way to derive more efficient HyTMs by taking advantage of non-speculative accesses supported within the fast-path in POWER8 processors.

Roadmap. The rest of the paper is organized as follows. Section 2 presents details of the HyTM model that extends the model introduced in [3]. Section 3 presents our main lower bound result on the step complexity of slow-path transactions in progressive HyTMs, while Section 4 presents opaque HyTMs that are progressive for a subset of transactions. Section 5 presents results from experiments on Intel Haswell and IBM POWER8 architectures that provide a clear characterization of the cost of concurrency in HyTMs and study the impact of non-speculative (or direct) accesses within hardware transactions on performance. Section 6 presents related work along with concluding remarks. Formal proofs appear in the Appendix.

2 Hybrid transactional memory (HyTM)

Transactional memory (TM). A transaction is a sequence of transactional operations (or t-operations), reads and writes, performed on a set of transactional objects (t-objects). A TM implementation provides a set of concurrent processes with deterministic algorithms that implement reads and writes on t-objects using a set of base objects.

Configurations and executions. A configuration of a TM implementation specifies the state of each base object and each process. In the initial configuration, each base object has its initial value and each process is in its initial state. An event (or step) of a transaction invoked by some process is an invocation of a t-operation, a response of a t-operation, or an atomic primitive operation applied to a base object, along with its response. An execution fragment is a (finite or infinite) sequence of events. An execution of a TM implementation M is an execution fragment where, informally, each event respects the specification of base objects and the algorithms specified by M.

For any finite execution E and execution fragment E′, E·E′ denotes the concatenation of E and E′, and we say that E·E′ is an extension of E. For every transaction identifier k, E|k denotes the subsequence of E restricted to events of transaction T_k. If E|k is non-empty, we say that T_k participates in E. Let txns(E) denote the set of transactions that participate in E. Two executions E and E′ are indistinguishable to a set T of transactions if, for each transaction T_k ∈ T, E|k = E′|k. A transaction T_k ∈ txns(E) is complete in E if E|k ends with a response event. The execution E is complete if all transactions in txns(E) are complete in E. A transaction T_k is t-complete if E|k ends with A_k or C_k; otherwise, T_k is t-incomplete. We consider the dynamic programming model: the read set (resp., the write set) of a transaction T_k in an execution E, denoted Rset_E(T_k) (resp., Wset_E(T_k)), is the set of t-objects that T_k attempts to read (resp., write) by issuing a t-read (resp., t-write) invocation in E (for brevity, we sometimes omit the subscript E from the notation).

We assume that base objects are accessed with read-modify-write (rmw) primitives. A rmw primitive event on a base object b is trivial if, in any configuration, its application does not change the state of b. Otherwise, it is called nontrivial. Events e and e′ of an execution contend on a base object b if they are both primitives applied to b in the execution and at least one of them is nontrivial.

Hybrid transactional memory executions. We now describe the execution model of a hybrid transactional memory (HyTM) implementation. In our HyTM model, shared memory configurations may be modified by accessing base objects via two kinds of primitives: direct and cached. (i) In a direct (also called non-speculative) access, the rmw primitive operates on the memory state: the direct-access event atomically reads the value of the object in the shared memory and, if necessary, modifies it. (ii) In a cached access performed by a process p_i, the rmw primitive operates on the cached state recorded in p_i's tracking set τ_i.

More precisely, τ_i is a set of triples (b, v, m), where b is a base object identifier, v is a value, and m ∈ {shared, exclusive} is an access mode. The triple (b, v, m) is added to the tracking set when p_i performs a cached rmw access of b, where m is set to exclusive if the access is nontrivial and to shared otherwise. We assume that there exists some constant TS (the tracking set capacity) such that the condition |τ_i| ≤ TS must always hold; this condition is enforced by our model. A base object b is present in τ_i with mode m if ∃v: (b, v, m) ∈ τ_i.

Hardware aborts. A tracking set can be invalidated by a concurrent process: if, in a configuration where (b, v, exclusive) ∈ τ_i (resp., (b, v, shared) ∈ τ_i), a process p_j ≠ p_i applies any primitive (resp., any nontrivial primitive) to b, then τ_i becomes invalid and any subsequent event invoked by p_i sets τ_i to ∅ and returns ⊥. We refer to this event as a tracking set abort.

Any transaction executed by a correct process that performs at least one cached access must necessarily perform a cache-commit primitive that determines the terminal response of the transaction. A cache-commit primitive issued by process p_i with a valid τ_i does the following: for each base object b such that (b, v, exclusive) ∈ τ_i, the value of b in shared memory is updated to v. Finally, τ_i is set to ∅ and the operation returns commit. We assume that a fast-path transaction returns A_k as soon as a cached primitive or cache-commit returns ⊥.
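The following toy C++ model illustrates the tracking-set semantics defined above (cached entries, invalidation by a concurrent access, and cache-commit); it is a sketch of the formal model, and every name in it is ours.

    #include <map>

    enum Mode { SHARED, EXCLUSIVE };
    struct Entry { int value; Mode mode; };

    struct TrackingSet {
        std::map<int, Entry> entries;  // base object id -> (cached value, mode)
        bool valid = true;

        // Models a concurrent process accessing base object b.
        void observe_foreign_access(int b, bool nontrivial) {
            auto it = entries.find(b);
            if (it == entries.end()) return;
            // Exclusive entries are invalidated by any access,
            // shared entries only by nontrivial ones.
            if (it->second.mode == EXCLUSIVE || nontrivial) valid = false;
        }

        // cache-commit: apply exclusive entries to memory if still valid.
        bool cache_commit(std::map<int, int>& shared_memory) {
            if (!valid) return false;  // the fast-path transaction aborts
            for (auto& [b, e] : entries)
                if (e.mode == EXCLUSIVE) shared_memory[b] = e.value;
            entries.clear();
            return true;
        }
    };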

Slow-path and fast-path transactions. We partition HyTM transactions into fast-path transactions and slow-path transactions. A slow-path transaction models a regular software transaction: an event of a slow-path transaction is either an invocation or response of a t-operation, or a direct rmw primitive on a base object. A fast-path transaction essentially encapsulates a hardware transaction. Specifically, in any execution E, we say that a transaction T_k ∈ txns(E) is a fast-path transaction if E|k contains at least one cached event. An event of a fast-path transaction is either an invocation or response of a t-operation, a direct trivial access, a cached access, or a cache-commit primitive.

Remark 1 (Tracking set aborts).

Let T_k be any t-incomplete fast-path transaction executed by process p_i, such that (b, v, exclusive) ∈ τ_i (resp., (b, v, shared) ∈ τ_i) after an execution E, and let e be any event (resp., any nontrivial event) on b that some process p_j ≠ p_i is poised to apply after E. Then the next event of T_k in any extension of E·e is A_k.

Remark 2 (Capacity aborts).

Any cached access performed by a process p_i executing a fast-path transaction T_k first checks the condition |τ_i| = TS, where TS is a pre-defined constant, and if the condition holds, it sets τ_i to ∅ and immediately returns ⊥ (we call this a capacity abort).

Direct reads within fast-path. Note that we specifically allow hardware transactions to perform reads without adding the corresponding base object to the process's tracking set, thus modeling the suspend/resume instructions supported by the IBM POWER8 architecture. Note that Intel's HTM does not support this feature: there, an event of a fast-path transaction does not include any direct access to base objects.

HyTM properties. We consider the TM-correctness property of opacity [13]: an execution E is opaque if there exists a legal (every t-read of a t-object returns the value of its latest committed t-write) sequential execution equivalent to some t-completion of E that respects the real-time ordering of transactions in E. We also assume a weak TM-liveness property for t-operations: every t-operation returns a matching response within a finite number of its own steps if running step contention-free from a configuration in which every other transaction is t-complete. Moreover, we focus on HyTMs that provide invisible reads: t-read operations do not perform nontrivial primitives in any execution.

3 Progressive HyTM must perform incremental validation

In this section, we show that it is impossible to implement opaque progressive HyTMs with invisible reads and O(1) step-complexity read operations for slow-path transactions. This result holds even if fast-path transactions may perform direct trivial accesses.

Formally, we say that a HyTM implementation M is progressive for a set T of transactions if, in any execution E of M, whenever a transaction T_k ∈ T returns A_k in E, there exists a concurrent transaction T_m that conflicts with T_k (both access the same t-object and at least one writes) in E [13].

We construct an execution of a progressive opaque HyTM in which every t-read performed by a read-only slow-path transaction must access a number of distinct base objects that is linear in the size of the transaction's read set.

Theorem 3.

Let M be any progressive opaque HyTM implementation providing invisible reads. There exists an execution E of M and some slow-path read-only transaction T_φ ∈ txns(E) that incurs a time complexity of Ω(m²), where m = |Rset(T_φ)|.

Proof sketch. We construct an execution of a read-only slow-path transaction T_φ that performs m distinct t-reads of t-objects X_1, …, X_m. We show inductively that for each j ∈ {1, …, m}, the t-read of X_j must access at least j−1 distinct base objects during its execution. The (partial) steps in our execution are depicted in Figure 1.

For each j, M has an execution of the form depicted in Figure 1(b). Start with the complete step contention-free execution of the slow-path read-only transaction T_φ that performs j−1 t-reads: read_φ(X_1) ··· read_φ(X_{j−1}), followed by the t-complete step contention-free execution of a fast-path transaction T_j that writes a new value nv_j to X_j and commits, and then the complete step contention-free execution fragment of T_φ that performs its jth t-read: read_φ(X_j). Indeed, by progressiveness, T_j cannot incur tracking set aborts, and since it accesses only a single t-object, it cannot incur capacity aborts. Moreover, in this execution, the t-read of X_j by slow-path transaction T_φ must return the value nv_j written by fast-path transaction T_j, since this execution is indistinguishable to T_φ from the execution in Figure 1(a).

We now construct j−1 different executions of the form depicted in Figure 1(c): for each ℓ < j, a fast-path transaction T_ℓ (preceding T_j in real-time ordering, but invoked after the j−1 t-reads by T_φ) writes a new value nv_ℓ to X_ℓ and commits, followed by the t-read of X_j by T_φ. Observe that T_ℓ and T_j, which access mutually disjoint data sets, cannot contend on each other: if they did, they would concurrently contend on some base object and incur a tracking set abort, thus violating progressiveness. Indeed, by the TM-liveness property we assumed (cf. Section 2) and invisible reads for T_φ, each of these executions exists.

In each of these executions, the final t-read of X_j cannot return the new value nv_j: the only possible serialization for the transactions is T_ℓ, T_j, T_φ; but the read of X_ℓ performed by T_φ that returns the initial value is not legal in this serialization, a contradiction to the assumption of opacity. In other words, slow-path transaction T_φ is forced to verify the validity of the t-objects in Rset(T_φ). Finally, we note that, for all ℓ ≠ ℓ′, the fast-path transactions T_ℓ and T_ℓ′ access mutually disjoint sets of base objects, thus forcing the t-read of X_j to access at least j−1 different base objects in the worst case. Consequently, for all j ∈ {1, …, m}, slow-path transaction T_φ must perform at least j−1 steps while executing the jth t-read in such an execution.

How STM implementations mitigate the quadratic lower bound on step complexity. NOrec [9] is a progressive opaque STM that minimizes the average step complexity resulting from incremental validation of t-reads. Transactions read a global versioned lock at the start, and perform value-based validation during t-read operations iff the global version has changed. TL2 [11] improves over NOrec by circumventing the lower bound of Theorem 3. Concretely, TL2 associates a version with each t-object, updated alongside it by committing transactions, and performs validation with O(1) complexity during t-reads by simply verifying whether the version of the t-object is greater than the global version read at the start of the transaction. Technically, NOrec and the algorithms in this paper provide a stronger definition of progressiveness: a transaction may abort only if there is a prefix in which it conflicts with another transaction and both are t-incomplete. TL2, on the other hand, allows a transaction to abort due to any concurrent conflicting transaction.
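The following C++ sketch illustrates NOrec-style value-based validation (after [9], heavily simplified; all names are ours): a read re-validates the whole read set by value only when the global sequence lock has changed since it was last sampled.

    #include <atomic>
    #include <cstdint>
    #include <vector>

    extern std::atomic<uint64_t> seqlock;  // even: quiescent; odd: writer committing

    struct ReadEntry { std::atomic<uint64_t>* addr; uint64_t val; };
    struct NorecTxn { uint64_t snapshot; std::vector<ReadEntry> rset; };

    // Re-read every previously read location; returns false if any value changed.
    bool revalidate(NorecTxn& t) {
        for (;;) {
            uint64_t s = seqlock.load();
            if (s & 1) continue;  // a writer is committing; retry
            for (auto& e : t.rset)
                if (e.addr->load() != e.val) return false;
            if (seqlock.load() == s) { t.snapshot = s; return true; }
        }
    }

    uint64_t norec_read(NorecTxn& t, std::atomic<uint64_t>& x, bool& ok) {
        uint64_t v = x.load();
        while (t.snapshot != seqlock.load()) {  // memory changed since snapshot
            if (!revalidate(t)) { ok = false; return 0; }
            v = x.load();
        }
        t.rset.push_back({&x, v});
        ok = true;
        return v;
    }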

Implications for disjoint-access parallelism in HyTM. The property of disjoint-access parallelism (DAP), in its weakest form, ensures that two transactions concurrently contend on the same base object only if their data sets are connected in the conflict graph, which captures data-set overlaps among all concurrent transactions [5]. It is well known that weak DAP STMs with invisible reads must perform incremental validation even if the required TM-progress condition requires transactions to commit only in the absence of any concurrent transaction [13, 17]. For example, DSTM [14] is a weak DAP STM that is progressive and consequently incurs this validation complexity. On the other hand, TL2 and NOrec are not weak DAP, since they employ a global versioned lock that mitigates the cost of incremental validation but allows two transactions accessing disjoint data sets to concurrently contend on the same memory location. Indeed, this observation inspires the proof of Theorem 3.

Figure 1: Proof steps for Theorem 3.
(a) Slow-path transaction T_φ performs j−1 distinct t-reads (each returning the initial value), followed by the t-read of X_j that returns the value written by fast-path transaction T_j.
(b) Fast-path transaction T_j does not contend with any of the j−1 t-reads performed by T_φ, and must be committed in this execution since it cannot incur a tracking set or capacity abort. The t-read of X_j must return nv_j because this execution is indistinguishable to T_φ from the one in (a).
(c) In each of these j−1 executions, the fast-path transactions cannot incur a tracking set or capacity abort. By opacity, the t-read of X_j by T_φ cannot return the new value nv_j. Therefore, to distinguish the j−1 different executions, the t-read of X_j by slow-path transaction T_φ is forced to access j−1 different base objects.

4 Hybrid transactional memory algorithms

                                    | Algorithm 1 | Algorithm 2                       | TLE      | Hybrid NOrec
Instrumentation in fast-path reads  | per-read    | constant                          | constant | constant
Instrumentation in fast-path writes | per-write   | per-write                         | constant | constant
Validation in slow-path reads       | O(|Rset|)   | O(|Rset|)                         | none     | O(|Rset|), but only if concurrency is detected
h/w-s/w concurrency                 | progressive | progressive for slow-path readers | zero     | not progressive, but small contention window
Direct accesses inside fast-path    | yes         | no                                | no       | yes
Opacity                             | yes         | yes                               | yes      | yes
Figure 2: Table summarizing the complexities of the HyTM implementations

Instrumentation-optimal progressive HyTM (Algorithm 1). We describe a HyTM algorithm that is optimal with respect to both the lower bound of Theorem 3 and the bound on the instrumentation cost of fast-path transactions established in [3]. Pseudocode appears in Algorithm 1. For every t-object X_j, our implementation maintains a base object v_j that stores the value of X_j and a sequence lock r_j.

Fast-path transactions: For a fast-path transaction T_k executed by process p_i, the t-read read_k(X_j) first reads r_j (directly) and returns A_k if some other process holds a lock on X_j; otherwise, it returns the value of v_j. As with read_k(X_j), write_k(X_j, v) returns A_k if some other process holds a lock on X_j; otherwise process p_i increments the sequence lock r_j and writes v to v_j. If the cache has not been invalidated, p_i updates the shared memory during tryC_k by invoking the commit-cache primitive.

Slow-path read-only transactions: A read_k(X_j) invoked by a slow-path transaction first reads the value of the t-object from v_j, adds r_j along with its current value to Rset(T_k) if it is not held by a concurrent transaction, and then performs validation of its entire read set to check whether any of its entries have been modified. If either check fails, the transaction returns A_k. Otherwise, it returns the value of v_j. Validation of the read set is performed by re-reading the values of the sequence lock entries stored in Rset(T_k).

Slow-path updating transactions: An updating slow-path transaction T_k attempts to obtain exclusive write access to its entire write set. If all the locks on the write set were acquired successfully, T_k performs validation of the read set and, if successful, updates the values of the t-objects in shared memory, releases the locks and returns C_k; otherwise, it aborts the transaction.

Direct accesses inside fast-path: Note that opacity is not violated even if the accesses of the sequence lock r_j during t-read are performed directly, without incurring tracking set aborts.

Instrumentation-optimal HyTM that is progressive only for slow-path reading transactions (Algorithm 2). Algorithm 2 does not incur the linear instrumentation cost on fast-path reading transactions (inherent to Algorithm 1), but provides progressiveness only for slow-path reading transactions. The instrumentation cost on fast-path t-reads is avoided by using a global lock that serializes all updating slow-path transactions during the tryC procedure. Fast-path transactions simply check whether this lock is held, without acquiring it (similar to TLE [23]). While the per-read instrumentation overhead is avoided, Algorithm 2 still incurs the per-write instrumentation cost.

Sacrificing progressiveness and minimizing the contention window. Observe that the lower bound in Theorem 3 assumes progressiveness for both slow-path and fast-path transactions, along with opacity and invisible reads. Note that Algorithm 2 retains the validation step-complexity cost, since it provides progressiveness for slow-path readers.

Hybrid NOrec [8] is a HyTM implementation that does not satisfy progressiveness (unlike its STM counterpart NOrec), but mitigates the step-complexity cost on slow-path transactions by performing incremental validation during a transactional read iff the shared memory has changed since the start of the transaction. Conceptually, Hybrid NOrec uses a global sequence lock gsl that is incremented at the start and end of each transaction's commit procedure. Readers can use the value of gsl to determine whether shared memory has changed between two configurations. Unfortunately, with this approach, two fast-path transactions will always conflict on gsl if their commit procedures are concurrent. To reduce the contention window for fast-path transactions, gsl is actually implemented as two separate locks (the second one called esl). A slow-path transaction locks both esl and gsl while it is committing. Instead of incrementing gsl, a fast-path transaction checks whether esl is locked and aborts if it is. Then, at the end of the fast-path transaction's commit procedure, it increments gsl twice (quickly locking and releasing it) and immediately commits in hardware. Although the window for fast-path transactions to contend on gsl is small, our experiments have shown that contention on gsl has a significant impact on performance.
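A hedged sketch of this two-lock scheme follows (after [8, 25]; the helper names and the exact locking discipline are our assumptions). The fast path only checks esl and bumps gsl at the very end of its commit, while a slow-path commit holds both locks.

    #include <atomic>
    #include <cstdint>

    extern std::atomic<uint64_t> gsl;  // global sequence lock (odd while locked)
    extern std::atomic<uint64_t> esl;  // extra lock taken by slow-path commits

    // At the end of a hardware transaction, just before committing:
    inline bool fastpath_commit_check() {
        if (esl.load() & 1) return false;  // a slow-path commit is in progress: abort
        gsl.store(gsl.load() + 2);         // "increment twice": lock+release in one step
        return true;                       // then commit in hardware
    }

    // Around a slow-path commit:
    inline void slowpath_commit_enter() {
        uint64_t e = esl.load();
        while ((e & 1) || !esl.compare_exchange_weak(e, e + 1)) e = esl.load();
        gsl.fetch_add(1);                  // gsl becomes odd: memory is changing
    }
    inline void slowpath_commit_exit() {
        gsl.fetch_add(1);                  // gsl even again
        esl.fetch_add(1);                  // release esl
    }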

Algorithm 1: instrumentation-optimal progressive HyTM

Shared objects:
    v_j, value of each t-object X_j
    r_j, a sequence lock for each t-object X_j

Code for fast-path transaction T_k (executed by process p_i):

read_k(X_j):
    ov_j := v_j                              // cached read of the value
    or_j := r_j                              // direct read of the sequence lock
    if or_j.isLocked() then return A_k
    return ov_j

write_k(X_j, v):
    or_j := r_j
    if or_j.isLocked() then return A_k
    r_j := or_j.IncSequence()                // cached increment of the sequence lock
    v_j := v                                 // cached write of the value
    return OK

tryC_k():
    return commit-cache_i

Code for slow-path transaction T_k:

read_k(X_j):
    if X_j ∈ Wset(T_k) then return Wset(T_k).locate(X_j)
    or_j := r_j
    ov_j := v_j
    Rset(T_k) := Rset(T_k) ∪ {⟨X_j, or_j⟩}
    if or_j.isLocked() then return A_k
    if not validate() then return A_k
    return ov_j

write_k(X_j, v):
    or_j := r_j
    nv_j := v
    if or_j.isLocked() then return A_k
    Wset(T_k) := Wset(T_k) ∪ {⟨X_j, nv_j, or_j⟩}
    return OK

tryC_k():
    if Wset(T_k) = ∅ then return C_k
    if not acquire(Wset(T_k)) then return A_k
    if not validate() then
        release(Wset(T_k))
        return A_k
    for each X_j ∈ Wset(T_k) do v_j := nv_j
    release(Wset(T_k))
    return C_k

Function acquire(Q):
    for each X_j ∈ Q do
        if r_j.tryLock() then Lset(T_k) := Lset(T_k) ∪ {X_j}
        else
            release(Lset(T_k))
            return false
    return true

Function release(Q):
    for each X_j ∈ Q do r_j := or_j.unlock()
    return OK

Function validate():
    if ∃ X_j ∈ Rset(T_k): or_j.getSequence() ≠ r_j.getSequence() then return false
    return true

Algorithm 2: instrumentation-optimal HyTM that is progressive only for slow-path readers (only the differences from Algorithm 1 are shown)

Additional shared objects:
    L, global lock

Code for fast-path transaction T_k:

start_k():
    if L.isLocked() then return A_k

read_k(X_j):
    ov_j := v_j                              // no metadata access
    return ov_j

write_k(X_j, v):
    or_j := r_j
    r_j := or_j.IncSequence()
    v_j := v
    return OK

tryC_k():
    return commit-cache_i

Code for slow-path transaction T_k (reads and writes as in Algorithm 1):

tryC_k():
    if Wset(T_k) = ∅ then return C_k
    L.Lock()
    if not acquire(Wset(T_k)) then return A_k
    if not validate() then
        release(Wset(T_k))
        return A_k
    for each X_j ∈ Wset(T_k) do v_j := nv_j
    release(Wset(T_k))
    return C_k

Function release(Q):
    for each X_j ∈ Q do r_j := or_j.unlock()
    L.unlock()
    return OK

5 Evaluation

In this section, we study the performance characteristics of Algorithms 1 and 2, Hybrid NOrec, TLE and TL2. Our experimental goals are: (G1) to study the performance impact of instrumentation on the fast-path and validation on the slow-path, (G2) to understand how HyTM algorithm design affects performance with Intel and IBM POWER8 HTMs, and (G3) to determine whether direct accesses can be used to obtain performance improvements on IBM POWER8 using the supported suspend/resume instructions to escape from a hardware transaction.

Experimental system (Intel). The experimental system is a 2-socket Intel E7-4830 v3 with 12 cores per socket and 2 hyperthreads (HTs) per core, for a total of 48 threads. Each core has a private 32KB L1 cache and 256KB L2 cache (shared between HTs on a core). All cores on a socket share a 30MB L3 cache. This system has a non-uniform memory architecture (NUMA) in which threads have significantly different access costs to different parts of memory depending on which processor they are currently executing on. The machine has 128GB of RAM, and runs Ubuntu 14.04 LTS. All code was compiled with the GNU C++ compiler (G++) 4.8.4 with build target x86_64-linux-gnu and compilation options -std=c++0x -O3 -mx32.

We pin threads so that the first socket is saturated before we place any threads on the second socket. Thus, thread counts 1-24 run on a single socket. Furthermore, hyperthreading is engaged on the first socket for thread counts 13-24, and on the second socket for thread counts 37-48. Consequently, our graphs clearly show the effects of NUMA and hyperthreading.

Experimental system (IBM POWER8). The experimental system is an IBM S822L with 2x 12-core 3.02GHz processor cards and 128GB of RAM, running Ubuntu 16.04 LTS. All code was compiled using G++ 5.3.1. This is a dual-socket machine, and each socket has two NUMA zones. It is expensive to access memory on a different NUMA zone, and even more expensive if the NUMA zone is on a different socket. POWER8 uses the L2 cache for detecting tracking set aborts, and limits the size of a transaction's read- and write-set to 8KB each [21]. This is in contrast to Intel, which tracks conflicts on the entire L3 cache, and only limits a transaction's read-set to the L3 cache size and its write-set to the L1 cache size.

We pin one thread on each core within a NUMA zone before moving to the next zone. We remark that, unlike the thread pinning policy for Intel, which saturated the first socket before moving to the next, this proved to be the best policy for POWER8, which experiences severe negative scaling when threads are saturated on a single 8-way hardware multi-threaded core. This is because all threads on a core share resources, including the L1 and L2 caches, a single branch execution pipeline, and only two load-store pipelines.

Hybrid TM implementations. For TL2, we used the implementation published by its authors. We implemented the other algorithms in C++. Each hybrid TM algorithm first attempts to execute a transaction on the fast-path, and will continue to execute on the fast-path until the transaction has experienced 20 aborts, at which point it falls back to the slow-path. On POWER8, we also implemented a variant of Algorithm 1 in which each read of a sequence lock during a transactional read operation is enclosed within a pair of suspend/resume instructions, so that sequence locks are accessed without incurring tracking set aborts. We remark that this does not affect the opacity of the implementation. We also implemented a variant of Hybrid NOrec in which the update to gsl is performed using a fetch-increment primitive between suspend/resume instructions, as recommended in [25]; below we refer to it as the non-speculative-gsl variant.
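For illustration, a sequence-lock read enclosed in suspend/resume might look roughly as follows on POWER8, using GCC's PowerPC HTM builtins (__builtin_tsuspend/__builtin_tresume); the surrounding structure is a hedged sketch, not our actual implementation.

    #include <cstdint>

    extern volatile uint64_t seq_locks[];  // per-object sequence locks (illustrative)

    // Read a sequence lock without adding its cache line to the transaction's
    // read set, so a concurrent update to the lock cannot abort this transaction.
    inline uint64_t direct_read_seqlock(size_t j) {
        __builtin_tsuspend();       // leave speculative mode (requires -mhtm)
        uint64_t s = seq_locks[j];  // non-transactional load
        __builtin_tresume();        // re-enter speculative mode
        return s;
    }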

In each algorithm, instead of placing a lock next to each address in memory, we allocated a global array of one million locks, and used a simple hash function to map each address to one of these locks. This avoids the problem of having to change a program’s memory layout to incorporate locks, and greatly reduces the amount of memory needed to store locks, at the cost of some possible false conflicts since many addresses map to each lock. Note that the exact same approach was taken by the authors of TL2.
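A sketch of this mapping (with an illustrative table size and hash function) might look as follows; dropping the low-order bits sends all addresses in a cache line to the same lock, and many addresses map to each lock.

    #include <cstddef>
    #include <cstdint>

    const size_t NUM_LOCKS = 1 << 20;  // roughly one million lock words
    extern volatile uint64_t lock_table[NUM_LOCKS];

    inline volatile uint64_t* lock_for(const void* addr) {
        uintptr_t a = reinterpret_cast<uintptr_t>(addr);
        // Many addresses map to each lock, so false conflicts are possible.
        return &lock_table[((a >> 6) * 2654435761u) % NUM_LOCKS];
    }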

We chose not to compile the hybrid TMs as separate libraries, since invoking library functions for each read and write can cause algorithms to incur enormous overhead. Instead, we compiled each hybrid TM directly into the code that uses it.

Experimental methodology. We used a simple unbalanced binary search tree (BST) microbenchmark as a vehicle to study the performance of our implementations. The BST implements a dictionary, which contains a set of keys, each with an associated value. For each TM algorithm and update rate U, we run six timed trials for several thread counts n. Each trial proceeds in two phases: prefilling and measuring. In the prefilling phase, n concurrent threads perform 50% Insert and 50% Delete operations on keys drawn uniformly randomly from a fixed key range until the size of the tree converges to a steady state (containing approximately half of the keys in the range). Next, the trial enters the measuring phase, during which threads begin counting how many operations they perform. In this phase, each thread performs (U/2)% Insert, (U/2)% Delete and (100−U)% Search operations, on keys/values drawn uniformly from the key range, for one second.

Uniformly random updates to an unbalanced BST have been proven to yield trees of logarithmic height with high probability. Thus, in this type of workload, almost all transactions succeed in hardware, and the slow-path is almost never used. To study performance when transactions regularly run on the slow-path, we introduced an operation called RangeIncrement that often fails in hardware and must run on the slow-path. A RangeIncrement atomically increments the values associated with each key in a specified range that is present in the tree. Note that a RangeIncrement is more likely to experience data conflicts and capacity aborts than BST updates, which only modify a single node.

We consider two types of workloads: (W1) all n threads perform Insert, Delete and Search, and (W2) n−1 threads perform Insert, Delete and Search while one thread performs only RangeIncrement operations. Figure 3 shows the results for both types of workloads.

Figure 3: Results for a BST microbenchmark on the 2x12-core Intel E7-4830 v3 and the 2x12-core IBM POWER8, for workloads W1 (no threads perform RangeIncrement) and W2 (one thread performs RangeIncrement), at 0%, 10% and 40% updates. The x-axis represents the number of concurrent threads. The y-axis represents operations per microsecond.

Results (Intel). We first discuss the 0% updates graph for workload type W1. In this graph, essentially all operations committed in hardware: in each trial, only a small fraction of one percent of operations ran on the slow-path. Thus, any performance differences shown in the graph are essentially differences in the performance of the algorithms' respective fast-paths (with the exception of TL2). Algorithm 1, which has instrumentation in its fast-path read operations, has significantly lower performance than Algorithm 2, which does not. Since this is a read-only workload, this instrumentation is responsible for the performance difference.

In the W1 workloads, TLE, Algorithm 2 and Hybrid NOrec perform similarly (with a small performance advantage for Hybrid NOrec at high thread counts). This is because the fast-paths for these three algorithms have similar amounts of instrumentation: there is no instrumentation for reads or writes, and the transaction itself incurs one or two metadata accesses. In contrast, in the W2 workloads, TLE performs quite poorly compared to the HyTM algorithms. In these workloads, transactions must periodically run on the slow-path, and in TLE, this entails acquiring a global lock that restricts progress for all other threads. At high thread counts this significantly impacts performance. Its performance decreases as the sizes of the ranges passed to RangeIncrement increase. Its performance is also negatively impacted by NUMA effects at thread counts higher than 24. (This is because, when a thread p reads the lock and incurs a cache miss, if the lock was last held by another thread on the same socket, then p can fill the cache miss by loading the lock state from the shared L3 cache. However, if the lock was last held by a thread on a different socket, then p must read the lock state from main memory, which is significantly more expensive.) On the other hand, in each graph in the W2 workloads, the performance of each HyTM (and TL2) is similar to its performance in the corresponding W1 workload graph. For Algorithm 1 (and TL2), this is because of progressiveness. Although Algorithm 2 is not truly progressive, fast-path transactions will abort only if they are concurrent with the commit procedure of a slow-path transaction. In RangeIncrement operations, there is a long read-only prefix (which is exceptionally long because of Algorithm 2's quadratic validation) followed by a relatively small set of writes. Thus, RangeIncrement operations have relatively little impact on the fast-path. The explanation is similar for Hybrid NOrec (except that it performs less validation than Algorithm 2).

Observe that the performance of Hybrid NOrec decreases slightly, relative to Algorithm 2, after 24 threads. Recall that, in Hybrid NOrec, the global sequence number is a single point of contention on the fast-path. (In Algorithm 2, the global lock is only modified by slow-path transactions, so fast-path transactions do not have a single point of contention.) We believe this is due to NUMA effects, similar to those described in [6]. Specifically, whenever a thread on the first socket performs a fast-path transaction that commits and modifies the global sequence number, it causes cache invalidations for all other threads. Threads on the second socket must then load the new value from main memory, which takes much longer than loading it from the shared L3 cache. This lengthens each transaction's window of contention, making it more likely to abort. (In the 0% updates graph in the W2 workload, we still see this effect, because there is a thread performing RangeIncrement operations.)

Results (IBM POWER8). Algorithm 1 performs poorly on POWER8: POWER8 transactions can only load 64 cache lines before they abort [22]. Transactions read locks and tree nodes, which are in different cache lines: together, they often exceed 64 cache lines loaded in a tree operation, so most transactions cannot succeed in hardware. Consequently, on POWER8, it is incredibly important either to have minimal instrumentation in transactions, or for metadata to be located in the same cache lines as program data. Of course, the latter is not possible for HyTMs, which do not have control over the layout of program data. Consequently, Algorithm 2 easily outperforms Algorithm 1 on POWER8 by avoiding the per-read instrumentation.

Algorithm 1 is improved slightly by using the (expensive, on POWER8) suspend/resume instructions on sequence locks during transactional reads, but it still performs relatively poorly. To make suspend/resume a practical tool, one could imagine collecting several metadata accesses and performing them together to amortize the cost of a suspend/resume pair. For instance, in Algorithm 1, one might try to update the locks for all of the transactional writes at once, when the transaction commits. Typically, one would accomplish this by logging all writes so that a process can remember which addresses it must lock at commit time. However, logging the writes inside the transaction would be at least as costly as just performing them.

Observe that Hybrid NOrec does far worse with updates on POWER8 than on the Intel machine. This is because fetch-increment on a single location experiences severe negative scaling on the POWER8 processor: for example, in one second, a single thread can perform 37 million fetch-add operations, while 6 threads perform a total of only 9 million and 24 threads perform only 4 million fetch-add operations. In contrast, the Intel machine performs 32 million operations with 6 threads and 45 million with 24 threads. This is likely because the Intel processor provides fetch-add instructions, while fetch-add must be emulated on the POWER8 processor.
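For reference, the kind of microbenchmark behind these numbers has roughly the following shape (our reconstruction; the thread counts and one-second timing shown are illustrative):

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<uint64_t> counter{0};
    std::atomic<bool> stop{false};

    int main() {
        for (int n : {1, 6, 24}) {
            counter = 0; stop = false;
            std::vector<std::thread> workers;
            for (int i = 0; i < n; ++i)
                workers.emplace_back([] { while (!stop) counter.fetch_add(1); });
            std::this_thread::sleep_for(std::chrono::seconds(1));
            stop = true;
            for (auto& w : workers) w.join();
            std::printf("%2d threads: %llu fetch-adds/s\n", n,
                        (unsigned long long)counter.load());
        }
        return 0;
    }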

In the non-speculative-gsl variant of Hybrid NOrec, the direct increment of gsl actually makes performance worse. Recall that, in the original Hybrid NOrec, if a fast-path transaction T_1 increments gsl and then a software transaction T_2 reads gsl (as part of validation) before T_1 commits, then T_1 will abort, and T_2 will not see T_1's change to gsl. So, T_2 will have a higher chance of avoiding incremental validation (and, hence, will likely take less time to run, and have a smaller contention window). However, in the non-speculative variant, once T_1 increments gsl, T_2 will see the change to gsl regardless of whether T_1 commits or aborts. Thus, T_2 will be forced to perform incremental validation. In our experiments, we observed that a much larger number of transactions ran on the fallback path in the non-speculative variant than in the original Hybrid NOrec (often several orders of magnitude more).

6 Related work and discussion

HyTM implementations and complexity. Early HyTMs like the ones described in [10, 16] provided progressiveness, but subsequent HyTM proposals like PhTM [19] and Hybrid NOrec [8] sacrificed progressiveness for lower instrumentation overheads. However, the clear trade-off between concurrency and instrumentation for these HyTMs had not been studied in the context of currently available HTM architectures. The instrumentation cost on the fast-path was precisely characterized in [3]. In this paper, we proved the inherent cost of concurrency on the slow-path, thus establishing a surprising, but intuitive, complexity separation between progressive STMs and HyTMs. Moreover, to the best of our knowledge, this is the first work to characterize the cost of concurrency in HyTMs both in theory and in practice (on currently available HTM architectures). The proof of Theorem 3 is based on the analogous proof for the step complexity of STMs that are disjoint-access parallel [17, 13]. Our implementation of Hybrid NOrec follows [25], which additionally proposed the use of direct accesses in fast-path transactions to reduce instrumentation overhead in the AMD Advanced Synchronization Facility (ASF) architecture.

Beyond the two-path HyTM approach: employing an uninstrumented fast fast-path. We now describe how every transaction may first be executed on a "fast" fast-path with almost no instrumentation and, if unsuccessful, re-attempted on the (instrumented) fast-path and subsequently on the slow-path. Specifically, we transform any opaque HyTM M into an opaque HyTM M' by adding a shared fetch-and-add metadata base object F that slow-path updating transactions increment at the start (and, resp., decrement at the end) of the transaction. In M', a "fast" fast-path transaction first checks whether F is zero and, if not, aborts the transaction; otherwise, the transaction continues as an uninstrumented hardware transaction. The code for the fast-path and the slow-path is identical to M.
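A sketch of this transformation (the counter F and the surrounding structure are our illustration) is:

    #include <atomic>

    std::atomic<int> F{0};  // active slow-path updating transactions

    void slow_path_update_begin() { F.fetch_add(1); }  // at the start
    void slow_path_update_end()   { F.fetch_sub(1); }  // at the end

    // Body of a "fast" fast-path attempt. Assumed to run inside a hardware
    // transaction (e.g., between _xbegin() and _xend()): reading F subscribes
    // this transaction to it, so a later increment by a slow-path updater
    // aborts us through the HTM's own conflict detection.
    template <class Body>
    bool fast_fast_path_body(Body body) {
        if (F.load() != 0) return false;  // updater active: use instrumented path
        body();                           // uninstrumented speculative execution
        return true;
    }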

Other approaches. Recent work has investigated fallback to reduced hardware transactions [20], in which an all-software slow-path is augmented with a slightly faster fallback path that is optimistically used to avoid running some transactions on the true software-only slow-path. Amalgamated lock elision (ALE), proposed in [2], improves over TLE by executing the slow-path as a series of segments, each of which is a dynamic-length hardware transaction. Invyswell [7] is a HyTM design with multiple hardware and software modes of execution that provides the flexibility to avoid instrumentation overhead in uncontended executions.

We remark that such multi-path approaches may easily be applied to each of the algorithms proposed in this paper. However, in the search for an efficient HyTM, it is important to strike a fine balance between concurrency, hardware instrumentation and software validation cost. Our lower bound, experimental methodology and evaluation of HyTMs provide the first clear characterization of these trade-offs on both Intel and POWER8 architectures.

References

  • [1] Advanced Synchronization Facility Proposed Architectural Specification, March 2009. http://developer.amd.com/wordpress/media/2013/09/45432-ASF_Spec_2.1.pdf.
  • [2] Yehuda Afek, Alexander Matveev, Oscar R. Moll, and Nir Shavit. Amalgamated lock-elision. In Proceedings of 29th Int. Sym. on Distributed Computing, DISC ’15, pages 309–324, 2015. URL: http://dx.doi.org/10.1007/978-3-662-48653-5_21, doi:10.1007/978-3-662-48653-5_21.
  • [3] Dan Alistarh, Justin Kopinsky, Petr Kuznetsov, Srivatsan Ravi, and Nir Shavit. Inherent limitations of hybrid transactional memory. In Proceedings of 29th Int. Sym. on Distributed Computing, DISC ’15, pages 185–199, 2015. URL: http://dx.doi.org/10.1007/978-3-662-48653-5_13, doi:10.1007/978-3-662-48653-5_13.
  • [4] Hagit Attiya, Sandeep Hans, Petr Kuznetsov, and Srivatsan Ravi. Safety of deferred update in transactional memory. In 2013 IEEE 33rd Int. Conf. on Distributed Computing Systems (ICDCS), pages 601–610, Los Alamitos, CA, USA, 2013. IEEE Computer Society. doi:http://doi.ieeecomputersociety.org/10.1109/ICDCS.2013.57.
  • [5] Hagit Attiya, Eshcar Hillel, and Alessia Milani. Inherent limitations on disjoint-access parallel implementations of transactional memory. Theory of Computing Systems, 49(4):698–719, 2011. URL: http://dx.doi.org/10.1007/s00224-010-9304-5, doi:10.1007/s00224-010-9304-5.
  • [6] Trevor Brown, Alex Kogan, Yossi Lev, and Victor Luchangco. Investigating the performance of hardware transactions on a multi-socket machine. In Proceedings of 28th ACM Sym. on Parallelism in Algorithms and Architectures, SPAA ’16, pages 121–132, 2016. URL: http://doi.acm.org/10.1145/2935764.2935796, doi:10.1145/2935764.2935796.
  • [7] Irina Calciu, Justin Gottschlich, Tatiana Shpeisman, Gilles Pokam, and Maurice Herlihy. Invyswell: a hybrid transactional memory for haswell’s restricted transactional memory. In Int. Conf. on Par. Arch. and Compilation, PACT ’14, pages 187–200, 2014. URL: http://doi.acm.org/10.1145/2628071.2628086, doi:10.1145/2628071.2628086.
  • [8] Luke Dalessandro, Francois Carouge, Sean White, Yossi Lev, Mark Moir, Michael L. Scott, and Michael F. Spear. Hybrid NOrec: a case study in the effectiveness of best effort hardware transactional memory. In ASPLOS ’11, pages 39–52. ACM, 2011. URL: http://doi.acm.org/10.1145/1950365.1950373.
  • [9] Luke Dalessandro, Michael F. Spear, and Michael L. Scott. NOrec: streamlining STM by abolishing ownership records. SIGPLAN Not., 45(5):67–78, January 2010. URL: http://doi.acm.org/10.1145/1837853.1693464, doi:10.1145/1837853.1693464.
  • [10] Peter Damron, Alexandra Fedorova, Yossi Lev, Victor Luchangco, Mark Moir, and Daniel Nussbaum. Hybrid transactional memory. SIGPLAN Not., 41(11):336–346, October 2006. URL: http://doi.acm.org/10.1145/1168918.1168900, doi:10.1145/1168918.1168900.
  • [11] Dave Dice, Ori Shalev, and Nir Shavit. Transactional locking II. In Proceedings of the 20th International Conference on Distributed Computing, DISC’06, pages 194–208, Berlin, Heidelberg, 2006. Springer-Verlag. URL: http://dx.doi.org/10.1007/11864219_14, doi:10.1007/11864219_14.
  • [12] K. Fraser. Practical lock-freedom. Technical report, Cambridge University Computer Laboratory, 2003.
  • [13] Rachid Guerraoui and Michal Kapalka. Principles of Transactional Memory, Synthesis Lectures on Distributed Computing Theory. Morgan and Claypool, 2010.
  • [14] Maurice Herlihy, Victor Luchangco, Mark Moir, and William N. Scherer, III. Software transactional memory for dynamic-sized data structures. In Proc. of 22nd Int. Sym. on Principles of Distr. Comp., PODC ’03, pages 92–101, New York, NY, USA, 2003. ACM. URL: http://doi.acm.org/10.1145/872035.872048, doi:10.1145/872035.872048.
  • [15] Maurice Herlihy and J. Eliot B. Moss. Transactional memory: architectural support for lock-free data structures. In ISCA, pages 289–300, 1993.
  • [16] Sanjeev Kumar, Michael Chu, Christopher J. Hughes, Partha Kundu, and Anthony Nguyen. Hybrid transactional memory. In Proceedings of the Eleventh ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP ’06, pages 209–220, New York, NY, USA, 2006. ACM. URL: http://doi.acm.org/10.1145/1122971.1123003.
  • [17] Petr Kuznetsov and Srivatsan Ravi. Progressive transactional memory in time and space. In Proceedings of 13th Int. Conf. on Parallel Computing Technologies, PaCT ’15, pages 410–425, 2015. URL: http://dx.doi.org/10.1007/978-3-319-21909-7_40, doi:10.1007/978-3-319-21909-7_40.
  • [18] Hung Q. Le, G. L. Guthrie, Derek Williams, Maged M. Michael, Brad Frey, William J. Starke, Cathy May, Rei Odaira, and Takuya Nakaike. Transactional memory support in the IBM POWER8 processor. IBM Journal of Research and Development, 59(1), 2015. URL: http://dx.doi.org/10.1147/JRD.2014.2380199.
  • [19] Yossi Lev, Mark Moir, and Dan Nussbaum. Phtm: Phased transactional memory. In In Workshop on Transactional Computing (Transact), 2007.
  • [20] Alexander Matveev and Nir Shavit. Reduced hardware transactions: a new approach to hybrid transactional memory. In Proceedings of the 25th ACM symposium on Parallelism in algorithms and architectures, pages 11–22. ACM, 2013.
  • [21] Takuya Nakaike, Rei Odaira, Matthew Gaudet, Maged M. Michael, and Hisanobu Tomari. Quantitative comparison of hardware transactional memory for Blue Gene/Q, zEnterprise EC12, Intel Core, and POWER8. In Proc. of 42nd Int. Sym. on Comp. Arch., ISCA ’15, pages 144–157, NY, USA, 2015. URL: http://doi.acm.org/10.1145/2749469.2750403.
  • [22] Andrew T. Nguyen. Investigation of hardware transactional memory. 2015. http://groups.csail.mit.edu/mag/Andrew-Nguyen-Thesis.pdf.
  • [23] Ravi Rajwar and James R. Goodman. Speculative lock elision: Enabling highly concurrent multithreaded execution. In Proc. of 34th ACM/IEEE Int. Sym. on Microarchitecture, MICRO ’01, pages 294–305, Washington, DC, USA, 2001. URL: http://dl.acm.org/citation.cfm?id=563998.564036.
  • [24] James Reinders. Transactional Synchronization in Haswell, 2012. http://software.intel.com/en-us/blogs/2012/02/07/transactional-synchronization-in-haswell/.
  • [25] Torvald Riegel, Patrick Marlier, Martin Nowack, Pascal Felber, and Christof Fetzer. Optimizing hybrid transactional memory: The importance of nonspeculative operations. In Proc. of 23rd ACM Sym. on Parallelism in Algs. and Arch., pages 53–64. ACM, 2011.
  • [26] Nir Shavit and Dan Touitou. Software transactional memory. In Principles of Distributed Computing (PODC), pages 204–213, 1995.
  • [27] Richard M. Yoo, Christopher J. Hughes, Konrad Lai, and Ravi Rajwar. Performance evaluation of Intel transactional synchronization extensions for high-performance computing. In Proceedings of Int. Conf. on High Perf. Computing, Networking, Storage and Analysis, SC ’13, pages 19:1–19:11, New York, NY, USA, 2013. URL: http://doi.acm.org/10.1145/2503210.2503232, doi:10.1145/2503210.2503232.

Appendix A Proof of opacity for algorithms

We will prove the opacity of Algorithm 1 even if some of the accesses performed by fast-path transactions are direct (as indicated in the pseudocode). Analogous arguments apply to Algorithm 2.

Let E be any execution of Algorithm 1. Since opacity is a safety property, it is sufficient to prove that every finite execution is opaque [4]. Let <_E denote a total order on the events in E.

Let H denote a subsequence of E constructed by selecting linearization points of the t-operations performed in E. The linearization point of a t-operation op, denoted ℓ_op, is associated with a base object event or an event performed during the execution of op, using the following procedure.

Completions. First, we obtain a completion of E by removing some pending invocations or adding responses to the remaining pending invocations. An incomplete read_k, write_k or tryA_k operation performed by a slow-path transaction T_k is removed from E; an incomplete tryC_k is removed from E if T_k has not performed any write to a base object v_j during tryC_k, and otherwise it is completed by including C_k after E. Every incomplete read_k, write_k, tryA_k and tryC_k performed by a fast-path transaction T_k is removed from E.

Linearization points. Now a linearization H of E is obtained by associating linearization points with the t-operations in the obtained completion of E. For all t-operations performed by a slow-path transaction T_k, linearization points are assigned as follows:

  • For every t-read op_k that returns a non-A_k value, ℓ_{op_k} is chosen as the read of the base object v_j within op_k; otherwise, ℓ_{op_k} is chosen as the invocation event of op_k

  • For every op_k = write_k that returns, ℓ_{op_k} is chosen as the invocation event of op_k

  • For every op_k = tryC_k that returns C_k with Wset(T_k) ≠ ∅, ℓ_{op_k} is associated with the first write to a base object performed when the locks on the write set are released; if op_k returns A_k, ℓ_{op_k} is associated with the invocation event of op_k

  • For every op_k = tryC_k that returns C_k with Wset(T_k) = ∅, ℓ_{op_k} is associated with the response event of op_k

For all t-operations performed by a fast-path transaction T_k, linearization points are assigned as follows:

  • For every t-read op_k that returns a non-A_k value, ℓ_{op_k} is chosen as the cached read of v_j within op_k; otherwise, ℓ_{op_k} is chosen as the invocation event of op_k

  • For every op_k that is a tryC_k, ℓ_{op_k} is the commit-cache primitive invoked by T_k

  • For every op_k that is a write_k, ℓ_{op_k} is the cached write to v_j within op_k.

<_H denotes the total order on t-operations in the complete sequential history H.

Serialization points. The serialization point of a transaction T_k, denoted δ_{T_k}, is associated with the linearization point of a t-operation performed by the transaction.

We obtain a t-complete history H̄ from H as follows. For every transaction T_k in H that is complete, but not t-complete, we insert tryC_k · A_k immediately after the last event of T_k in H. A serialization is then obtained by associating serialization points with transactions in H̄ as follows: if T_k is an updating transaction that commits, then δ_{T_k} is ℓ_{tryC_k}; if T_k is a read-only or aborted transaction, then δ_{T_k} is assigned to the linearization point of the last t-read in T_k that returned a non-A_k value.

<_{H̄} denotes the total order on transactions in the t-sequential history H̄. Since, for a given transaction, its serialization point is chosen between the first and last event of the transaction, if T_i precedes T_j in real time in E, then δ_{T_i} <_E δ_{T_j}, and thus H̄ respects the real-time ordering of transactions.

Throughout this proof, we consider that the process p_i executing a fast-path transaction T_k does not include the sequence lock r_j in its tracking set τ_i when r_j is read directly during read_k(X_j).

Claim 4.

If every transaction in txns(E) is fast-path, then H̄ is legal.

Proof.

Recall that Algorithm 1 performs direct accesses only during the t-read operation, when reading the sequence lock r_j corresponding to the t-object X_j. However, any two fast-path transactions accessing conflicting data sets must necessarily incur a tracking set abort (cf. Remark 1) in E. It follows immediately that H̄ must be legal. ∎

Claim 5.

H̄ is legal, i.e., every t-read returns the value of the latest committed t-write in H̄.

Proof.

We claim that for every read_k(X_j) that returns a value v, there exists some slow-path (resp., fast-path) transaction T_i that performs write_i(X_j, v) and completes the write to the base object v_j in tryC_i (resp., the cached write to v_j) such that δ_{T_i} <_E ℓ_{read_k(X_j)}.

Suppose that T_i is a slow-path transaction: since read_k(X_j) returns the response v, the read of v_j within read_k(X_j) succeeds the write to v_j performed in tryC_i. Since read_k(X_j) can return a non-abort response only after T_i releases the lock on r_j in tryC_i, T_i must be committed in E. Consequently, ℓ_{tryC_i} <_E ℓ_{read_k(X_j)}. Since, for any updating committing transaction, δ_{T_i} = ℓ_{tryC_i}, it follows that δ_{T_i} <_E ℓ_{read_k(X_j)}.

Otherwise, if T_i is a fast-path transaction, then clearly T_i is a committed transaction in E. Recall that read_k(X_j) can read the value v of v_j only after T_i applies the commit-cache primitive. By the assignment of linearization points, ℓ_{tryC_i} <_E ℓ_{read_k(X_j)}, and thus δ_{T_i} <_E ℓ_{read_k(X_j)}.

Thus, to prove that H̄ is legal, it suffices to show that there does not exist a transaction T_m that returns C_m in E and performs write_m(X_j, v′) with v′ ≠ v, such that T_i <_{H̄} T_m <_{H̄} T_k.

T_i and T_m are both updating transactions that commit. Thus, δ_{T_i} = ℓ_{tryC_i} and δ_{T_m} = ℓ_{tryC_m}.

Since read_k(X_j) reads the value of X_j written by T_i, one of the following is true: ℓ_{tryC_i} <_E ℓ_{tryC_m} <_E ℓ_{read_k(X_j)} or ℓ_{tryC_i} <_E ℓ_{read_k(X_j)} <_E ℓ_{tryC_m}.

Suppose that ℓ_{tryC_i} <_E ℓ_{tryC_m} <_E ℓ_{read_k(X_j)}.

(Case I:) T_i and T_m are slow-path transactions.

Thus, T_m returns a response from its acquire invocation in tryC_m before the read of the base object v_j associated with X_j in read_k(X_j). Since T_i and T_m are both committed in E, T_m returns true from acquire only after T_i releases the lock on r_j in tryC_i.

If T_k is a slow-path transaction, recall that read_k(X_j) checks whether X_j is locked by a concurrent transaction and then performs read-set validation before returning a matching response. Indeed, read_k(X_j) must return A_k in any such execution.

If T_k is a fast-path transaction, it follows immediately from Remark 1 that read_k(X_j) must return A_k.

Thus, in either case, read_k(X_j) cannot return v, a contradiction.

(Case II:) T_i is a slow-path transaction and T_m is a fast-path transaction. Thus, T_m returns C_m before the read of the base object v_j associated with X_j in read_k(X_j), but after the response of acquire by T_i in tryC_i. Since read_k(X_j) reads the value of X_j to be v and not v′, T_i performs the write to v_j in tryC_i after T_m performs the commit-cache primitive (since otherwise, T_m would be aborted in E). But T_i holds the lock on r_j throughout this interval, so the check of r_j performed by write_m(X_j, v′) within T_m would observe the lock, and T_m would return A_m, a contradiction.

(Case III:) T_m is a slow-path transaction and T_i is a fast-path transaction. The argument is analogous to that of Case II.

(Case IV:) T_i and T_m are fast-path transactions. The claim follows immediately from Claim 4.

We now need to prove that δ_{T_k} indeed precedes δ_{T_m} in E. Consider the two possible cases. Suppose that T_k is a read-only transaction. Then, δ_{T_k} is assigned to the last t-read performed by T_k that returns a non-A_k value. If read_k(X_j) is not this last t-read, then there exists a read_k(X_ℓ) such that ℓ_{read_k(X_j)} <_E ℓ_{tryC_m} <_E ℓ_{read_k(X_ℓ)}. But then this t-read of X_ℓ must abort, by performing the lock and validation checks in read_k or by incurring a tracking set abort, a contradiction.

Otherwise, suppose that T_k is an updating transaction that commits; then δ_{T_k} = ℓ_{tryC_k}, which implies that ℓ_{read_k(X_j)} <_E ℓ_{tryC_m} <_E ℓ_{tryC_k}. But then tryC_k must necessarily fail the validation of its read set and return A_k, or incur a tracking set abort, a contradiction to the assumption that T_k is a committed transaction. ∎

Since H̄ is legal and respects the real-time ordering of transactions, Algorithm 1 is opaque.

Appendix B Proof of Theorem 3

The proof of the lemma below is a simple extension of the analogous lemma from [3], allowing direct trivial accesses inside fast-path transactions, which in turn is inspired by an analogous result concerning disjoint-access parallel STMs [5]. Intuitively, the proof follows from the fact that the tracking set of a process executing a fast-path transaction is invalidated due to contention on a base object with another transaction (cf. Remark 1).

Lemma 6.

Let M be any progressive HyTM implementation in which fast-path transactions may perform trivial direct accesses. Let E = α·ρ₁·ρ₂ be an execution of M, where ρ₁ (resp., ρ₂) is the step contention-free execution fragment of a transaction T₁ (resp., T₂) executed by process p₁ (resp., p₂), transactions T₁ and T₂ do not conflict in E, and at least one of T₁ or T₂ is a fast-path transaction. Then, T₁ and T₂ do not contend on any base object in E.

Proof.

Suppose, by contradiction, that T₁ and T₂ contend on the same base object in E.

If, in ρ₁, T₁ performs a nontrivial event on a base object on which the two transactions contend, let e₁ be the last such event in ρ₁, applied to some base object b, and let e₂ be the first event in ρ₂ that accesses b. Otherwise, T₁ performs only trivial events in ρ₁ to the base objects on which it contends with T₂ in E (some of these may be direct accesses): in this case, let e₂ be the first event in ρ₂ in which T₂ performs a nontrivial event to some base object b on which they contend, and let e₁ be the last event of ρ₁ that accesses b.

Let ρ₁′ (resp., ρ₂′) be the longest prefix of ρ₁ (resp., ρ₂) that does not include e₁ (resp., e₂). Since the execution α·ρ₁′·ρ₂′ is step contention-free for T₂ before it accesses b, α·ρ₁′·ρ₂′ is an execution of M. By the assumption of the lemma, T₁ and T₂ do not conflict in α·ρ₁′·ρ₂′. By construction, α·ρ₁′·ρ₂′ is indistinguishable to T₁ and T₂ from α·ρ₁·ρ₂. Hence, T₁ and T₂ are poised to apply the contending events e₁ and e₂ on b in the execution α·ρ₁′·ρ₂′.

We now consider two cases:

  1. (e₁ is a nontrivial event) After e₁, the base object b is contained in the tracking set τ₁ of process p₁ in exclusive mode, and in the extension in which p₂ applies e₂, τ₁ is invalidated. Thus, by Remark 1, transaction T₁ must return A₁ in any such extension, a contradiction to the assumption that M is progressive.

  2. (e₁ is a trivial event) Recall that e₁ may potentially be an event involving a direct access. Consider the execution α·ρ₁′·ρ₂′·e₂, following which b is contained in the tracking set τ₂ of process p₂ in exclusive mode. Clearly, we have an extension in which p₁ applies e₁ and τ₂ is invalidated. Thus, transaction T₂ must return A₂ in any extension of α·ρ₁′·ρ₂′·e₂·e₁, a contradiction to the assumption that M is progressive.

Theorem 7.

Let M be any progressive opaque HyTM implementation providing invisible reads. There exists an execution E of M and some slow-path read-only transaction T_φ ∈ txns(E) that incurs a time complexity of Ω(m²), where m = |Rset(T_φ)|.

Proof.

For all j ∈ {1, …, m}, let v_j be the initial value of t-object X_j. Let E_m denote the complete step contention-free execution of a slow-path transaction T_φ that performs m t-reads: read_φ(X_1) ··· read_φ(X_m), such that for all j ∈ {1, …, m}, read_φ(X_j) returns v_j.

Claim 8.

For all j ∈ {1, …, m}, M has an execution of the form E_{j−1} · β_j · γ_j where,

  • E_{j−1} is the complete step contention-free execution of the slow-path read-only transaction T_φ that performs j−1 t-reads: read_φ(X_1) ··· read_φ(X_{j−1}),

  • β_j is the t-complete step contention-free execution of a fast-path transaction T_j that writes nv_j ≠ v_j to X_j and commits,

  • γ_j is the complete step contention-free execution fragment of T_φ that performs its jth t-read: read_φ(X_j) → nv_j.

Proof.

M has an execution of the form E_{j−1} · β_j. Since the data sets of T_φ and T_j are disjoint in this execution, by Lemma 6, transactions T_φ and T_j do not contend on any base object in E_{j−1} · β_j. Moreover, since T_j accesses only a single t-object, fast-path transaction T_j cannot incur a capacity abort. Thus, E_{j−1} · β_j is also an execution of M.

By opacity, the t-read of X_j performed by T_φ in the extension of E_{j−1} · β_j must return nv_j: the resulting execution is indistinguishable to T_φ from the sequential execution depicted in Figure 1(a), in which read_φ(X_j) must return nv_j. Thus, M has an execution of the form E_{j−1} · β_j · γ_j. ∎