Write-Optimized and Consistent RDMA-based NVM Systems

06/18/2019 · by Xinxin Liu, et al.

To deliver high performance in cloud computing, systems commonly leverage RDMA (Remote Direct Memory Access) in the network and NVM (Non-Volatile Memory) in end systems. Because it does not involve the remote CPU, one-sided RDMA is efficient for accessing remote memory, and NVM technologies offer non-volatility, byte-addressability and DRAM-like latency. To achieve end-to-end high performance, many efforts aim to synergize one-sided RDMA and NVM. However, guaranteeing Remote Data Atomicity (RDA) has so far required extra network round-trips, remote CPU participation and double NVM writes. To address these problems, we propose a zero-copy log-structured memory design for Efficient Remote Data Atomicity, called Erda. In Erda, clients directly transfer data to the destination address at servers via one-sided RDMA writes, without redundant copies or remote CPU consumption. To detect incomplete fetched data, clients verify a checksum without client-server coordination. We further ensure metadata consistency by leveraging an 8-byte atomic update in the hash table, which also retains the address information of the stale data. When a failure occurs, the server properly restores to a consistent version. Experimental results show that, compared with Redo Logging (a CPU-involvement scheme) and Read After Write (a network-dominant scheme), Erda reduces NVM writes by approximately 50%, significantly improves throughput and decreases latency.


1 Introduction

Cloud computing requires high performance in both network transmission and local I/O throughput. Remote direct memory access (RDMA) technologies have become increasingly important for cloud computing [25, 9]. RDMA allows direct access to remote memory, bypassing the kernel and avoiding memory copies, thus providing high bandwidth and low latency for remote memory accesses [26]. Moreover, non-volatile memory (NVM) technologies offer non-volatility, byte-addressability, high density and DRAM-class latency in end systems. NVM can be accessed directly through the network via the RDMA protocol and through the local memory bus via CPU load/store instructions [39]. Many schemes thus synergize RDMA and NVM to deliver end-to-end high performance [12, 15, 18, 14, 25, 36].

Since one-sided RDMA operations (read, write and atomic) do not involve the remote CPU while two-sided operations (send and recv) do, one-sided primitives provide higher bandwidth and lower latency than two-sided ones. For CPU-intensive workloads, even if one-sided primitives require more network round-trips than two-sided primitives, they are still faster [28, 18]. However, using one-sided RDMA to access remote NVM becomes inefficient due to the challenge of guaranteeing Remote Data Atomicity (RDA): incomplete writes caused by failures are durable in NVM, resulting in inconsistent data. The server is unaware of incomplete and invalid data in NVM because one-sided RDMA operations do not involve its CPU. The client is likewise unaware of possible data loss at the server, because the returned ACK of an RDMA write merely means that the data have reached the volatile cache of the server's NIC; they may still fail to be flushed into NVM.

However, many existing RDMA-based NVM systems overlook RDA and suffer in system performance [18, 12, 14, 2]. For example, in the collect-dispatch transaction in Octopus [18], a coordinator uses one-sided RDMA writes to update the write sets in participants, whose CPUs are not involved and hence cannot detect incomplete data. If a failure occurs before the written data are fully flushed from the volatile cache of a participant's NIC into NVM, the write set will be partially applied and made durable in NVM; the result is neither the "old" nor the "new" version, and is thus inconsistent [39, 25].

In order to guarantee RDA, some schemes issue an extra RDMA read after each RDMA write to force data to be persistent and complete [6, 5, 15]. Undo logging, redo logging and copy-on-write (COW) are consistency mechanisms widely used in persistent memory systems [25, 20, 21, 19, 8]. There also exist RDMA-based NVM systems that ensure RDA through CPU involvement [32, 25, 36]. However, these solutions are inefficient due to the following problems.

High Network Overheads. The schemes that issue an extra RDMA read after each RDMA write incur an extra network round-trip per write, resulting in high network overheads.

High CPU Consumption. Undo/redo logging and COW require the remote CPU to control the operation sequence. However, CPU involvement negates the benefits of one-sided RDMA operations, which do not consume remote CPU cycles when accessing remote memory.

Double NVM Writes. Some CPU-involvement solutions first stage the written data in persistent log regions or buffers, and then apply them to the destination storage. These operations essentially require double NVM writes, consuming the limited NVM write endurance. NVM writes also incur higher latency than reads.

In order to address these problems, we propose Erda (Efficient Remote Data Atomicity), a zero-copy log-structured memory design. Erda guarantees RDA for one-sided RDMA writes to NVM without extra network round-trips, remote CPU consumption or double NVM writes. In Erda, an object with an embedded CRC checksum is the basic unit of access operations. For an update operation from a client to a server, the metadata in a hash table are modified with an 8-byte atomic write, and then the object is transferred directly from the client to the destination storage at the server, without a redundant buffer or server CPU involvement, thus reducing the amount of write operations by approximately 50% compared with undo/redo logging and COW. The incompleteness of a written object is detected by subsequent read requests via checksum verification. Once verification shows that the fetched object is incomplete, the client re-reads the previous version of the object, whose address information is also contained in the hash table, to ensure the consistency and atomicity of the fetched object. At the same time, the server is notified about the inconsistency and properly restores to a consistent version. Specifically, we make the following contributions:

RDA Solution with Low Overheads. We investigate Remote Data Atomicity (RDA), and find that existing RDMA-based NVM systems either overlook RDA or guarantee it at high cost: they require extra network round-trips, high CPU consumption or double NVM writes. Our proposed scheme guarantees RDA and improves upon existing RDA solutions with low overheads.

Cost-efficient Synergized Design. We propose a zero-copy log-structured memory design, named Erda, which guarantees RDA without extra network round-trips, remote CPU consumption or redundant copies. In Erda, clients directly write objects to the destination address at servers without buffering or copying. Subsequent read requests detect the incompleteness of fetched objects without client-server coordination.

Evaluation and Open-Source Code. We conduct an experimental evaluation to demonstrate the efficiency of Erda. Evaluation results show that, compared with the Read After Write and Redo Logging schemes, Erda significantly improves throughput and decreases latency, and reduces NVM writes by approximately 50%. The source code is released for public use at https://github.com/csXinxinLiu/Erda.

The rest of this paper is organized as follows. We present the background of RDMA networking and non-volatile memory in Section 2. Section 3 shows our design. Section 4 shows the implementation details of Erda. The experimental results are shown in Section 5. We present the related work in Section 6. Finally, we conclude this paper in Section 7.

2 Background

In this section, we present the background of remote direct memory access (RDMA) and non-volatile memory (NVM).

2.1 RDMA Networking

Remote direct memory access (RDMA) bypasses the kernel and supports zero memory copy, thus providing extremely high bandwidth and low latency for remote memory accesses [26]. RDMA has two kinds of primitives, i.e., one-sided and two-sided. One-sided primitives include RDMA read, RDMA write and atomic operations, which do not involve the remote CPU when accessing remote memory. In contrast, two-sided primitives, such as RDMA send and RDMA recv, are similar to socket programming: two-sided operations are served by the remote CPU, which must poll RDMA messages and process them.

RDMA is well known for its one-sided primitives, which provide higher bandwidth and lower latency than two-sided RDMA, especially when the remote server is busy [28, 18]. Furthermore, recent generations of RNICs, e.g., ConnectX-4 and ConnectX-5, make one-sided primitives increasingly fast and scalable [28].

2.2 Non-Volatile Memory

Non-volatile memory (NVM) technologies, such as 3D XPoint [1] and PCM [29], offer non-volatility, byte-addressability, high density and DRAM-class latency. Hence, NVMs are promising candidates for next-generation main memory and caches [34], as well as complements to current external storage such as flash-based SSDs [17].

However, NVMs built from different materials share some common limitations. First, NVMs have asymmetric write/read properties: writes consume more energy and incur higher latency than reads [35, 34, 38]. Second, NVMs generally suffer from limited write endurance [37, 34, 24]. Therefore, it is necessary to reduce the amount of write operations in NVM systems.

NVMs are non-volatile: they retain their contents across crashes and power failures. Therefore, NVM systems must employ consistency mechanisms to avoid data corruption. It is well recognized that the failure-atomicity unit for NVM is 8 bytes, because byte-addressable NVMs are accessed through the memory bus [16, 23, 33]. If the size of the updated data is larger than the 8-byte failure-atomic write granularity, existing mechanisms such as undo/redo logging and copy-on-write (COW) are employed to maintain consistency [25, 20, 21, 19, 8]. Undo logging first appends the old data to an undo log and then updates in place. Redo logging first appends the new data to a redo log and then updates the old data. COW creates a copy and then performs updates on the copy.
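The double-write cost of these mechanisms can be seen in a minimal sketch (illustrative Python, not the paper's code): undo logging must persist the old data to a log before updating in place, so every update costs two NVM writes, and recovery rolls back any uncommitted update.

```python
# Illustrative sketch (not the paper's code): NVM is modeled as a dict and
# the undo log as a list; every update incurs two NVM writes.
def undo_log_update(nvm: dict, undo_log: list, addr, new_data):
    undo_log.append((addr, nvm.get(addr)))  # 1st NVM write: persist old data
    nvm[addr] = new_data                    # 2nd NVM write: update in place
    undo_log.pop()                          # commit: drop the undo record

def recover(nvm: dict, undo_log: list):
    # Roll back any update that crashed before commit.
    while undo_log:
        addr, old = undo_log.pop()
        nvm[addr] = old
```

Redo logging mirrors this cost (new data to the log first, then to the destination), which is exactly the "double NVM writes" overhead Erda avoids.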

Figure 1: The overall architecture of Erda.

2.3 The Synergization of RDMA and NVM

In recent years, synergizing RDMA and NVM has become popular and important in order to obtain the salient features of both technologies. However, RDMA hardware does not guarantee persistence for one-sided RDMA writes to NVM [32]. Therefore, if data are transferred directly to remote NVM but part of the data is lost in the volatile NIC cache due to a failure, the data become incomplete and invalid.

Currently, providing persistence guarantee typically requires CPU participation or extra network round-trips [6]. We strive for guaranteeing the persistence and atomicity of remote direct access with low overheads.

3 Design

3.1 Erda Overview

Erda is a zero-copy log-structured memory design that supports write-optimized Remote Data Atomicity (RDA) under RDMA and NVM scenarios. Figure 1 shows the overall architecture of Erda. Specifically, data and metadata are persistent in a server's NVM. Data are stored in a log-structured manner following an array of head nodes. The append-only log always retains an old version of the updated data. Furthermore, a built-in checksum is used to verify the integrity of data without client-server coordination. Metadata stored in a hash table are used to index the data. We adopt a flexible flip bit and an 8-byte atomic write in the metadata to avoid redundant NVM writes and to guarantee metadata atomicity. Clients issue read/write requests to the server over RDMA, and write data directly to the destination address (the log region) at servers without buffering or copying. In the following, we present the structures of data and metadata, the access workflow, the write-optimized design, consistency guarantees, read-write competition and the log cleaning scheme in detail.

Figure 2: The object structure.
Figure 3: The structure of the deleted object.
Figure 4: The structure of a log region where objects are stored.

3.2 The Structures of Data and Metadata

3.2.1 The Structure of a Normal/Deleted Object

An object is the basic unit of one access and can be interpreted as a key-value pair with a checksum. As shown in Figure 2, an object consists of a delete tag, a CRC checksum and a key-value pair. Specifically, the delete tag indicates whether it is a normal object or a deleted one, as shown in Figure 3. The CRC checksum, computed over the entire object, is used to check the integrity and validity of the object. The last field stores the key-value pair.

As shown in Figure 3, since Erda is a log-structured approach that appends all updates to an append-only log, we need a deleted-object structure to indicate that an object has been deleted. A deleted object consists of a set delete tag, the CRC checksum and the object key. We do not store the value in the deleted-object structure, which saves storage space.
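A hedged sketch of this object format (the field widths, byte order and CRC variant are our assumptions, since the paper elides them): the CRC is computed over the rest of the object, and a deleted object sets the tag and omits the value.

```python
import struct
import zlib

def pack_object(key, value):
    """Layout sketch: [1B delete tag][4B CRC32][key][value]; a deleted
    object (value=None) sets the tag and omits the value entirely."""
    tag = 1 if value is None else 0
    payload = bytes([tag]) + key + (value or b"")
    return struct.pack("<BI", tag, zlib.crc32(payload)) + key + (value or b"")

def verify_object(obj):
    """Recompute the CRC over everything except the checksum field."""
    tag, crc = struct.unpack_from("<BI", obj)
    return zlib.crc32(bytes([tag]) + obj[5:]) == crc
```

A torn write (e.g., the tail of the object lost in the NIC cache) fails verification, which is exactly how readers detect incompleteness without server help.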

3.2.2 The Structure of a Log Region

We store and manage objects in each server in a log-structured manner. Figure 4 shows the structure of a log region where objects are stored. Specifically, we use a head array at fixed addresses to link the log data, and the Head ID distinguishes different head nodes. Each head links a continuous memory region, which is divided into fixed-size segments. For scalability, when a larger memory region is needed, we allocate and register another continuous memory region of the same size and link it to the first one following the same head, as shown in Figure 5.

Figure 5: Register memory for scalability.
Figure 6: The metadata in a hash table.

3.2.3 Metadata in a Hash Table

We adopt a flat namespace in a hash table to look up objects. As shown in Figure 6, the entries in the hash table store the metadata of objects. The entry corresponding to an object stores the object key, the head ID and an 8-byte atomic write region. The latter contains a new-tag bit, which indicates which of the two following offset fields holds the "new" data (the latest address information of the object) and which holds the "old" data (the previous address information), the two offset fields themselves, and reserved bits for future use. All the information in this region is updated with a single 8-byte atomic write.
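The 8-byte region can be modeled as a single 64-bit word. The field widths below (1 tag bit, two 28-bit offsets, 7 reserved low bits) are our illustrative assumption, since the paper elides them:

```python
# Hedged sketch of the 8-byte atomic write region as one 64-bit word:
# [1b new tag | 28b offset field 0 | 28b offset field 1 | 7b reserved].
OFF_BITS = 28
OFF_MASK = (1 << OFF_BITS) - 1

def pack_region(new_tag, off0, off1):
    """Pack the region; in NVM this word is written with one atomic store."""
    return (new_tag << 63) | (off0 << 35) | (off1 << 7)

def offsets(region):
    """Return (new_offset, old_offset) as selected by the tag bit;
    which tag value selects which field is our assumption."""
    tag = region >> 63
    off0 = (region >> 35) & OFF_MASK
    off1 = (region >> 7) & OFF_MASK
    return (off0, off1) if tag == 0 else (off1, off0)
```

Because the whole region fits in 8 bytes, swapping "new" and "old" is a single failure-atomic NVM write, with no logging needed for the metadata itself.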

Figure 7: The procedures of reading and writing objects using one-sided RDMA.

3.3 Data Access Mode using RDMA

Figure 7 shows the procedures of reading and writing data (objects) using one-sided RDMA. To allow RDMA operations from a client, the server registers the memory regions of the metadata hash table and the log regions with the RNIC (RDMA-enabled NIC). With the corresponding remote registration keys, the client can then issue RDMA operations to these memory regions. It is worth noting that once the connection is established, the server sends the client the head array containing the mapping between head IDs and pointers.

We first describe the procedure of an RDMA read from a client to a server. After the connection is established, the client uses an RDMA read, keyed by the requested object key, to directly read the corresponding entry of the hash table at the server. After verifying the received object key, the client queries its locally cached head array for the pointer corresponding to the received head ID. Finally, using the 8-byte atomic write region and the pointer, the client directly fetches the requested object with a second RDMA read. Once the client successfully verifies the object's checksum, the read operation finishes.
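The two-read client path above can be sketched as follows (the callback-based RDMA interface and all field names are illustrative assumptions, not Erda's API):

```python
def client_read(read_entry, read_obj, head_array, key, checksum_ok):
    """Two one-sided RDMA reads: first the hash table entry, then the object.
    read_entry/read_obj stand in for one-sided RDMA read verbs."""
    entry = read_entry(key)                     # RDMA read 1: hash table entry
    assert entry["key"] == key                  # verify the received object key
    base = head_array[entry["head_id"]]         # locally cached head array
    obj = read_obj(base + entry["new_offset"])  # RDMA read 2: the object itself
    assert checksum_ok(obj)                     # read completes once CRC holds
    return obj
```

The server CPU never runs in this path; only the client and the server's RNIC participate.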

For the procedure of an RDMA write from a client to a server, the client sends a write request using RDMA write_with_imm, with the client's identifier attached in the immediate data field. The server updates the corresponding hash table entry and returns the last written address of the log, which the server maintains and updates. With the returned address, the client posts a one-sided RDMA write to write the data directly into the log region of the remote server without involving the server's CPU; the server thus retains higher processing capacity and avoids redundant copies.

In the log region, an object never spans two segments. When an object would exceed the current segment, the server advances the last written address of the log to the beginning of the next segment. For scalability, as described in Section 3.2.2, when a larger memory region is needed, we allocate and register another continuous memory region and link it to the first one following the same head.

4 Implementation Details

4.1 Write-Optimized Design for NVM

The write-optimized design consists of two components:

Zero-Copy Memory Design. We implement a zero-copy log-structured memory design. All data are transferred directly from clients to the log region at servers via RDMA writes; owing to out-of-place updates, we do not need to stage the data in buffers as redo logging does. However, this zero-copy design may introduce consistency issues such as partial writes. The corresponding consistency detection and recovery mechanisms are described in Section 4.2.

Flexible Flip Bit. We adopt a flip bit, named "New Tag", in the hash table to indicate which of the two offset fields holds the new offset, thus avoiding redundant NVM writes. When a server receives an update request, it locates the hash entry according to the hash value of the requested object key. The modification of the hash entry consists of two steps. First, flip the "New Tag". Second, write the last written address of the log (i.e., the offset) into the offset field selected by the flipped "New Tag": one tag value selects the first field, the other selects the second. As shown in the lower right part of Figure 7, the server flips the "New Tag" in the 8-byte atomic write region of metadata A and writes the address into the offset field now marked as new. The parts with unchanged contents skip the bit-programming action and are not rewritten, using data-comparison write (DCW) [31].
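The two-step entry modification can be sketched as follows (the tuple stands in for the packed 8-byte word, which in real NVM would be written back with a single atomic 8-byte store; the tag-to-field mapping is our assumption):

```python
def update_entry(region, log_tail):
    """region = (new_tag, off0, off1). Flip the tag, then place the log
    tail in the offset field that the flipped tag marks as 'new'; the
    other field keeps the previous ('old') version's address."""
    new_tag, off0, off1 = region
    new_tag ^= 1                      # step 1: flip the "New Tag"
    if new_tag == 0:
        off0 = log_tail               # tag 0 -> first offset field is "new"
    else:
        off1 = log_tail               # tag 1 -> second offset field is "new"
    return (new_tag, off0, off1)      # written back as one atomic 8-byte store
```

Note that each update overwrites only the stale offset field, so the previous version's address always survives, which is what recovery relies on.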

4.2 Consistency Detection and Recovery

Erda is able to support consistency and atomicity of RDMA operations:

Out-of-Place Updates. We adopt a log-structured memory to prevent in-place updates, and always maintain an “old” version of the updated object (similar to an undo log).

CRC Checksum. We add a CRC checksum to each object, so clients can detect the incompleteness of a fetched object by verifying the checksum. Once the fetched object is found to be incomplete, the client issues another RDMA read to fetch the previous version of the object.

8-Byte Atomic Write. We leverage an 8-byte atomic write in the hash table entry, so an inconsistency can only occur when the metadata in the entry have been atomically updated but a failure occurs before the object data are fully written into the log. The 8-byte atomic write region also contains the address information of the "old" object version. When a failure occurs, the server can properly restore to the consistent "old" version.

Figure 8: If a failure occurs during a previous RDMA write operation, other clients detect the inconsistency when they access the incomplete object. These clients obtain a previous consistent version.

For example, if a client fails while updating an object via RDMA write, the latest version of the object in the server's log is incomplete. However, the server is unaware of the incomplete object, because the one-sided RDMA operation did not involve its CPU. As shown in Figure 8, when another client later accesses the object, it detects that the fetched object is incomplete by verifying the checksum. The client then issues another RDMA read to fetch the previous version of the object, based on the old offset in the 8-byte atomic write region it has already fetched. The client also informs the server to update the corresponding hash table entry (replacing the current new offset with the old offset), so that all subsequent accesses to the object are correct. Furthermore, once a failure at a server results in incomplete objects, the server checks the objects in the last segment following each head and updates the corresponding metadata in the hash table for consistency.
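The detection-and-recovery path can be sketched end to end with a toy model (the log is a dict keyed by offset, objects carry a 4-byte CRC prefix; all names and the format are illustrative assumptions):

```python
import zlib

def make_obj(value):
    """Toy object: 4-byte little-endian CRC32 prefix, then the value."""
    return zlib.crc32(value).to_bytes(4, "little") + value

def checksum_ok(obj):
    return zlib.crc32(obj[4:]) == int.from_bytes(obj[:4], "little")

def read_with_recovery(log, entry):
    """Fetch via the 'new' offset; on a bad checksum, restore the entry
    to the 'old' offset (modeling the server-side repair) and re-read."""
    obj = log[entry["new"]]
    if not checksum_ok(obj):
        entry["new"], entry["old"] = entry["old"], entry["new"]
        obj = log[entry["new"]]
    return obj[4:]
```

Because the old version is never overwritten in place, the fallback read is guaranteed to return a complete, consistent object.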

4.3 Read-Write Competition

After a client sends a write request to a server, the server reserves the corresponding object storage region in the continuous memory region and updates the last written address of the log. When the server receives another write request for the same head node, it returns the updated last written address. A specific storage region of an object is written by only one client, while each memory region may be read by many clients. Thus, there is no write-write competition. However, an RDMA write may compete with concurrent RDMA reads from other clients.

When performing a write operation, the modification of the hash table entry is atomic, as described in Section 4.2, and there are two read-write scenarios. First, the server has atomically modified an entry of the hash table after receiving a write request from a client, but the client has not yet completed the object write. Concurrent read operations from other clients then find that the object is invalid (by checking the checksum), or that the object is null because it has not been written yet. In these cases, the reading clients either fetch a previous version of the requested object using the old offset from the obtained hash table entry, or simply wait a moment and retry the same address. Second, a client has read the entry but not yet the corresponding object, while another client modifies the same entry and writes the updated object into the log. The read-write competition in this case does not lead to errors, because updates in our log-structured mechanism are out-of-place.

Figure 9: Log cleaning consists of two phases: log merging and replication.

4.4 Lock-Free Log Cleaning

Log cleaning reclaims free space in the append-only log by removing deleted objects and stale versions of objects. A server performs log cleaning while handling read/write requests concurrently. As shown in Figure 9, log cleaning consists of two phases: log merging and replication. We use an example to illustrate the process.

Figure 10: The 8-byte atomic write region in metadata before log cleaning.
Figure 11: The 8-byte atomic write region in metadata during log cleaning.
Figure 12: After completing log cleaning, Region 2 replaces Region 1.
Figure 13: When the log cleaning process is completed, the server flips the new tag, which means the address information in Region 2 is the latest version and will be accessed by clients.

When the occupied space following a head reaches a pre-defined threshold (Region 1 in Figure 9), the cleaner in the server allocates another continuous memory region (Region 2 in Figure 9) and informs all connected clients that the objects following this head will undergo log cleaning. After receiving the notification, clients can still read and write objects, but in a different way: they issue read/write requests using RDMA send. Furthermore, in the 8-byte atomic write region of the metadata, the server does not flip the new tag. Based on the new tag, the previous "new offset region" continues to store address information for Region 1, while the "old offset region" stores address information for Region 2, as shown in Figures 10 and 11. The cleaner starts log cleaning after informing the connected clients and waiting a maximum RTT, to avoid transmission delays. The cleaner likewise does not flip the new tag and merely updates the old offset region.

In the log merging phase, the cleaner performs a reverse scan from the last written address of the log at the beginning of log cleaning, since objects in the later part of the log are newer than those in the earlier part. When the cleaner first encounters an object (representing the latest version within the merging region), it writes the object to Region 2 and updates the corresponding old offset region in the entry. When the cleaner encounters the same object again (a stale version), it simply skips it. In addition, deleted objects are removed during the cleaning process. For read/write requests from clients, the server accesses the new offset region, which points into Region 1.
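The reverse scan of the merging phase can be sketched as follows (Region 1 is modeled as an append-ordered list of key-value pairs; `None` values stand for deleted objects; illustrative only):

```python
def merge_region(region1):
    """Reverse-scan Region 1, keep only the first-encountered (i.e. latest)
    version of each key, and drop stale versions and deleted objects."""
    seen, merged = set(), []
    for key, value in reversed(region1):
        if key in seen:
            continue                  # stale version: skip it
        seen.add(key)
        if value is not None:         # deleted object: remove entirely
            merged.append((key, value))
    merged.reverse()                  # restore append order for Region 2
    return merged
```

Scanning backwards makes the first hit on each key its latest version, so no version comparison is ever needed.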

When the reverse scan is completed, log cleaning moves on to the replication phase. The cleaner replicates the objects that clients wrote after the start of the log merging phase into Region 2, while the server handles read/write requests concurrently. Specifically, for a write request from a client, the server updates the old offset region of the entry (with an address in Region 2) and appends the new object to Region 2. The replication region in Region 2 is reserved for the cleaner. If the object to be replicated has already appeared in the subsequently written region, the entry (the old offset region pointing into Region 2) is not changed, since that offset is already the latest version. For a read request from a client, if the Region 2 offset in the corresponding entry is larger than the offset at the end of the reserved replication region, the server reads the address in the old offset region (the latest version, in Region 2); otherwise, the server reads the address in the new offset region (in Region 1), since some data in Region 1 have not yet been replicated into Region 2.

When all the objects written after the start of the log merging phase have been replicated from Region 1 into Region 2, the log cleaning process is complete. At this point, the server changes the pointer of the corresponding head from Region 1 to Region 2, as shown in Figure 12. Then, the server flips the new tags in the hash table entries of all the objects in Region 2 (Figure 13), returns the new pointer to the connected clients and informs them that log cleaning has finished. After that, clients return to the original way of reading and writing objects.

5 Performance Evaluation and Analysis

We examine the performance of Erda in terms of multiple metrics, including the number of NVM writes, throughput and latency.

Figure 14: The latency of YCSB-C (100% read) with different value sizes of the key-value pair.
Figure 15: The latency of YCSB-B (95% read, 5% write) with different value sizes of the key-value pair.
Figure 16: The latency of YCSB-A (50% read, 50% write) with different value sizes of the key-value pair.
Figure 17: The latency of update-only workload (100% write) with different value sizes of the key-value pair.

5.1 Experimental Setup

Hardware and configurations. Our experiments run on servers, each of which contains two Intel Xeon CPUs and DDR RAM. One server is also equipped with a Mellanox ConnectX-3 InfiniBand network adapter and runs CentOS with the MLNX_OFED_LINUX InfiniBand driver. As real NVM devices were not fully available, we adopt a well-recognized simulation method that adds extra write latency to DRAM to simulate NVM [11, 30, 22, 13, 27]. By default, we add extra write latency following [27].

Workloads. We use the YCSB benchmark [3] to generate four workloads that follow a Zipfian distribution: (1) a read-only workload (YCSB-C) with 100% reads; (2) a read-mostly workload (YCSB-B) with 95% reads and 5% writes; (3) an update-heavy workload (YCSB-A) with 50% reads and 50% writes; and (4) an update-only workload with 100% writes.

Comparisons. We compare Erda with two consistency schemes: Redo Logging (a CPU-involvement scheme) [21, 20] and Read After Write (a network-dominant scheme) [6, 5]. In the Redo Logging scheme, a client sends a write request to the redo log region of the server using RDMA send; the server verifies the integrity of the message in the redo log and applies the write request asynchronously to the destination storage. When a client issues an RDMA send to request an object value, the server first looks for the object in the redo log. If the requested object is not in the redo log, the server locates the destination address from the object key through a hash table, then reads the object and returns it to the client. In the Read After Write scheme, to write an object, a client first sends a request to the server and obtains the address to be written in the ring buffers. The client then uses an RDMA write to push the object into the ring buffers, followed by an RDMA read to force the object to be persistent and complete in the ring buffers. The server CPU asynchronously polls these operations from the ring buffers and applies them to the destination storage. The read procedure follows that of the Redo Logging scheme.

Redo Logging, Read After Write and Erda all leverage the hopscotch hashing algorithm [10] to index objects. In hopscotch hashing, a key-value pair is located in a small contiguous region of memory, whereas in cuckoo hashing [4] it resides in one of several disjoint regions [7].

5.2 Latency

Figure 18: The throughput of YCSB-C (100% read) with different thread numbers.
Figure 19: The throughput of YCSB-B (95% read, 5% write) with different thread numbers.
Figure 20: The throughput of YCSB-A (50% read, 50% write) with different thread numbers.
Figure 21: The throughput of update-only workload (100% write) with different thread numbers.

As shown in Figures 14 – 17, we compare the latency of Erda with those of Redo Logging and Read After Write under the four YCSB workloads, varying the value size of the key-value pair. In Figures 14 and 15, where read operations dominate, Erda performs much better than Redo Logging and Read After Write. In those two schemes, clients send read requests to a server using two-sided RDMA send; upon receiving a request, the server must first look for the requested object in the redo log and, if it is not there, read the object from the destination address before returning it to the client. In Erda, clients perform reads with two one-sided RDMA reads (one for the corresponding hash table entry at the server, and one to directly fetch the requested object) without involving the server's CPU. Figure 16 shows the latency of YCSB-A (50% read, 50% write) with different value sizes, where Erda again achieves the lowest average latency. For the update-only workload (100% write) shown in Figure 17, Erda still outperforms the other two schemes, although its benefits are smaller than for the other three workloads.

5.3 Throughput

Figures 18 – 21 show the throughputs of Erda, Redo Logging and Read After Write with four YCSB workloads and different thread numbers. From Figure 18, we observe that the throughput of Erda grows approximately linearly with the increasing number of threads, while those of Redo Logging and Read After Write do not. The main reason is that YCSB-C is a read-only workload, and the read procedure of Erda from clients to servers does not involve server CPUs, since it uses two one-sided RDMA reads, while the read procedures of Redo Logging and Read After Write need CPU involvement. Hence the throughput of Erda is not affected by CPU consumption as the number of threads increases. Specifically, the average throughputs of YCSB-C for Erda, Redo Logging and Read After Write are KOp/s, KOp/s and KOp/s, respectively. As shown in Figure 19, the average throughputs of YCSB-B for Erda, Redo Logging and Read After Write are KOp/s, KOp/s and KOp/s, respectively. For the YCSB-A workload shown in Figure 20, the average throughputs of Erda, Redo Logging and Read After Write are KOp/s, KOp/s and KOp/s, respectively. However, for the update-only workload (100% write) shown in Figure 21, the average throughputs of Erda, Redo Logging and Read After Write are approximately equal.

5.4 CPU Utilization

Figure 22: The normalized CPU cost when the value size of the key-value pair is Bytes.
Figure 23: The normalized CPU cost when the value size of the key-value pair is Bytes.
Figure 24: The normalized CPU cost when the value size of the key-value pair is Bytes.
Figure 25: The normalized CPU cost when the value size of the key-value pair is Bytes.

We use the “top” command in Linux to measure CPU utilization, and show the normalized CPU costs with different workloads and value sizes of the key-value pair in Figures 22 – 25. For the YCSB-C workload (100% read), the CPU cost of Erda is zero, since the read procedure of Erda does not involve server CPUs. Hence the normalized CPU costs of both Redo Logging and Read After Write are positive infinity. For the same reason, for the YCSB-B workload (95% read), the normalized CPU costs of both Redo Logging and Read After Write are much higher than that of Erda. Specifically, the normalized CPU costs of Redo Logging and Read After Write for the YCSB-B workload are on average and times higher than the cost of Erda, respectively. For the YCSB-A workload (50% read and 50% write), the normalized CPU costs of Redo Logging and Read After Write are on average and times higher than that of Erda, respectively. However, for the update-only workload (100% write), the benefits of using Erda are relatively small compared to those with the other three workloads. The normalized CPU costs of Redo Logging and Read After Write with the update-only workload are on average and times higher than that of Erda, respectively.

5.5 Log Cleaning

Figure 26: The average latencies with four YCSB workloads when the value size of the key-value pair is Bytes.

As described in Section 4.4, a server performs log cleaning and handles read/write requests concurrently. We evaluate the impact of log cleaning on concurrent read/write requests. Figure 26 shows the average latencies of read/write requests under the normal cases of Erda and during log cleaning, respectively. We use four YCSB workloads, and the value size of the key-value pair is Bytes. From Figure 26, we observe that the highest average latency of read/write requests during log cleaning comes from the update-only workload. However, for the update-only workload, the average latency during log cleaning is close to that under the normal cases of Erda. For the YCSB-C workload (100% read), the average latency of read/write requests during log cleaning is worse than that under the normal cases of Erda. The main reason is that the normal read procedure of Erda does not involve server CPUs by using one-sided RDMA reads, while the read procedure during log cleaning uses two-sided RDMA sends (similar to Redo Logging and Read After Write).

NVM Writes (Bytes)   Create            Update   Delete
Erda                 Size(key)+10+N    9+N      Size(key)+9
Redo Logging         Size(key)+12+2N   4+2N     Size(key)+8
Read After Write     Size(key)+12+2N   4+2N     Size(key)+8

Table 1: The number of NVM writes (in Bytes) for different operations. N is the size of one key-value pair. Size(key) is the size of the key.

5.6 The Number of NVM Writes

Table 1 shows the number of NVM writes in create, update and delete operations, where N is the size of one key-value pair and Size(key) is the size of the key. In Erda, a create operation needs to first write metadata in an entry of the hash table in a server. Specifically, the server writes the object key, a head ID (1 Byte), and a new tag and an offset (8 Bytes), the latter belonging to an 8-byte atomic write region in the metadata. Then a client directly writes the object (N+1 Bytes) in a log region. Therefore, the number of NVM writes is Size(key)+10+N Bytes. For an update operation in Erda, the server rewrites a new tag and an offset (8 Bytes) in the metadata, and then a client writes the updated object (N+1 Bytes) in a log region. Therefore, the number of NVM writes is 9+N Bytes. A delete operation in Erda is similar to an update, except that the delete object written in the log region is Size(key)+1 Bytes. Therefore, the number of NVM writes is Size(key)+9 Bytes.

The numbers of NVM writes in Redo Logging and Read After Write are the same. For a create operation, a server writes the metadata in the hash table with a key and an address (8 Bytes). Then the key-value pair and a CRC checksum (4 Bytes) are written in the ring buffers (Read After Write) or redo log regions (Redo Logging). At last, the server verifies the integrity of the key-value pair, and then writes the key-value pair to the destination address. Therefore, the number of NVM writes is Size(key)+12+2N Bytes. For an update operation, a server does not update the metadata in the hash table, and writing the key-value pair to the destination address follows the same steps as the create. Therefore, the number of NVM writes is 4+2N Bytes. For a delete operation, a server resets the metadata in the hash table, but does not write data at the destination address of the key-value pair. Therefore, the number of NVM writes is Size(key)+8 Bytes.

In summary, compared with Redo Logging and Read After Write, Erda reduces NVM writes approximately by 50%, while significantly decreasing latency, improving throughput and reducing CPU consumption.

6 Related Work

Consistency guarantee for RDMA-based NVM. Currently, providing persistence and consistency guarantees for RDMA writes to NVM typically requires extra network round-trips or CPU participation [32, 6]. For example, a general method for providing these guarantees is to follow RDMA write(s) with an RDMA read to force client data into the Asynchronous DRAM Refresh (ADR) domain, or to follow RDMA write(s) with an RDMA send to obtain a local callback and persistency [6, 5]. HyperLoop [15] offloads replicated transactions to RDMA NICs by programming RNICs in multi-tenant storage systems, with NVM as the storage medium. HyperLoop designs a new RDMA FLUSH (gFLUSH) primitive to support durability at the NIC level. However, gFLUSH essentially leverages an extra RDMA read operation, thus increasing network round-trips. Moreover, HyperLoop does not consider the NVM lifetime (the write operations are not optimized). If a failure occurs during a transaction, the transaction is abandoned without recovery. Orion [32], a distributed file system for NVMM-based storage, ensures persistence by CPU involvement, thus providing remote data atomicity. DSPM [25] proposes a kernel-level distributed persistent memory system that integrates distributed memory caching and data replication techniques. DSPM guarantees crash consistency both within a single node and across distributed nodes with CPU involvement. Mojim [36] uses a primary-backup protocol to replicate PM data through two-sided RDMA. It provides consistency and durability guarantees with CPU participation. Unlike existing schemes, we provide persistence and consistency guarantees for one-sided RDMA writes to NVM without extra network round-trips or remote CPU consumption. Moreover, compared with existing consistency mechanisms such as undo/redo logging and copy-on-write, we also reduce the number of NVM writes.

System Optimizations for RDMA-based NVM. Octopus [18] is an RDMA-enabled persistent memory system, which proposes local logging with remote in-place update for crash consistency. However, this solution fails to ensure remote data atomicity. Specifically, in the Collect-Dispatch transaction of Octopus, a coordinator uses one-sided RDMA writes to update the write sets in participants, and thus the participants are unaware of incomplete data without their own CPU involvement. Persistence Parallelism Optimization [12] improves the parallelism of maintaining the write-request orders in the memory bus and the RDMA network. NVFS [14] is an optimized HDFS with NVM and RDMA. It re-designs HDFS I/O with memory semantics to exploit the byte-addressability of NVM. ScaleRPC [2] is an efficient RPC primitive to alleviate resource contention and achieve high scalability. It introduces connection grouping and virtualizes the mapping with one-sided RDMA verbs on RC (reliable connection). However, these RDMA-based NVM systems do not provide solutions for remote data atomicity. Unlike them, our proposed Erda guarantees remote data atomicity without extra network round-trips, remote CPU consumption or redundant copies.

7 Conclusion

In order to address the problems of high network overheads, high CPU consumption and double NVM writes when ensuring remote data atomicity in RDMA and NVM scenarios, we propose a zero-copy log-structured memory design, called Erda. Erda guarantees remote data atomicity without extra network round-trips, remote CPU consumption or redundant copies. It transfers data directly to the destination address without buffering or copying, and guarantees consistency and atomicity by leveraging out-of-place updates, a CRC checksum and 8-byte atomic writes. Evaluation results demonstrate that Erda reduces NVM writes approximately by 50%, as well as significantly reduces CPU cost, decreases latency and improves throughput. We have released the source code for public use at https://github.com/csXinxinLiu/Erda.

References

  • [1] 3d xpoint. http://bit.ly/2WLVTZT.
  • [2] Y. Chen, Y. Lu, and J. Shu. Scalable rdma rpc on reliable connection with efficient resource sharing. In EuroSys, 2019.
  • [3] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears. Benchmarking cloud serving systems with ycsb. In SoCC, 2010.
  • [4] Cuckoo hashing. http://bit.ly/2ZwUTua.
  • [5] C. Douglas. RDMA with byte-addressable PM, RDMA Write Semantics to Remote Persistent Memory, An Intel Perspective when utilizing Intel HW. http://bit.ly/2J3fgsH.
  • [6] C. Douglas. RDMA with PMEM, Software mechanisms for enabling access to remote persistent memory. http://bit.ly/2N4sOsO.
  • [7] A. Dragojević, D. Narayanan, M. Castro, and O. Hodson. Farm: Fast remote memory. In NSDI, 2014.
  • [8] S. R. Dulloor, S. Kumar, A. Keshavamurthy, P. Lantz, D. Reddy, R. Sankaran, and J. Jackson. System software for persistent memory. In EuroSys, 2014.
  • [9] C. Guo, H. Wu, Z. Deng, G. Soni, J. Ye, J. Padhye, and M. Lipshteyn. Rdma over commodity ethernet at scale. In SIGCOMM, 2016.
  • [10] Hopscotch hashing. http://bit.ly/2IOagrt.
  • [11] Q. Hu, J. Ren, A. Badam, J. Shu, and T. Moscibroda. Log-structured non-volatile main memory. In USENIX ATC, 2017.
  • [12] X. Hu, M. Ogleari, J. Zhao, S. Li, A. Basak, and Y. Xie. Persistence parallelism optimization: A holistic approach from memory bus to rdma network. In MICRO, 2018.
  • [13] J. Huang, K. Schwan, and M. K. Qureshi. Nvram-aware logging in transaction systems. Proceedings of the VLDB Endowment, 8(4):389–400, 2014.
  • [14] N. S. Islam, M. Wasi-ur Rahman, X. Lu, and D. K. Panda. High performance design for hdfs with byte-addressability of nvm and rdma. In ICS, 2016.
  • [15] D. Kim, A. Memaripour, A. Badam, Y. Zhu, H. H. Liu, J. Padhye, S. Raindel, S. Swanson, V. Sekar, and S. Seshan. Hyperloop: group-based nic-offloading to accelerate replicated transactions in multi-tenant storage systems. In SIGCOMM, 2018.
  • [16] S. K. Lee, K. H. Lim, H. Song, B. Nam, and S. H. Noh. WORT: Write optimal radix tree for persistent memory storage systems. In FAST, 2017.
  • [17] D. Liu, K. Zhong, T. Wang, Y. Wang, Z. Shao, E. H.-M. Sha, and J. Xue. Durable address translation in pcm-based flash storage systems. IEEE Transactions on Parallel and Distributed Systems, 28(2):475–490, 2017.
  • [18] Y. Lu, J. Shu, Y. Chen, and T. Li. Octopus: an rdma-enabled distributed persistent memory file system. In USENIX ATC, 2017.
  • [19] M. Nam, H. Cha, Y.-r. Choi, S. H. Noh, and B. Nam. Write-optimized dynamic hashing for persistent memory. In FAST, 2019.
  • [20] T. Nguyen and D. Wentzlaff. Picl: A software-transparent, persistent cache log for nonvolatile main memory. In MICRO, 2018.
  • [21] M. A. Ogleari, E. L. Miller, and J. Zhao. Steal but no force: Efficient hardware undo+redo logging for persistent memory systems. In HPCA, 2018.
  • [22] J. Ou, J. Shu, and Y. Lu. A high performance file system for non-volatile main memory. In EuroSys, 2016.
  • [23] I. Oukid, J. Lasperas, A. Nica, T. Willhalm, and W. Lehner. Fptree: A hybrid scm-dram persistent and concurrent b-tree for storage class memory. In SIGMOD, 2016.
  • [24] M. K. Qureshi, J. Karidis, M. Franceschini, V. Srinivasan, L. Lastras, and B. Abali. Enhancing lifetime and security of pcm-based main memory with start-gap wear leveling. In MICRO, 2009.
  • [25] Y. Shan, S.-Y. Tsai, and Y. Zhang. Distributed shared persistent memory. In SOCC, 2017.
  • [26] S.-Y. Tsai and Y. Zhang. Lite kernel rdma support for datacenter applications. In SOSP, 2017.
  • [27] H. Volos, A. J. Tack, and M. M. Swift. Mnemosyne: Lightweight persistent memory. In ASPLOS, 2011.
  • [28] X. Wei, Z. Dong, R. Chen, and H. Chen. Deconstructing rdma-enabled distributed transactions: Hybrid is better! In OSDI, 2018.
  • [29] H.-S. P. Wong, S. Raoux, S. Kim, J. Liang, J. P. Reifenberg, B. Rajendran, M. Asheghi, and K. E. Goodson. Phase change memory. Proceedings of the IEEE, 98(12):2201–2227, 2010.
  • [30] F. Xia, D. Jiang, J. Xiong, and N. Sun. Hikv: a hybrid index key-value store for dram-nvm memory systems. In USENIX ATC, 2017.
  • [31] B.-D. Yang, J.-E. Lee, J.-S. Kim, J. Cho, S.-Y. Lee, and B.-G. Yu. A low power phase-change random access memory using a data-comparison write scheme. In ISCAS, 2007.
  • [32] J. Yang, J. Izraelevitz, and S. Swanson. Orion: A distributed file system for non-volatile main memory and rdma-capable networks. In FAST, 2019.
  • [33] J. Yang, Q. Wei, C. Chen, C. Wang, K. L. Yong, and B. He. Nv-tree: reducing consistency cost for nvm-based single level systems. In FAST, 2015.
  • [34] J. J. Yang, D. B. Strukov, and D. R. Stewart. Memristive devices for computing. Nature nanotechnology, 8(1):13, 2013.
  • [35] J. Yue and Y. Zhu. Accelerating write by exploiting pcm asymmetries. In HPCA, 2013.
  • [36] Y. Zhang, J. Yang, A. Memaripour, and S. Swanson. Mojim: A reliable and highly-available non-volatile memory system. In ASPLOS, 2015.
  • [37] P. Zhou, B. Zhao, J. Yang, and Y. Zhang. A durable and energy efficient main memory using phase change memory technology. In ACM SIGARCH computer architecture news, volume 37, pages 14–23. ACM, 2009.
  • [38] P. Zuo and Y. Hua. A write-friendly and cache-optimized hashing scheme for non-volatile memory systems. IEEE Transactions on Parallel and Distributed Systems, 29(5):985–998, 2018.
  • [39] P. Zuo, Y. Hua, and J. Wu. Write-optimized and high-performance hashing index scheme for persistent memory. In OSDI, 2018.