DiPETrans: A Framework for Distributed Parallel Execution of Transactions of Blocks in Blockchain

In most modern blockchains, transactions are executed serially by both miners and validators, and the PoW is also computed serially. Serial execution limits system throughput, increases transaction acceptance latency, and fails to exploit modern multi-core resources. In this work, we aim to increase throughput by executing transactions in parallel using a static analysis based technique. We propose DiPETrans, a framework for the distributed parallel execution of block transactions. In DiPETrans, trusted peers in the blockchain network form a community to execute transactions and to find the PoW in parallel. The community follows a master-slave approach. The core idea is that the master analyzes the transactions using static analysis, creates groups (shards) of non-conflicting transactions, and distributes the shards to workers (community members) for parallel execution. After transaction execution, the community's compute power is used to find the PoW in parallel. On successfully creating a block, the master broadcasts it to the other peers in the network for validation. On receiving a block, validators re-execute its transactions, either in parallel (if part of a community) or serially (solo validators). If they reach the same state as shared by the miner, they accept the block; otherwise they reject it. We performed experiments with historical data from the Ethereum blockchain and achieved linear speedup for transaction execution and end-to-end block creation with up to 5 workers in the community. Varying the number of transactions per block from 100 to 500, we obtained a maximum speedup of 2.18X for the miner, 2.04X for the information-sharing validator, and 2.02X for the validator without information sharing. DiPETrans is the first framework of its kind, and we will continue evolving it to provide better performance.


I Introduction

According to a report published by the World Economic Forum (WEF), based on a survey on the future of blockchain technology, a substantial share of the Global Gross Domestic Product (GDP) is predicted to be stored on the blockchain in the coming years [1]. Furthermore, the blockchain capital market, already worth billions of dollars in 2016, is projected to grow at a high annual rate through 2025 [2, 3]. Many well-known information technology vendors, governments across the world, Internet giants, and banks are investing in blockchain to accelerate research and make distributed ledger technology versatile [2].

A blockchain is a distributed, decentralized database: a secure, tamper-proof, publicly accessible collection of records organized as a chain of blocks [4, 5, 6, 7, 8]. It maintains a distributed global state of transactions in the absence of a trusted central authority. Due to its usefulness, it has gained wide interest in both industry and academia.

A blockchain consists of nodes or peers maintained in a peer-to-peer manner. A node is known as a miner when it proposes a block to be added to the blockchain, and as a validator when it validates a block proposed by a miner. A block consists of a set of transactions, a timestamp, a block id, a nonce, the coinbase (miner) address, the previous block hash, its own hash, and other relevant information. Essentially, a miner proposes a block and the remaining peers of the network validate it; based on majority consensus, the block is then added to the blockchain. Normally, a full copy of the blockchain is stored on every node of the system. Clients (also known as users) external to the system use the services of the blockchain by sending requests to its nodes.

Bitcoin [4], proposed by Satoshi Nakamoto, was the first blockchain system and remains the most popular to date. It is a highly secure cryptocurrency system in which users need not trust anyone, and there is no central controlling agency like in the present-day banking system. Ethereum [7] is another popular blockchain currently in use and provides various services beyond cryptocurrency. Compared to Bitcoin, Ethereum supports user-defined programs (scripts), called smart contracts [9], to offer complex services.

Smart contracts in Ethereum are written in Solidity [9], a Turing-complete language. These contracts automatically provide complex services using the terms and conditions pre-recorded in the contract, without the intervention of a trusted third party. In general, a smart contract is like an object in object-oriented programming languages, consisting of methods and data (state) [5, 9]. A client request, aka a transaction, invokes these methods with corresponding input parameters. A transaction can be initiated by a client (user) or a peer, and is executed by all the peers.

A client wishing to use the services of the blockchain contacts a node of the system and sends a request, which is a transaction. A node, on receiving several such transactions from different clients or other peers, forms a block to be added to the blockchain. Such a node is said to be a block producer; in cryptocurrency-based blockchains such as Bitcoin and Ethereum, block producers are called miners as they can 'mine' new coins.

Drawback with Existing System: In existing blockchain systems, miners and validators execute transactions serially, and the PoW is also computed serially. Further, finding the PoW [10] is a highly compute-intensive, random process that requires a large amount of computation to create a new block. The PoW delays the network so that it can reach consensus and allows a miner to create a new block in its favor. The problem is that this high computation requirement makes it very difficult for a resource-constrained miner to compete for block creation and earn the incentive [11].

Also, in the current era of distributed and multi-core systems, sequential transaction execution results in poor throughput. Dickerson et al. [5] observed that transactions are executed serially in two different contexts: first by the miner while creating a block, and later by the validators, who re-execute them serially to validate the block. Serial execution of transactions leads to poor throughput. Moreover, the transactions of a block are executed by the miner and then re-executed many times over by the validators.

In addition to these problems, substantially high transaction fees, poor throughput (transactions/second), significant transaction acceptance latency, and limited computation capacity prevent widespread adoption [12, 13]. Hence, adding parallelism to the blockchain can achieve better efficiency and higher throughput.

Solution Approach: A few solutions have been proposed and used in the Bitcoin and Ethereum blockchains to mitigate these issues. In one such solution, several resource-constrained miners form a pool (community), known as a mining pool, to determine the PoW, and after block acceptance they share the incentive among themselves [11, 14, 15, 16, 17, 18, 19]. In the rest of the paper, we use pool and community interchangeably.

Other solutions [5, 20, 21, 22] suggest concurrent execution of transactions at runtime. This is done in two stages: first while proposing the block, and second while validating it. Concurrency helps achieve better speedup in creating the block and having it accepted, and hence increases the chance of a miner receiving its fees. However, it is not straightforward and requires a proper strategy so that a valid block is not rejected due to a false block rejection (FBR) error [22]. These techniques use Software Transactional Memory based runtime methods to execute transactions concurrently. The miner concurrently executes the block transactions and constructs a block graph alongside. The block graph records the dependencies between the transactions: vertices represent transactions and edges store the dependencies between them. Finally, the miner adds the block graph to the block to help the validators execute transactions concurrently and avoid FBR errors. During concurrent execution at a validator, an FBR error can easily occur if transaction dependencies are not recorded appropriately in the block graph.

In this work, our objective is to execute transactions in parallel based on a static analysis of the transactions, using a community of trusted nodes (workers) modeled as a master and slaves, and to find the PoW in parallel, so as to improve the performance of block creation and validation. We propose a framework for distributed parallel execution in a formal setting inspired by Ethereum [7]. We follow a static analysis based sharding technique to determine transaction dependencies. Hence, an FBR error cannot occur even if a validator does not use the information shared by the miner, since the dependencies derived by the static analysis remain the same. A validator can perform the static analysis before parallel execution, which makes the approach straightforward to adopt.

Unrelated transactions of the block are grouped into different shards (see Fig. 2), and the shards are assigned to different nodes of the community (see Fig. 1), which execute them independently in parallel. To our knowledge, this is the first work that uses static analysis to identify block transactions that can be executed in parallel, and combines this with the benefits associated with sharding and mining pools.

The major contributions of this paper are as follows:

  • We propose a framework, DiPETrans, for parallel execution of transactions at miners and validators based on transaction sharding. To the best of our knowledge, this is the first work to use static analysis based transaction sharding to execute block transactions in parallel within mining pools.

  • We implement two different approaches for the validator. In the first, known as the Sharing Validator, the miner includes information about the dependent transactions (shards) in the block to help the validators execute the block transactions deterministically and in parallel. In the second, the Default Validator, the miner does not include dependency information in the block, and validators infer the dependencies themselves.

  • We report experiments using 5,170,597 transactions from the Ethereum blockchain, executed using our DiPETrans framework to empirically validate the benefits of our techniques over traditional sequential execution. We achieve a maximum speedup of 2.18X for the miner, 2.02X for the validator without information sharing (i.e., the Default Validator), and 2.04X for the validator with information sharing (i.e., the Sharing Validator), for 100 to 500 transactions per block when using 6 workers (including the master) in the community.

The rest of the paper is organized as follows: Section II presents the background and related work. Section III describes the proposed DiPETrans architecture and methodology. Section IV consists of the experimental evaluation and results. Finally, we conclude with some future research directions in Section V.

II Background and Related Work

This section presents the background and an overview of existing techniques for the parallel execution of transactions. We first introduce the working of blockchain technology and then summarize the work on concurrent execution of smart contract transactions. We then present the work on mining pools and sharding techniques that is closest in spirit to this work.

Background: In most popular blockchain systems, such as Bitcoin and Ethereum, the transactions in a block are executed in an ordered manner, first by the miner and later by the validators [5]. When creating a block, a miner typically chooses transactions from a pending pool based on its preference, e.g., giving higher priority to transactions with higher fees. After selecting the transactions, the miner (1) serially executes them, (2) adds the final state of the system to the block, (3) finds the proof-of-work (PoW) [10], and (4) broadcasts the block to the other peers in the network for validation, in order to earn the reward. The PoW is the answer to a mathematical puzzle in which the miner tries to find a block hash smaller than a given difficulty target. This technique is used in both Bitcoin and Ethereum.
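
To make the puzzle concrete, the sketch below illustrates the brute-force nonce search described above. It is a minimal illustration only: std::hash stands in for the real cryptographic block hash (Keccak-256 in Ethereum), and a 64-bit target replaces the 256-bit difficulty target.

```cpp
// Minimal sketch of the PoW search. std::hash is a stand-in for a real
// cryptographic hash; a 64-bit target stands in for the 256-bit difficulty.
#include <cstdint>
#include <functional>
#include <optional>
#include <string>

// A block hash "meets the difficulty" if it is smaller than the target.
bool meetsTarget(uint64_t hash, uint64_t target) { return hash < target; }

// Brute-force search over nonces until a hash below the target is found.
std::optional<uint64_t> findNonce(const std::string& blockHeader,
                                  uint64_t target, uint64_t maxNonce) {
    std::hash<std::string> h;  // placeholder for the real block hash function
    for (uint64_t nonce = 0; nonce <= maxNonce; ++nonce) {
        uint64_t digest = h(blockHeader + std::to_string(nonce));
        if (meetsTarget(digest, target)) return nonce;  // PoW solved
    }
    return std::nullopt;  // no valid nonce in this range
}
```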

After receiving a block, a node validates its contents; such a node is called a validator. Thus, when one node is the block producer, every other node in the system acts as a validator, and the roles swap when another node mines the next block. The validators (1) re-execute the transactions of the received block serially, (2) verify that the final state they compute matches the final state provided by the miner in the block, and (3) also check that the miner solved the puzzle (PoW) correctly. The re-execution by the validators is done serially, in the same order as proposed by the miner, to reach consensus [5]. After validation, if the block is accepted by a majority (more than 50%) of the validators, it is added to the blockchain and the miner receives the incentive (in the case of Bitcoin and Ethereum).

Further, the blockchain is designed in such a way that it enforces a chronological order between blocks: every block added to the chain depends on the cryptographic hash of the previous block. This hash-based ordering makes it exponentially difficult to change earlier blocks. Making even a small change to an already accepted transaction or block requires recalculating the PoW of all subsequent blocks and acceptance by the majority of peers in the network. Also, if two blocks are proposed at the same time and added to the chain, they form a fork; to resolve forks, the branch with the longest chain is considered final. This keeps mining secure and maintains a global consensus based on processing power.
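
The following minimal sketch illustrates the hash chaining just described: each block records the hash of its predecessor, so altering an earlier block breaks every later link. The Block fields and the blockHash helper are illustrative, with std::hash again standing in for a cryptographic hash.

```cpp
// Illustrative hash chaining: each block points to the hash of the block
// before it, so any change to an earlier block invalidates all later links.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Block {
    uint64_t prevHash;    // hash of the previous block
    std::string payload;  // transactions, timestamp, nonce, etc. (flattened)
};

uint64_t blockHash(const Block& b) {
    return std::hash<std::string>{}(std::to_string(b.prevHash) + b.payload);
}

// A chain is internally consistent if every block stores the hash of its
// predecessor.
bool chainIsValid(const std::vector<Block>& chain) {
    for (std::size_t i = 1; i < chain.size(); ++i)
        if (chain[i].prevHash != blockHash(chain[i - 1])) return false;
    return true;
}
```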

Related Work: Since the launch of Bitcoin in 2008 by Satoshi Nakamoto [4], blockchain technology has received immense attention from research communities in both academia and industry. Blockchain was introduced as a digital, distributed, decentralized cryptocurrency system. Decentralized digital currencies had, however, been introduced long before Bitcoin, in eCash [23] and later as peer-to-peer currencies [24, 25], but none of them focused on a common global log stored at every peer. Bitcoin introduced this concept of a distributed, decentralized, highly secure ledger as blockchain technology. Later, Ethereum [7] started using smart contracts (user-defined computer programs) in the blockchain, which further expanded the potential of blockchain from a ledger technology to a more general technology for anything of value. With smart contracts, the blockchain has become versatile and has been adopted for many applications such as digital identity, energy markets, supply chains, healthcare, real estate, and asset tokenization. Following that, many smart contract based blockchain platforms have been developed, such as EOS [12], Hyperledger [8], MultiChain [26], OmniLedger [27], and RapidChain [28].

However, substantially high transaction fees, poor throughput, high latency, and limited computation capacity prevent widespread adoption of public blockchains [12]. Moreover, most existing blockchain platforms execute transactions serially, one after another, and fail to exploit modern multi-core resources [5, 20]. Therefore, to improve throughput and to utilize the concurrency available in multi-core systems, researchers have developed solutions to execute transactions in parallel.

For the concurrent execution of smart contracts, Dickerson et al. [5] and Anjana et al. [20, 22] proposed Software Transactional Memory based multi-threaded approaches. They achieved better speedup over serial execution of transactions; however, their techniques assume non-nested transaction calls. Saraph et al. [6] performed an empirical analysis and exploited simple speculative concurrency in Ethereum smart contracts: they grouped the transactions of a block into two sets, one consisting of non-conflicting transactions that can be executed in parallel and the other consisting of conflicting transactions that are executed serially, and proposed a lock based technique to avoid the inconsistencies that may arise from concurrent execution. In another contribution, Zhang et al. [21] proposed a concurrent validator based on Multi-Version Timestamp Ordering concurrency control; in their approach, the miner can use any concurrency control protocol and generates a read-write set to help the validators execute the contract transactions of a block concurrently. In [29], Bartoletti et al. presented a theoretical, static analysis based perspective on the concurrent execution of smart contract transactions.

Distributed mining pools in the Bitcoin and Ethereum networks make use of distributed compute power to find the PoW in parallel and share the incentive based on a pre-agreed mechanism (proportional, pay-per-share, pay-per-last-N-shares, etc.) [15, 16, 17]. Both centralized and decentralized distributed mining pool solutions are in practical use for determining the PoW in Bitcoin and Ethereum. In the Bitcoin network, approximately 95 percent of the mining power resides with fewer than 10 mining pools, while in the Ethereum network roughly 80 percent of the mining power is held by 6 mining pools [30]. In distributed mining pools, computation-constrained miners participate in mining to earn rewards that they could not achieve independently.

Furthermore, in sharding based techniques [31], either the overall system is partitioned into smaller, equally sized committees, or the data (blockchain) is partitioned in such a way that a new node need not validate the entire chain but only specific blocks. In the EOS blockchain [12], sharding is proposed for the parallel execution of transactions. A static analysis of the block can be performed to determine the non-conflicting transactions in a block [29, 31, 32]: two transactions that do not modify a common data item (account) are grouped into different shards, and the different shards of a block are executed in parallel. Thus, transactions of a block belonging to different shards are executed concurrently using multiple threads, first by the miner and later by the validators during validation. EOS [12], OmniLedger [27], and RapidChain [28] propose optimizations using sharding techniques.

In this work, the idea of mining pools is taken up to parallelize transaction execution, mining, and validation. Essentially, we propose to combine these two concepts, static analysis based transaction sharding and distributed mining pools, to execute transactions in parallel along with parallel PoW computation.

III DiPETrans Architecture

Fig. 1: Overview of the DiPETrans Architecture and Functions

This section presents the proposed DiPETrans framework. We first give a high-level overview of the architecture, covering the functionality of the miner and the validator. Following that, the master-slave approach of a mining community is illustrated. Finally, the algorithms for static analysis of transactions and distributed mining are explained.

III-A DiPETrans Architecture

The architecture of the DiPETrans framework is shown in Fig. 1. There are different mining communities, such as Community 1 to 4 (Fig. 1, step 2). Each community is a set of devices (workers) that use their distributed compute power collaboratively to execute transactions and solve the PoW in parallel for a block. Workers in a community trust each other. As with existing mining pools, all workers in a community that participate in the parallel mining of a block get a part of the incentive fee, based on pre-agreed conditions.

One of the workers in the community is identified as the Master while the others are Slaves. The master serves as the peer that represents the community in the blockchain network for all operations. When a user submits a request (Fig. 1, step 1) to one of the peers in the blockchain network, the transaction is broadcast to all peers in the network, including the master of each community, and placed in their pending transaction queues (Fig. 1, step 3). Then all the miners in the network compete to form the next block from these transactions.

III-A1 Functions of the Miner

We follow a master-slave model within the community, and the master node is responsible for coordinating the overall functionality of the community (Fig. 1, step 4). The master can be selected based on a leader election algorithm or some other approach. We assume that no workers fail within the community. When the community acts as a miner to create new blocks, there are two phases: transaction execution (phase i) and solving the PoW (phase ii); both are parallelized. When the community acts as a validator, it only executes the first phase, executing transactions to validate them. In the miner's first phase, the master selects transactions from the community's pending transaction queue (step 3) to construct a block (step 5). It then identifies the independent transactions by performing a static analysis of the transactions (discussed later in Alg. 1). It groups dependent transactions into a single shard, and independent ones across different shards (step 6). The master then sends the shards to the slaves, along with the current state of the accounts (stateful variables) accessed by those transactions (steps 7, 8).

On receiving a shard from the master, a slave worker executes the transactions in its shard(s) serially (step 7), computes the new state of the accounts locally, and sends the results back to the master. While transactions across shards are independent, those within a shard have dependencies (Fig. 2) and hence are executed sequentially. To improve the throughput further, one could perform concurrent execution of transactions within a shard based on Software Transactional Memory (STM) to leverage multi-processing on a single device [5, 20, 22]; this is left as a future extension. Once all slaves complete executing the shards assigned to them, the master computes the final state of the block from the local states returned by the slaves.
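
A minimal sketch of this merge step is shown below, under the assumption (guaranteed by the sharding) that different shards touch disjoint accounts, so the master can combine the slaves' local states by a simple map union. The types and names are illustrative rather than the framework's actual API.

```cpp
// Sketch of the master's merge step: per-shard states returned by the slaves
// are combined into the block's final state. Sharding guarantees disjoint
// accounts across shards, so no key appears in more than one local state.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using Address = std::string;
using State = std::unordered_map<Address, int64_t>;  // account -> balance

State mergeShardStates(const std::vector<State>& shardStates) {
    State finalState;
    for (const State& local : shardStates)
        for (const auto& [account, balance] : local)
            finalState[account] = balance;  // disjoint keys: plain union
    return finalState;
}
```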

In the PoW phase of block creation (step 9, phase ii), the master sends the block (header along with transactions) and different nonce ranges to the slaves so that they can find a block hash smaller than the required difficulty. This is a brute-force iteration over possible values of the nonce until a match is found, which forms the PoW. Different slaves operate on different ranges of values in parallel to find the block hash. When a slave finds a correct hash, it informs the master, and the master notifies the remaining workers to terminate their computation. The master proposes the block with the executed transactions and the PoW, updates its local chain (step 10), and broadcasts the block to all peers in the blockchain network for validation. A successful validation by a majority of peers and the addition of the block to the consensus blockchain result in the workers of the mining community receiving the incentive fee for that block, based on pre-agreed conditions.
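
The sketch below illustrates this parallel PoW phase under the same hashing stand-in as before: the nonce space is split into ranges, each worker scans its own range, and the first worker to succeed signals the others to stop. In DiPETrans the workers are separate machines coordinated by the master; threads are used here only to keep the example self-contained.

```cpp
// Parallel nonce-range search with early termination. std::hash stands in
// for the real block hash; threads stand in for distributed slave workers.
#include <atomic>
#include <cstdint>
#include <functional>
#include <optional>
#include <string>
#include <thread>
#include <vector>

std::optional<uint64_t> parallelPoW(const std::string& header,
                                    uint64_t target, unsigned workers,
                                    uint64_t noncesPerWorker) {
    std::atomic<bool> found{false};
    std::atomic<uint64_t> answer{0};
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        uint64_t begin = w * noncesPerWorker;  // this worker's nonce range
        uint64_t end = begin + noncesPerWorker;
        pool.emplace_back([&, begin, end] {
            std::hash<std::string> h;
            for (uint64_t n = begin; n < end && !found.load(); ++n) {
                if (h(header + std::to_string(n)) < target) {
                    answer = n;
                    found = true;  // tell the other workers to stop
                    return;
                }
            }
        });
    }
    for (auto& t : pool) t.join();
    if (found) return answer.load();
    return std::nullopt;  // no worker found a valid nonce in its range
}
```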

III-A2 Functions of the Validator

After receiving a block from a miner (Fig. 1, step A), the remaining peers of the network serve as its validators. They validate the block by re-executing the transactions in the block and checking that the PoW hash matches; verifying the PoW hash is not expensive. When a DiPETrans community acts as a validator, its devices follow a similar approach to the first phase of mining (Fig. 1, steps A to G, phase i). The only difference is that validators do not find the PoW (phase ii); instead, they verify that the miner has done sufficient work to find a correct PoW, and they compare the final state they compute from their local chain with the final state supplied by the miner in the block (Fig. 1, step E). Alternatively, a validator can execute the transactions serially if it is not part of any community.

We take two different approaches to validation. In the first, the miner offers hints about the dependency information along with the transactions in the block. Specifically, the miner includes the shard ID for each transaction in a field of the block (Fig. 1, step O), which the validator can directly use to shard the transactions for parallel validation; this avoids a call to Alg. 1 by each validator in the blockchain network. We refer to these as Sharing Validators. The second approach is the Default Validator, where no additional details about shards are included in the block. The validator, if part of a community, may use the same Alg. 1 to statically analyze the dependencies itself, or, if it is a stand-alone validator, may validate the transactions sequentially.
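
A hedged sketch of the two validation modes is given below: the validator reuses the miner-supplied shard IDs when they are present, and otherwise falls back to running the static analysis itself. The Txn fields and helper names are illustrative, not the framework's actual interfaces.

```cpp
// Dispatch between the Sharing Validator and Default Validator modes.
#include <functional>
#include <map>
#include <vector>

struct Txn { int id; int shardId; };        // shardId supplied by the miner (if any)
using Shards = std::map<int, std::vector<Txn>>;  // shard id -> transactions

// Sharing Validator path: reuse the shard IDs included in the block.
Shards groupByShardId(const std::vector<Txn>& txns) {
    Shards shards;
    for (const Txn& t : txns) shards[t.shardId].push_back(t);
    return shards;
}

// Default Validator path: 'analyze' is the community's own static analysis
// (Alg. 1), passed in here so the sketch stays self-contained.
Shards shardsForValidation(
    const std::vector<Txn>& txns, bool minerSharedInfo,
    const std::function<Shards(const std::vector<Txn>&)>& analyze) {
    return minerSharedInfo ? groupByShardId(txns) : analyze(txns);
}
```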

III-A3 Sharding of the Block Transactions

Sharding is the process of identifying and grouping the dependent transactions in a block, with one shard created per group. This is illustrated in Fig. 2. Transaction T1 accesses the accounts (stateful variables) A1 and A2, T2 accesses A2 and A3, while T3 accesses A3 and A4. Since T1, T2, and T3 access common accounts, they are dependent on each other and are grouped into the same shard, Shard1. Similarly, transactions T4, T5, and T6 are grouped into Shard2, while T7, T8, and T9 are grouped into Shard3. Transactions in each shard are independent of those in other shards, and each shard can be executed in parallel by a different slave of the community.

Fig. 2: Sharding of the transactions in a block
Data: txnsList
Result: sendTxnsMap
Procedure Analyze(txnsList):
          // prepare conflictMap, addressSet, adjacencyMap to find the WCC
          Map<address, List<txID>> conflictMap;
          Set<address> addressSet;
          Map<address, List<address>> adjacencyMap;
          for each tx in txnsList do
                   conflictMap[tx.from].put(tx.txID);
                   conflictMap[tx.to].put(tx.txID);
                   addressSet.put(tx.from);
                   addressSet.put(tx.to);
                   adjacencyMap[tx.from].put(tx.to);
                   adjacencyMap[tx.to].put(tx.from);
          end
          Map<shardID, Set<txID>> shardsMap;
          // call WCC until all addresses are visited
          shardsMap = WCC(addressSet, adjacencyMap, conflictMap);
          Map<workerID, List<Transaction>> sendTxnsMap;
          // equally load balance the shards across the slaves
          sendTxnsMap = LoadBalance(shardsMap);
          return sendTxnsMap;
Algorithm 1 Analyze()

We model the problem of finding the shards as a graph problem. Each account serves as a vertex in the transaction dependency graph, identified by its address, and we introduce an undirected edge, identified by the transaction ID, between the accounts accessed by a transaction; a transaction that accesses multiple addresses thus introduces edges among all of those addresses. Next, we find the Weakly Connected Components (WCC) of this dependency graph. Each connected component forms a single shard and contains the edges (transactions) that are part of that component. The transactions within a single shard are kept in their sequential order of arrival. Transactions that are not dependent on any other transaction are not present in this graph and are placed in singleton shards.
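
The sketch below shows one way to realize this step with a union-find structure, which computes the same connected components as the WCC traversal in Alg. 1: the accounts touched by each transaction are united, and every resulting component becomes one shard. Field and type names follow the description above, not the framework's exact code.

```cpp
// Shard identification via union-find over account addresses: each
// transaction unites the accounts it touches, and each connected component
// (including singletons) becomes one shard of transaction IDs.
#include <string>
#include <unordered_map>
#include <vector>

struct Tx { int txID; std::string from; std::string to; };

struct UnionFind {
    std::unordered_map<std::string, std::string> parent;
    std::string find(const std::string& a) {
        if (!parent.count(a)) parent[a] = a;              // new account
        if (parent[a] != a) parent[a] = find(parent[a]);  // path compression
        return parent[a];
    }
    void unite(const std::string& a, const std::string& b) {
        parent[find(a)] = find(b);
    }
};

// Returns shards keyed by a component representative (an account address),
// each holding the IDs of the transactions that fall into that shard.
std::unordered_map<std::string, std::vector<int>>
analyzeShards(const std::vector<Tx>& txns) {
    UnionFind uf;
    for (const Tx& t : txns) uf.unite(t.from, t.to);  // one edge per transaction
    std::unordered_map<std::string, std::vector<int>> shards;
    for (const Tx& t : txns) shards[uf.find(t.from)].push_back(t.txID);
    return shards;
}
```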

The number of shards thus created may exceed the number of slaves. In this case, we attempt to balance the number of transactions (shards) assigned per device. Here, we assume that all transactions take the same execution time, which may not hold in practice since smart contract function calls may vary in latency and be costlier than non-contract (monetary) transactions.
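
A simple greedy strategy for this load balancing is sketched below: shards are assigned largest-first to the currently least-loaded slave, under the stated assumption that all transactions cost the same. This mirrors the role of the LoadBalance() call in Alg. 1, but it is an illustrative policy rather than the framework's exact implementation.

```cpp
// Greedy load balancing: assign shards (largest first) to the least-loaded
// worker, where load is measured as the number of transactions assigned.
#include <algorithm>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// shards[i] = number of transactions in shard i; returns worker index per shard.
std::vector<int> loadBalance(const std::vector<int>& shards, int numWorkers) {
    std::vector<std::pair<int, int>> order;  // (size, shard index)
    for (int i = 0; i < (int)shards.size(); ++i) order.push_back({shards[i], i});
    std::sort(order.rbegin(), order.rend());  // largest shards first

    // min-heap of (current load, worker id)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<>> workers;
    for (int w = 0; w < numWorkers; ++w) workers.push({0, w});

    std::vector<int> assignment(shards.size());
    for (auto [size, shard] : order) {
        auto [load, w] = workers.top();
        workers.pop();
        assignment[shard] = w;             // give shard to least-loaded worker
        workers.push({load + size, w});
    }
    return assignment;
}
```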

III-B Sequence of Operations

Fig. 3 shows the sequence diagram for processing a block by a miner and a validator community in DiPETrans. There are 4 roles: MasterMiner, SlaveMiner, MasterValidator, and SlaveValidator. The MasterMiner starts block execution by creating a block from the transaction queue. The created block consists of a set of transactions (step b), including block-specific information such as the timestamp, miner details (coinbase address), nonce, hash of the previous block, final state, etc. The transactions of the block are formed into a dependency graph for static analysis using WCC (step c) to identify disjoint sets of transactions that form shards; load balancing and mapping of shards to slaves is done as well. The MasterMiner then sends the shards to the slave devices in parallel (step d), and these are executed locally on each slave (step e). After successful execution, each SlaveMiner sends the updated account states back to the MasterMiner (step f). The MasterMiner updates its global account state based on the responses received from all the SlaveMiners (step g).

Once all SlaveMiners complete executing their assigned shards, the MasterMiner switches to the PoW phase. It assigns them the task of finding the PoW for different ranges of nonces concurrently (step h). Each SlaveMiner searches its range to solve the PoW for the block (step i) and sends back a response either when the PoW is solved or when its nonce range has been completely searched (step j).

Finally, the MasterMiner broadcasts the block, containing the transactions, the updated account states, the PoW, and optionally the mapping from shards to transactions, to the peers in the blockchain network for validation (step k).

Fig. 3: Sequence diagram of operations during mining and validation

When a MasterValidator receives a block to verify, it needs to re-execute the block transactions and match the resulting account states with those present in the block. For this, the MasterValidator either uses the shard information present in the block (Sharing Validator) or, if it is not present, determines it using the same dependency graph approach as the MasterMiner (step c). The MasterValidator then assigns the shards to the SlaveValidators (step l). After successfully executing the transactions assigned by the MasterValidator (step m), each SlaveValidator returns the account states to the MasterValidator (step n). The responses are verified by the MasterValidator against the states present in the block (step o). The MasterValidator also confirms that the miner correctly found the PoW (step p). After both checks succeed, the MasterValidator accepts the block and propagates the message to reach consensus.

IV Experiments and Results

In this section, we first provide a general overview of the implementation; then we present the transaction workloads and experimental setup; finally, the performance analysis presents the execution time and speedup achieved by the proposed approach over serial execution.

IV-A Implementation

Incorporating our proposed approach into an existing blockchain framework like Ethereum is time-consuming due to the complexity of those platforms' codebases. Instead, we implement a stand-alone version of a blockchain framework that models the peers as a set of micro-services performing the various mining and validation operations essential to a blockchain network, including the operations proposed in DiPETrans. The implementation is in C++ using the Apache Thrift cross-platform micro-services library.

IV-B Transaction Workload

We use historical transactions from the Ethereum blockchain in our experiments, acquired from the public data archive available on Google's BigQuery engine [33]. We chose transactions starting from the block at the hard fork where Ethereum changed its mining reward, and extracted blocks containing 5,170,597 transactions in total. While the original transactions have many fields, we selected 6 fields of interest as part of our workload: the from_address of the sender, the to_address of the receiver, the value transferred in Wei (the unit of Ethereum currency), the input data sent along with the transaction, the receipt_contract_address, which is the contract address when a contract is created for the first time, and the block_number of the block in which the transaction appears.
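
For concreteness, the selected fields can be represented by a record like the one below; the types are illustrative (for example, value is kept as a string because Wei amounts can exceed 64 bits), and this is not the framework's exact schema.

```cpp
// Illustrative record for the six fields extracted from each Ethereum
// transaction in the workload.
#include <cstdint>
#include <string>

struct WorkloadTxn {
    std::string from_address;              // sender
    std::string to_address;                // receiver
    std::string value;                     // amount transferred, in Wei
    std::string input;                     // call data sent with the transaction
    std::string receipt_contract_address;  // set when a contract is first created
    uint64_t block_number;                 // block containing this transaction
};
```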

We consider two types of transactions: monetary transactions and smart contract transactions [6]. In the former, also known as value-transfer or non-contractual transactions, coins are transferred from one account to another; this is a simple, low-latency operation. In a contractual transaction, one or more smart contract functions are called. Analyzing the Ethereum transactions, we identified the unique functions called across contracts and found that the top 11 most frequently called functions cover most of the contract transactions. We re-implemented these contract functions from the Solidity language used by Ethereum into C++ function calls that can be invoked by our framework.
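
As an illustration of what such a port looks like, the sketch below reimplements a typical token transfer() in C++; it is only an assumed example of the style of the ported functions, not one of the paper's actual eleven functions.

```cpp
// Example of a Solidity-style contract function ported to a C++ call that the
// framework can invoke. Mirrors the usual transfer(address, amount) semantics:
// debit the sender, credit the receiver, fail if funds are insufficient.
#include <cstdint>
#include <string>
#include <unordered_map>

using Balances = std::unordered_map<std::string, uint64_t>;

bool transfer(Balances& balances, const std::string& from,
              const std::string& to, uint64_t amount) {
    auto it = balances.find(from);
    if (it == balances.end() || it->second < amount) return false;
    it->second -= amount;
    balances[to] += amount;
    return true;
}
```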

Of the transactions present in the Ethereum blocks we consider, only a fraction are contract transactions. However, with the wider use of smart contracts, they will form a larger fraction in the future compared to purely monetary transactions. Contract transactions are also more compute-intensive to execute than monetary transactions, and benefit more from our parallelism. Hence, we create workloads with different ratios of contract to monetary transactions: 1:1, 1:2, 1:4, 1:8, and 1:16. Each block formed by our miners has 100 to 500 transactions in the chosen ratio, depending on the workload used in an experiment (see Table I, Appendix -A).

IV-C Experimental Setup

We use a commodity cluster to run the masters and slaves in the mining and validation communities of our DiPETrans blockchain network. Each node in the cluster has an 8-core AMD Opteron 3380 CPU with 32 GB RAM, and the nodes are connected by 1 Gbps Ethernet. A mining community has a master running on one node and between one and five slaves, each running on a separate node, depending on the experiment configuration. Similarly, a validation community has one master and between one and five slaves.

Fig. 4: Workload-1: Average Transaction Execution Time by Miner (Without Mining) and Validator
Fig. 5: Workload-2: Average Transaction Execution Time by Miner (Without Mining) and Validator
Fig. 6: Workload-1: Average Speedup by Miner (Without Mining) and Validator for Transaction Execution
Fig. 7: Workload-2: Average Speedup by Miner (Without Mining) and Validator for Transaction Execution
Fig. 8: Average End-to-End Block Creation Time by Miner with Mining
Fig. 9: Average End-to-End Block Creation Speedup by Parallel Miner over Serial Miner with Mining

IV-D Performance Analysis

For each data set, blocks are executed serially and with 1 to 5 slave configurations; the serial results serve as the baseline for comparing performance. Block execution time is broken down into transaction execution time and end-to-end execution time on a per-block basis. Block execution with mining gives the end-to-end block creation time at the miner, while execution without mining gives the transaction execution time at the miner. The transaction execution time at a validator includes the time taken to re-execute and verify the transactions; for the Default Validator, the time taken by the static analysis is also included in the validator's time.

We select 3 different workloads for the analysis and average the total time over all blocks. In Workload-1, the number of transactions per block varies from 100 to 500, and the execution time is averaged across the data sets (contract : non-contract transaction ratios of 1:1, 1:2, 1:4, 1:8, and 1:16). In Workload-2, the data set ratio varies from 1:1 to 1:16, with the number of transactions fixed at 500 per block. In Workload-3, the number of transactions is fixed at 500 per block, and the community size varies from 1 to 5. For all these analyses, we compute execution time and speedup (with and without mining). The analysis for Workload-3 and the tables of execution time and speedup are given in Appendix -C.

IV-D1 Block Execution Time without Mining

This section presents the experimental analysis of transaction execution time at the miner (without mining) and at the validators. In all the figures, serial execution serves as the baseline. Subfigure (a) shows line plots of the mean transaction execution time at the miner, subfigure (b) at the Default Validator, and subfigure (c) at the Sharing Validator. Subfigure (d) compares the average transaction execution time of the Default Validator and the Sharing Validator.

Workload-1: Fig. 4 shows the average transaction execution time taken by the miner and validators (Table III, Appendix -C). As shown in Fig. 4(a), 4(b), 4(c), and 4(d), the time required to execute the transactions of a block increases with the number of transactions in the block. The 1-slave configuration performs worst due to the overhead of static analysis and communication with the master, while the configurations with 2 to 5 slaves are all better than serial execution.

Fig. 4(d) compares the mean transaction execution time of the Default Validator and the Sharing Validator. The only difference between them is that the Default Validator must run the static analysis on the block transactions before execution, so it would be expected to take more time than the Sharing Validator. However, the experiment shows no significant difference (see Table III), since the static analysis takes very little time.

Workload-2: In this workload, the number of transactions per block is fixed at 500. Fig. 5 (Table IV, Appendix -C) shows line plots of the mean transaction execution time taken by the miner and validators. In Fig. 5(a), 5(b), and 5(c), it can be seen that as the ratio of contractual to non-contractual transactions is varied, i.e., as the number of contract transactions per block decreases, the overall time required to execute the transactions also decreases, because contractual transactions include external calls.

Another observation is that serial execution outperforms the 1-slave configuration, because with 1 slave there is the overhead of static analysis and communication with the master. For the configurations with 2 to 5 slaves, performance increases with the number of slaves in the community. However, as the share of non-contractual transactions in a block grows, serial execution starts to perform relatively better, possibly because communication time comes to dominate in the proposed parallel approach. In Fig. 5(c), we can see that the time required to execute the transactions of a block decreases as the number of contract transactions decreases.

Fig. 5(d) shows that the parallel validator always takes less time than the serial one, and the gap widens as the number of non-contractual transactions per block increases. The Default Validator would be expected to take more time than the Sharing Validator, but the experiment shows that the analyze function takes little time, so there is not much difference between them.

Speedup Analysis: The speedup for Workload-1, Workload-2, and Workload-3 is given in Fig. 6 (Table V), Fig. 7, and Fig. 11 (Table VI), respectively. In all experiments, serial execution is the baseline, and the parallel execution speedup is shown as line charts. We achieve speedup over serial execution using just 5 slaves in the community. The speedup increases roughly linearly from 2 slaves to 5 slaves, so it is expected to increase further as the community size or the number of transactions per block grows. However, the 1-slave community achieves no speedup over serial execution, due to the static analysis and communication costs.

Further, the Sharing Validator was expected to outperform the Default Validator, but their performance is almost the same, which indicates that the static analysis does not take much time. On average, the parallel miner, Default Validator, and Sharing Validator all achieve speedup over serial execution, with maximum speedups of 2.18X for the miner, 2.02X for the Default Validator, and 2.04X for the Sharing Validator for 100 to 500 transactions per block. The experiments show that parallel execution gives better speedup and can be used by trusted nodes to form a community and earn incentives by mining blocks in the blockchain.

IV-D2 Block Execution Time with Mining

Here we present the analysis of end-to-end block creation time at the miner, including mining. We observe that as the number of transactions per block increases, the end-to-end time, including the time for finding the PoW, also increases. This is because the difficulty is fixed at the start of and throughout the experiments, and finding the PoW is a random process, so we cannot guarantee a maximum time to find it.

Existing blockchain platforms calibrate the difficulty to keep the mean end-to-end block creation time within limits. In Bitcoin, a block is created roughly every 10 minutes, i.e., 2016 blocks every two weeks; after every 2016 blocks, the difficulty is recalibrated based on how long those blocks took (if they took more than two weeks, the difficulty is reduced, otherwise it is increased), and several other factors are also considered. In the Ethereum blockchain, a block is produced roughly every 10 to 19 seconds, and the difficulty is set accordingly: after every block creation, if the mining time falls below 9 seconds, Geth (a multipurpose command line tool that runs an Ethereum full node) tries to increase the difficulty, and if the gap between block creations exceeds 20 seconds, Geth tries to reduce the mining difficulty. In the proposed DiPETrans framework, we fix the difficulty throughout the experiments, and for that reason an increase in mining time can be observed as the number of transactions per block increases.

As shown in Fig. 8 and Fig. 9, as the number of slaves in the community increases, the end-to-end block creation time decreases. In Workload-1, the time taken by the mining algorithm increases with the number of transactions per block, as shown in Fig. 8(a). In contrast, as shown in Fig. 8(b), when the share of monetary transactions per block increases, the mining time sometimes increases and sometimes decreases. In both cases, the likely reason is that we do not change the difficulty while varying the number of transactions or the transaction ratio. In Fig. 9(a) and (b), the speedup is similar across varying numbers of transactions and transaction ratios, but larger communities achieve higher speedup than smaller communities and than serial execution.

Due to page limitations, the remaining results are presented in the appendices: the remaining transaction execution time and speedup results in Appendix -C, the end-to-end block execution time at the miner in Appendix -D, and the results for 500 to 2500 transactions per block in Appendix -E.

V Conclusion

The main question we address in this work is whether it is possible to improve throughput in current blockchain systems by pooling distributed compute power through mining communities. To this end, we proposed a distributed framework, DiPETrans, to execute the transactions of a block in parallel on multiple trusted nodes (part of the same community). We tested our prototype on actual Ethereum transactions and achieved performance gains that grow linearly with the number of workers, for both contract and monetary transactions. We evaluated DiPETrans on workloads where the number of transactions per block varies from 100 to 500, with contract : non-contract transaction ratios of 1:1, 1:2, 1:4, 1:8, and 1:16. We found that the speedup in the distributed setting increases with the number of transactions per block, which helps improve throughput, while the execution time of a block increases with the number of contract calls it contains. We also observed that the speedup increases linearly with the size of the community. To our knowledge, no such study has been done before, which constitutes the novelty of this work.

As part of future work, assuming the number of transactions will increase over time, a community could work on proposing multiple blocks in parallel. As suggested earlier, another way to improve performance is to use STM at the slave nodes and execute the transactions within a shard in parallel across multiple cores instead of serially. In this work, we have assumed that there are no nested contract calls, but an approach to support such transactions can be devised. Mining performance can be further improved by using all the cores available on a node and dividing the PoW search space accordingly. Beyond these optimizations, we also plan to adopt a fully distributed approach within the community instead of the master-slave approach.

References