XOX Fabric: A hybrid approach to transaction execution

06/26/2019
by Christian Gorenflo, et al.

Performance and scalability are a major concern for blockchain systems to become viable for mainstream applications. While many permissionless systems are limited by slow consensus algorithms, Hyperledger Fabric has unique throughput optimization potential due to its permissioned nature. It has been shown to handle tens of thousands of transactions per second. However, these numbers reflect only the nominal throughput for uncontested transaction workloads. If incoming transactions compete for a small set of hot keys in the world state, the effective throughput drops drastically. We propose a novel two-pronged transaction execution approach that minimizes invalid transactions in the Fabric blockchain while maximizing concurrent execution.


I Introduction

Blockchain systems have evolved from their beginnings as tamper-proof append-only logs. With the addition of smart contracts, complex computations based on the blockchain's state become possible. In fact, multiple systems in both the permissionless and the permissioned blockchain context, such as Ethereum [7] and Hyperledger Fabric [2], allow Turing-complete calculations.

However, smart contracts come with a catch. Uncoordinated execution of contracts in a decentralized network can result in inconsistent states if there are dependencies. Blockchain systems have two options to settle such conflicts. They can either coordinate, i.e. linearize, smart contract execution or they can resolve inconsistencies after independent execution.

Most systems implement linear smart contract execution. This means they order transactions before they execute the corresponding smart contracts, giving this model the name order-execute (OX). Therefore, the smart contract execution happens sequentially. This allows each execution to act on the result of the previous execution, but also restricts the computations to a single thread. Blockchains using this model must also guarantee that the smart contract execution reaches the same result on any node in the network that replicates the chain. This makes the use of external data sources, so-called oracles, extremely difficult, because they cannot be directly controlled and might deliver different data to the various nodes in the network.

On the other hand, Fabric adopts the opposite model of execute-order (XO). Smart contracts called by the transactions are executed regardless of order and in complete isolation. Afterwards, only the results of these computations are ordered and put into the blockchain. This parallelized smart contract execution allows, among other benefits, a nominal transaction throughput many orders of magnitude higher than that of other blockchains [3]. Yet, a model that executes each transaction in isolation is inherently incapable of detecting semantic transaction conflicts during execution, as the following example illustrates. Take a smart contract that allows transfers of digital coins from one account to another. Then, assume one transaction tries to add 40 coins to an account currently holding 100 coins, while in close succession another transaction subtracts 20 coins from the same account. One will calculate 140 coins as the account's final balance and the other 80 coins, because neither has knowledge of the other transaction. With an XO model, Fabric cannot re-evaluate the results after they are ordered; it can only choose to accept either 140 or 80 as the final result and discard the other. To do this correctly, it has to filter out invalid transactions in a sequential validation pass after the order is known. This bottleneck decreases the effective transaction throughput to a fraction of the nominal throughput if the percentage of conflicting transactions per block is large.
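To make this concrete, the following self-contained Go sketch models the conflict under a simplified version of Fabric's versioned key-value state; the types and the version counter are illustrative stand-ins, not Fabric's actual data structures:

```go
package main

import "fmt"

// Version identifies the state snapshot a transaction executed against.
// Fabric uses (block number, transaction number) pairs; a counter suffices here.
type Version int

// RWSet is a simplified read-write set: the version read for the account
// key and the value the transaction wants to write back.
type RWSet struct {
	ReadVersion Version
	WriteValue  int
}

func main() {
	balance, version := 100, Version(7) // current world state of the account

	// Both transactions are executed in isolation against the same snapshot.
	txAdd := RWSet{ReadVersion: version, WriteValue: balance + 40} // expects 140
	txSub := RWSet{ReadVersion: version, WriteValue: balance - 20} // expects 80

	// After ordering, the peer validates the results sequentially.
	for _, tx := range []RWSet{txAdd, txSub} {
		if tx.ReadVersion != version {
			fmt.Println("discarded: transaction read a stale version")
			continue
		}
		balance, version = tx.WriteValue, version+1
		fmt.Println("committed, new balance:", balance)
	}
	// txAdd commits the balance 140; txSub is discarded because the key's
	// version advanced, even though 120 would be the intended joint result.
}
```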

Prior work on contentious workloads in Fabric focuses mostly on early transaction abort, and often achieves this only by tightly coupling the separate parts of the Fabric network, losing its modular structure in the process. Furthermore, early abort only treats a symptom, not the cause: it filters out invalid transactions to make room in submitted blocks instead of preventing invalid execution in the first place. This approach does not help when many transactions try to modify a small number of accounts. For example, if the network supports a throughput of 1000 transactions per second and 20 transactions of each block of 100 transactions try to access a single account, then only one of those 20 becomes valid and the rest are aborted early. If all clients now attempt to re-execute their aborted transactions, these add to the 20 new conflicting transactions that would be submitted anyway, leading to 38 aborted transactions in the next round. The number of aborted transactions grows linearly until it surpasses the throughput of the system after a short while. At this point, the whole network stalls indefinitely if the clients insist on re-executing aborted transactions.
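The growth of this backlog can be checked with a short Go simulation; the block size, the 20 new contenders per block, and the single successful commit per block are taken from the example above:

```go
package main

import "fmt"

func main() {
	const blockSize = 100    // transactions per block
	const newContenders = 20 // new transactions per block targeting the hot key
	retries := 0             // aborted transactions awaiting re-execution

	for block := 1; block <= 6; block++ {
		contending := newContenders + retries
		retries = contending - 1 // exactly one contender commits per block
		fmt.Printf("block %d: %d contending, %d aborted\n", block, contending, retries)
		if retries > blockSize {
			fmt.Println("the backlog alone exceeds a whole block: the network stalls")
			break
		}
	}
}
```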

Our proposed approach can deal with such highly skewed workloads and leaves the decoupled and modular structure of the Fabric architecture intact. Our contributions are as follows:

  • Introducing a hybrid execution model: Instead of choosing between the order-execute and execute-order models, we propose a hybrid execute-order-execute (XOX) model. This allows us to choose an optimal trade-off between concurrent high-performance execution and consistent linear execution, which prevents the cumulative re-execution death of the system.

  • Use of external oracles in the post-order execution phase: Without another round of consensus on the output of the post-order execution phase, current blockchains like Ethereum must rely on deterministic code execution, which makes the use of external oracles very difficult. We show how to extend our basic hybrid model to allow easy access to external data in the second execution phase.

  • Concurrent transaction commitment: Fabric’s current semantic transaction validation as well as transaction commitment are done in a single thread. By analyzing the dependencies between transactions in a block we are able to achieve higher throughput by parallelizing these steps.

II Hyperledger Fabric

The most prominent proponent of the XO model, Hyperledger Fabric, has been described in detail by Androulaki et al. [2]. Therefore, we only give a brief synopsis of those parts of the Fabric architecture that are relevant to this work.

A Fabric network consists of peer nodes, which replicate the blockchain and world state, and a set of nodes called the ordering service, whose sole purpose is to order transactions into blocks. The nodes can belong to different organizations collaborating on the same blockchain. Because of this strict separation of concerns, Fabric's blockchain model is completely agnostic to the kind of consensus algorithm in use. In fact, the official release 1.4.1 supports three pluggable algorithms out of the box: Solo, Kafka, and Raft. As we will show in Section V, we preserve Fabric's modularity completely.

Apart from replication and ordering, Fabric also needs a way to execute its equivalent of smart contracts, called chaincode. Endorsers, a subset of peers, fill this role. Each transaction proposal an endorser receives is executed in isolation. A successful execution of arbitrarily complex chaincode results in a read and write set (RW set) of {key, value, version} tuples. They act as instructions for a state transition of the world state. The endorser then appends the RW set to the initial proposal, signs the response, sends it back to the requesting client and discards the simulated transaction before executing the next one.
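A minimal Go sketch of the data an endorser returns; the type and field names are our own simplification, and Fabric's actual protobuf definitions are more elaborate:

```go
package endorsement

// KVRead records which version of a key the chaincode read during simulation.
type KVRead struct {
	Key     string
	Version uint64
}

// KVWrite records the value the chaincode wants to commit for a key.
type KVWrite struct {
	Key   string
	Value []byte
}

// RWSet is the result of one isolated chaincode execution. Only this
// travels onward; the simulated state itself is discarded by the endorser.
type RWSet struct {
	Reads  []KVRead
	Writes []KVWrite
}

// ProposalResponse is what the endorser signs and returns to the client.
type ProposalResponse struct {
	RWSet     RWSet
	Signature []byte // endorser's signature over proposal and RW set
}
```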

To combat non-determinism or malicious behaviour during chaincode execution, endorsement policies can be set up. For example, a client might be required to collect identical results from three endorsers across two different organizations before being allowed to send the transaction to the ordering service.

After transactions are ordered into blocks, they are disseminated to all peers in the network. These peers first independently perform a syntactic validation of the incoming blocks and check adherence to endorsement policies. Lastly, they sequentially compare each transaction's RW set to the current view of the world state. If the version number of any key in the set disagrees with the world state, the transaction is discarded. Additionally, any RW set overlap across transactions in the same block leads to an invalidation of all but the very first conflicting transaction. As a consequence of this execution model, Fabric's blockchain also contains invalid transactions, which every peer independently flags as such during validation and ignores during commitment to the world state. In the worst case, all transactions in a block might be invalid. This can drastically reduce the effective transaction throughput of the system.
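This sequential validation pass can be sketched as follows, a simplified model that combines the stale-version check and the intra-block overlap rule (with compact stand-ins for the RW set types from the sketch above):

```go
package validation

// Compact stand-ins for the types from the endorsement sketch.
type KVRead struct {
	Key     string
	Version uint64
}
type KVWrite struct{ Key string }
type RWSet struct {
	Reads  []KVRead
	Writes []KVWrite
}

// ValidateBlock flags each transaction of an ordered block as valid or
// invalid in a single, in-order pass over the block.
func ValidateBlock(worldVersion map[string]uint64, block []RWSet) []bool {
	valid := make([]bool, len(block))
	dirty := map[string]bool{} // keys written by an earlier tx in this block
	for i, tx := range block {
		ok := true
		for _, r := range tx.Reads {
			// Stale read against the world state, or overlap with an
			// earlier transaction in the same block: invalidate.
			if worldVersion[r.Key] != r.Version || dirty[r.Key] {
				ok = false
				break
			}
		}
		valid[i] = ok
		if ok {
			for _, w := range tx.Writes {
				worldVersion[w.Key]++ // commit bumps the key's version
				dirty[w.Key] = true
			}
		}
	}
	return valid
}
```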

III Related Work

Improving performance is an important issue for blockchain systems, since they are still far slower than traditional database systems. While most research focuses on inventing new consensus algorithms, little work has been done to optimize other aspects of the transaction flow, especially transaction execution.

We base this work on FastFabric, our previous optimization of Hyperledger Fabric [4]. We introduced more efficient data structures, caching, and increased parallelization in the transaction validation pipeline to increase Fabric's throughput by a factor of six to seven, measured on a conflict-free transaction workload. Now, we extend our findings to handle arbitrarily contentious workloads.

As far as we are aware, a development document from the Fabric community [6] is the first to propose a secondary post-order execution step for Fabric. However, the document proposes a set of available commands consisting only of addition, subtraction, and checks that a number lies in a certain interval. Furthermore, this secondary execution step is always triggered regardless of circumstance, and no consideration is given to parallel execution. This drastically diminishes the value of retaining the first, pre-order execution step and introduces the same bottleneck that OX models have to deal with.

Amiri et al. [1] introduce ParBlockchain, which uses an architecture very similar to Fabric's but with an OX model. Here, the ordering service also generates a dependency graph of the transactions in a block. Subsequently, all transactions in the new block are distributed to nodes in the network to be executed, taking the dependencies into account. Only a subset of nodes executes any given transaction and shares the result with the rest of the network. Their approach has two major drawbacks. First, they require the ordering service to determine transaction dependencies before the transactions are executed. Not only would the orderers need complete knowledge of all installed smart contracts to do that, it would also drastically restrict the complexity of allowed contracts. As soon as a single conditional statement relies on a state value, for example "read the value of key $v_x$, where $v_x$ is the value to be read from key $x$", reasoning about the result becomes impossible. Second, depending on the workload, all nodes may have to communicate the current world state after every transaction execution to resolve execution deadlocks. This leads to a vast networking overhead.

Sharma et al. [5] approach blockchains from a classical database point of view and attempt to incorporate concepts like early abort and transaction reordering into Hyperledger Fabric. However, they ignore its modular design and closely couple the different building blocks: for both early abort and transaction reordering, the ordering service needs a deep understanding of the transaction content to be able to unpack and analyze RW sets. Furthermore, transaction reordering only works in pathological cases. Whenever a key appears both in the read and the write set, which is the case for any application that transfers any kind of asset, reordering cannot eliminate RW set conflicts. While their early transaction abort might increase overall throughput slightly, it cannot solve the problem of hot keys and only skews the transaction workload away from those keys.

Lastly, Zhang [8] presents a solution for a client-side early abort mechanism for Fabric: a transaction cache on the client analyzes endorsed transactions to detect RW set conflicts and only sends conflict-free transactions to the ordering service. Transactions that have dependencies are held in the cache until the conflict is resolved and are then sent back to the endorsers for re-execution. This approach prevents invalid transactions from a single client, but cannot deal with conflicts between multiple clients. Moreover, it cannot handle hot-key workloads either.

IV State machine replication and invalid state transitions

We can understand blockchain systems as a state machine replication mechanism. Each node in the network stores a replica of a state machine with a single genesis block as its START state. In this context, smart contracts become state transition functions. They take client requests, commonly referred to as transactions, as input and compute a state transition which can be subsequently committed to the world state. This world state is either implicitly created by the data on the blockchain or explicitly tracked in a data store, most commonly a key-value store. Because of blockchain’s inherently decentralized nature, keeping the world state consistent on all nodes is not trivial. A node’s stale state, a client’s incomplete information, parallel smart contract execution or malicious behaviour can all produce conflicting state transitions. Therefore, a blockchain’s execution model must be robust enough to prevent such transactions from modifying the world state. There are two possibilities to accomplish this and we will describe both.

IV-A The OX model

The commonly deployed solution is to first create a global ordering of transactions, i.e. blocks, and then to have every node compute the state transitions independently, giving it the name order-execute (OX). This guarantees a common linearization of transactions in a block. It requires certain restrictions on the execution engine to also guarantee that each node arrives at the same state transition results. First, special care must be taken so the output of the execution engine is deterministic. This means external oracles cannot easily be incorporated, because different nodes in the network might not receive the same information from the oracle. Second, depending on the allowed code complexity of smart contracts, there needs to be a mechanism to deal with the halting problem. In practice, even long-running but provably terminating code can be uneconomical to run. A common solution to this problem is the inclusion of an execution fee like Ethereum's gas: a client effectively pays for a certain amount of CPU time, and if the execution takes longer, it is automatically aborted. Because the transaction execution order is known, each state transition can be applied directly after its transaction is executed. This way, each consecutive execution always operates on the most current view of the world state. Therefore, the model completely prevents invalid transactions caused by inconsistent state transitions. The only ways a transaction can be invalid are that the smart contract logic discards it or its execution runs out of gas (or is stopped by an equivalent halting mechanism).

IV-B The XO model

In the execute-order (XO) model, transactions are executed in arbitrary order and the resulting state transitions are then put into ordered blocks. This allows transactions to be executed in parallel to increase throughput. However, the world state at the time of state transition commitment is not yet known at the time of execution, so all transactions are inevitably executed on a stale view of the world state. This makes it possible for transactions to result in invalid state transitions even though they executed successfully before ordering. It necessitates a validation step after ordering so transitions can be invalidated deterministically based on detected conflicts. Consequently, for a transaction workload with a set of frequently updated keys, the effective throughput of a system with an XO model can be far smaller than the nominal throughput.

Hot Key Theorem.

Let $\Delta t$ be the average time between a transaction's execution and its state transition commitment. Then the average effective throughput for all transactions operating on the same key is at most $1/\Delta t$.

Proof.

Let $n$ denote the number of changes to an arbitrary but fixed key $k$.

$n = 1$:

For version $v_1$ to exist, there must be exactly one transaction $T_1$ which takes time $\Delta t_1$ from execution to commitment and creates $k$ with version $v_1$.

$n \to n + 1$:

Let $k$'s current version be $v_n$ at time $t_n$. Let $T_{n+1}$ be the transaction committed at time $t_{n+1}$ which updates $k$ to a new version $v_{n+1}$. Let $\Delta t_{n+1}$ be the time between $T_{n+1}$'s execution and commitment. By necessity, the version of $k$ during $T_{n+1}$'s execution must have been $v_n$, otherwise Fabric would invalidate $T_{n+1}$ and prevent commitment. Therefore, it must be

\[ t_{n+1} \geq t_n + \Delta t_{n+1}. \]

Likewise, no transaction which is ordered after $T_{n+1}$ can commit an update based on $v_n$, because $T_{n+1}$ already changed the state and it would therefore be invalid. Consequently, $T_{n+1}$ must be the only transaction able to update $k$ from $v_n$ to a newer version.

This means $n$ updates to $k$ take time $t_n$ with

\[ t_n \geq \sum_{i=1}^{n} \Delta t_i. \]

A lower bound on the average update time is then given by

\[ \overline{\Delta t} \;=\; \frac{t_n}{n} \;\geq\; \frac{1}{n} \sum_{i=1}^{n} \Delta t_i \;=\; \Delta t, \]

with the throughput being its inverse $1/\overline{\Delta t} \leq 1/\Delta t$. ∎

This theorem has a crucial consequence. As an example, FastFabric can achieve a nominal throughput of about 20,000 transactions per second, yet even an unreasonably fast transaction life cycle of 50 ms from execution to commitment would result in a maximum of 20 updates per second to the same key, or one update every ten blocks with a block size of 100 transactions. Even worse, transactions are not only invalidated when they use the same single key, but also if any key they try to modify overlaps with a previous transaction. This means workloads with hot keys can easily penalize the effective throughput by several orders of magnitude.
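For illustration, the arithmetic behind these numbers:

\[
\frac{1}{\Delta t} = \frac{1}{50\,\mathrm{ms}} = 20\ \mathrm{updates/s},
\qquad
\frac{20{,}000\ \mathrm{tx/s}}{100\ \mathrm{tx/block}} = 200\ \mathrm{blocks/s}
\;\Rightarrow\;
\frac{200\ \mathrm{blocks/s}}{20\ \mathrm{updates/s}} = 10\ \mathrm{blocks\ per\ hot\ key\ update}.
\]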

While early abort schemes can discard invalid transactions before they become part of a block, they cannot break the theorem. Assuming they result in blocks without invalid transactions, they can only fill the freed slots in a new block with transactions using different key spaces. Thus, they skew the processed transaction distribution so that it no longer reflects the actual demand. Furthermore, aborted transactions need to be re-executed and re-submitted, flooding the network with even more attempts to modify hot keys. If no other mechanisms are put into place, this leads to a complete blockage of endorsers by clients trying to get their invalid transactions re-executed in a short amount of time.

V The hybrid model

While the XO model allows for higher transaction throughput due to parallel chaincode execution, the effective throughput suffers under workloads of contentious transactions. We propose an execute-order-execute (XOX) model that adds a secondary, post-order execution step to minimize transaction conflicts while preserving concurrent block processing. We achieve this without introducing any centralized element. In the following, we describe the minor changes that are necessary for the pre-order execution step done by the endorsers, and then the changes we make to the critical transaction flow path on the peers after they receive blocks from the ordering service. The details of the crucial steps we introduce are described in Sections VI and VII. Most notably, our changes leave the ordering service completely untouched, preserving Fabric's modular structure.

V-A Pre-order endorser execution

The pre-order execution step leverages concurrent transaction execution and makes use of full programming languages like Go. Based on the endorsement policy, clients must request multiple endorsers to execute their transaction, and the returned results in the form of RW sets have to be identical. This comparison of results makes a deterministic execution environment unnecessary. Most notably, it also gives this step access to external oracles, such as weather or financial data. If oracle data leads to non-deterministic RW sets, the client will not be able to combine the endorser responses and the transaction will never even reach the Fabric network.

External oracles are a powerful tool, and we want to give the post-order execution step of our hybrid model access to them as well. To this end, we use the same mechanism that ensures deterministic transaction results for the pre-order execution: we simply extend the transaction response by an additional oracle set. Any external data that should be made available is recorded in the form of key-value pairs and added to the response to the client. Now, if the oracle sets for the same transaction executed by different endorsers differ, the client has to discard the transaction. Otherwise, the external data effectively becomes part of the deterministic world state, so the post-order execution step can use it without risk of inconsistencies.
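A sketch of how the endorser response from Section II might be extended; the names and the exact comparison are our own assumption of how a client would combine responses:

```go
package endorsement

import "reflect"

// RWSet is simplified here; see the sketch in Section II for a fuller model.
type RWSet struct{ Reads, Writes []string }

// OracleSet records external data the chaincode consulted as key-value
// pairs. Once all endorsers agree on it, it is as deterministic as the
// committed world state and can safely be replayed in the post-order step.
type OracleSet map[string]string

// Response models an endorser response extended with the oracle set
// (signatures omitted, since they differ per endorser anyway).
type Response struct {
	RWSet  RWSet
	Oracle OracleSet
}

// Combinable reports whether a client may combine two endorser responses:
// both RW sets and oracle sets must match exactly, otherwise the
// transaction is non-deterministic and the client has to discard it.
func Combinable(a, b Response) bool {
	return reflect.DeepEqual(a, b)
}
```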

V-B Critical transaction flow path

Because we base the model on our previous work on FastFabric [4], we aim to preserve our earlier optimizations. Instead of processing one block at a time, we showed how to pipeline syntactic block verification and endorsement policy validation (EP validation) so that multiple blocks can be processed at the same time. However, the RW set validation that checks for invalid state transitions and the final commitment still had to be done sequentially in a single thread. Now, we expand our concurrency efforts to incorporate these last sequential steps in the block pipeline.

To achieve this, we need to add two steps to the critical path on the peers, a dependency analyzer and the new post-order execution step of the hybrid execution model. We describe both in later sections in detail, so we will only give a brief overview of these steps at this point and otherwise regard them as given. This allows us to concentrate on the pipeline integration.

V-B1 Dependency analyzer

For concurrent transaction processing we rely on the ability to isolate them from each other. However, the sequential order of transactions in a block matters when their RW sets are validated and they are committed. A dependency exists when two transactions overlap in some keys of their RW sets. In that case, we cannot process them independently. Therefore, we need to keep track of dependencies between transactions so we know which subsets of transactions can be processed concurrently.

V-B2 Execution step

Transactions for which the dependency analyzer has found a dependency on an earlier transaction would be invalidated during Fabric's RW set validation. We introduce a step which re-executes transactions with such RW set conflicts based on the most up-to-date world state. It can resolve semantic conflicts that emerged from a lack of knowledge of concurrent transactions. Yet, it will still invalidate transactions if they attempt something the smart contract does not allow, like creating a negative account balance.

In FastFabric, peers receive blocks as fast as the ordering service can deliver them. If the syntactic verification of a block fails, the whole block is discarded, so it is reasonable to keep this as the first step in the pipeline. Note that all received blocks can be checked concurrently. After this step, the block boundaries are meaningless from the perspective of world state modification. Therefore, we can now regard the verified blocks as sources of a batched transaction stream and send the transactions to the EP validation step. Each transaction can be validated in parallel because the validations are independent of each other.

Here we add a new step to the pipeline. Because the next step in the pipeline is the RW set validation, we need the dependency information before it starts. Therefore, the dependency analyzer works in parallel with the EP validation. As we will show in Section VI, this can also be done concurrently for each transaction in the pipeline. However, after its dependency analysis a transaction has to stall in the pipeline until all previous transactions are analyzed as well. Otherwise, we could not rule out a dependency on a transaction that is ordered earlier but simply has not been processed yet.

At this point, all transactions that do not show any dependencies can go through the regular RW set validation step in parallel. If no conflict with the current world state is detected, they can also be committed concurrently. If a conflict arises, they are sent to the new execution step to be re-executed based on the current world state. Subsequently, transactions that are successfully re-executed are committed; all others are discarded. Transactions that do show dependencies once the analysis of all previous transactions is available stall until the transactions they depend on are either committed or discarded, and only then do they proceed to the RW set validation as described above.

Without these changes, every transaction had to wait until all previous transactions completed the RW set validation step. Now, dependency analysis works in parallel to EP validation and transactions can proceed as soon as all previous dependencies are known. Specifically, independent sets of transactions can pass through RW set validation, post-order execution and commitment steps concurrently.
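The stall rule, namely that a transaction may leave the analysis stage only once every transaction ordered before it has been analyzed, can be modeled with a few goroutines; a minimal sketch with the analysis itself elided:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 5 // transactions, indexed in blockchain order
	// done[i] is closed when transaction i finishes its dependency analysis.
	done := make([]chan struct{}, n)
	for i := range done {
		done[i] = make(chan struct{})
	}

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// ... dependency analysis of transaction i runs here ...
			close(done[i]) // analysis finished
			for j := 0; j < i; j++ {
				<-done[j] // stall until all earlier transactions are analyzed
			}
			fmt.Printf("tx %d may proceed to RW set validation\n", i)
		}(i)
	}
	wg.Wait()
}
```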

VI Dependency analyzer

It is unnecessary to force global transaction serialization. As previously discussed, sets of independent transactions can be processed concurrently. But to obtain this dependency information, we have to introduce a new mechanism into the critical path of the peers.

The only way for a transaction to have a dependency is an overlap of its RW set with that of a previous transaction. More precisely, there are two cases. In the first case, transaction $T_1$ is ordered before transaction $T_2$, but $T_2$ accesses an earlier version of a specific key than the one $T_1$'s commitment produces. By the time it is $T_2$'s turn to be committed, it would operate on an invalid state. Therefore, $T_2$ is dependent on the outcome of $T_1$'s commitment and has to wait for its conclusion. In the second case, transaction $T_1$ reads a specific version of a key and then transaction $T_2$ updates that key to a new version. Even though the write is not semantically dependent on the earlier read, we have to mark this as a dependency: if we executed those transactions in isolation, we could not guarantee that transaction $T_1$ would read the earlier version of the key.

To detect such conflicts, we keep track of read and write accesses to all keys across transactions. For each key, we create a linked list that acts as a dependency queue, recording all transactions that need to access it, sorted by the blockchain transaction order. These lists act as queues for subsequent scheduling. After the analysis of a transaction is complete, it does not continue to the next step in the pipeline until all previous transactions have also been analyzed, lest an existing dependency be missed.

Given a transaction for which the knowledge of all previous transactions is complete, a decision must be made. If it is not at the front of every queue for the keys in its RW set, it has to wait until all transactions preceding it have been completely processed. When it finally reaches the first position in all of its queues, it can be sent to the RW validation step. After a transaction is either committed or discarded, its entries are removed from the queues of all keys in its RW set.
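One way to realize these per-key queues in Go, using the standard library's linked list; a sketch that tracks transactions by their position in the blockchain order (a production version would additionally distinguish read from write accesses so that concurrent reads do not block each other):

```go
package depgraph

import "container/list"

// Analyzer keeps one dependency queue per key, sorted by transaction order.
type Analyzer struct {
	queues map[string]*list.List
}

func NewAnalyzer() *Analyzer {
	return &Analyzer{queues: map[string]*list.List{}}
}

// Add enqueues transaction seq for every key in its RW set. Transactions
// must be added in blockchain order so the queues stay sorted.
func (a *Analyzer) Add(seq int, keys []string) {
	for _, k := range keys {
		if a.queues[k] == nil {
			a.queues[k] = list.New()
		}
		a.queues[k].PushBack(seq)
	}
}

// Ready reports whether transaction seq is at the front of every queue it
// appears in, i.e. has no outstanding dependency and may be validated.
func (a *Analyzer) Ready(seq int, keys []string) bool {
	for _, k := range keys {
		if q := a.queues[k]; q != nil && q.Front().Value.(int) != seq {
			return false
		}
	}
	return true
}

// Remove deletes the transaction's entries once it is committed or
// discarded, unblocking its dependents.
func (a *Analyzer) Remove(seq int, keys []string) {
	for _, k := range keys {
		for e := a.queues[k].Front(); e != nil; e = e.Next() {
			if e.Value.(int) == seq {
				a.queues[k].Remove(e)
				break
			}
		}
	}
}
```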

VII Post-order execution step

When the RW validation finds a conflict between a transaction's RW set and the world state, that transaction needs to be re-executed to possibly salvage it. In doing so, the post-order execution stage needs to adhere to some constraints. The new RW set output must be a subset of the original RW set so that the dependency analyzer can reason properly; without this restriction, new dependencies could suddenly emerge and transactions scheduled for parallel processing could create an invalid world state. Apart from internal consistency, the blockchain network also needs consistency among peers. Therefore, the post-order execution must be deterministic, so there is no need for further consensus between peers. Lastly, this new execution step is part of the critical path and thus should be as fast as possible.

We propose the use of a modified version of Ethereum's EVM [7] for this task. Smart contracts in this stage take a transaction's read set and oracle set as input. The read set can be used to get the current key values from the world state. Based on those, and with the help of the oracle set, the smart contract performs the necessary computations to generate a new write set. Should the transaction not be allowed by the logic of the smart contract based on the updated values, the contract immediately discards it. Finally, in case of success, it outputs an updated RW set, which is then compared to the old one. If its keys are a subset of the old RW set's keys, the result is valid and can be committed.
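The subset constraint itself reduces to a simple key comparison; a sketch, assuming the keys are extracted from the old and new RW sets beforehand:

```go
package postorder

// ValidReexecution checks the constraint from this section: every key the
// re-execution read or wrote must already appear in the original RW set,
// otherwise new, unanalyzed dependencies could emerge.
func ValidReexecution(origKeys, newKeys []string) bool {
	orig := make(map[string]bool, len(origKeys))
	for _, k := range origKeys {
		orig[k] = true
	}
	for _, k := range newKeys {
		if !orig[k] {
			return false // re-execution touched a key outside the original set
		}
	}
	return true
}
```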

As an example, imagine client $A$ uses Fabric to add 70 digital coins to an account with a current balance of 20 coins. Simultaneously, client $B$ tries to add 50 coins to the same account. Both have to read the account's key, update its value and write the new value back, so the account's key is in both transactions' RW sets. Even if both clients are honest, only the transaction ordered earlier would be committed. Without loss of generality, assume that $A$'s transaction wins the race and updates the balance to 90 coins. In that case, $B$'s transaction waits for $A$'s to finish due to its dependency and then finds a key version conflict in the RW validation step. Therefore, it is sent to the post-order execution step. Now it can read the updated value from the database and add its own 50 coins for a total of 140 coins, which is recorded in its write set. After successful execution, the RW set comparison is performed and the new total is committed.

If, on the other hand, we start with an account balance of 100 coins and $A$ tries to subtract 50 coins while $B$ tries to subtract 60 coins, we get a different result. Again, $B$'s transaction is sent to be re-executed. But this time, it tries to subtract 60 coins from the updated balance of 50 coins, and the smart contract does not allow a negative balance. Therefore, $B$'s transaction is discarded, even though it was re-executed on the current world state.

This shows that our hybrid approach can salvage transactions which would otherwise have been discarded because they were executed on a stale world state. However, transactions which would lead to a world state that the smart contract logic forbids are still invalidated.

Lastly, if we do not put any restrictions on the execution we risk long expensive computations, low throughput and even non-terminating smart contracts. Ethereum deals with this by introducing gas. If a smart contract runs out of gas, the process is aborted and the transaction discarded. As of yet, Fabric does not include such a concept.

As a solution, we introduce virtual gas as a tuning parameter for system performance. Instead of originating from a bid by the client that proposes the transaction, it is set by a system administrator. If the post-order step runs out of gas for a transaction, the transaction is immediately invalidated; in case of success, the fee is never actually paid. A larger value allows for more complex computation at the cost of overall throughput. While the gas parameter should generally be chosen as small as possible, large values could make sense for workloads with very infrequent transaction conflicts and high importance of conflict resolution.
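A sketch of such a meter; the per-operation costs and the administrator-set budget are assumptions, since Fabric has no native gas concept:

```go
package postorder

import "errors"

// ErrOutOfGas aborts a re-execution that exceeds the configured budget.
var ErrOutOfGas = errors.New("virtual gas exhausted")

// GasMeter charges a cost per executed operation. Unlike Ethereum's gas,
// the budget is a system-wide tuning parameter set by an administrator,
// and on success no fee is ever actually paid.
type GasMeter struct{ Remaining uint64 }

// Consume deducts cost and reports whether execution may continue.
func (g *GasMeter) Consume(cost uint64) error {
	if cost > g.Remaining {
		g.Remaining = 0
		return ErrOutOfGas // the transaction is immediately invalidated
	}
	g.Remaining -= cost
	return nil
}
```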

VIII Conclusion and Future Work

In this work, we propose a novel hybrid execution model for Hyperledger Fabric consisting of a pre-order and a post-order execution step. This allows for a trade-off between parallel transaction execution and minimal invalidation due to conflicting results. In particular, our solution is able to deal with highly skewed workloads where most transactions use only a small set of hot keys. In contrast to other post-order execution models, we can enable the use of external oracles in our secondary execution step.

We will begin to implement a proof of concept of our hybrid model and extend this publication with experimental results as they become available.

References

  • [1] Mohammad Javad Amiri, Divyakant Agrawal, and Amr El Abbadi. ParBlockchain: Leveraging Transaction Parallelism in Permissioned Blockchain Systems. arXiv, Feb. 2019.
  • [2] Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin, Konstantinos Christidis, Angelo De Caro, David Enyeart, Christopher Ferris, Gennady Laventman, Yacov Manevich, Srinivasan Muralidharan, Chet Murthy, Binh Nguyen, Manish Sethi, Gari Singh, Keith Smith, Alessandro Sorniotti, Chrysoula Stathakopoulou, Marko Vukolić, Sharon Weed Cocco, and Jason Yellick. Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains. In Proceedings of the Thirteenth EuroSys Conference (EuroSys '18), pages 1–15, 2018.
  • [3] Tien Tuan Anh Dinh, Ji Wang, Gang Chen, Rui Liu, Beng Chin Ooi, and Kian-Lee Tan. BLOCKBENCH: A Framework for Analyzing Private Blockchains. In Proceedings of the 2017 ACM International Conference on Management of Data (SIGMOD '17), pages 1085–1100, 2017.
  • [4] Christian Gorenflo, Stephen Lee, Lukasz Golab, and S. Keshav. FastFabric: Scaling Hyperledger Fabric to 20,000 Transactions per Second. In IEEE International Conference on Blockchain and Cryptocurrency (ICBC), pages 455–463, 2019.
  • [5] Ankur Sharma, Felix Martin Schuhknecht, Divya Agrawal, and Jens Dittrich. How to Databasify a Blockchain: the Case of Hyperledger Fabric. arXiv, 2018.
  • [6] Alessandro Sorniotti, Angelo De Caro, Baohua Yang, Binh Nguyen, Manish Sethi, Marko Vukolić, Sheehan Anderson, Srinivasan Muralidharan, and Parth Thakkar. Fabric Proposal: Enhanced Concurrency Control. 2017.
  • [7] Gavin Wood. Ethereum: A Secure Decentralised Generalised Transaction Ledger. Yellow Paper, 2014.
  • [8] Shenbin Zhang. A Solution for the Risk of Non-deterministic Transactions in Hyperledger Fabric. In IEEE International Conference on Blockchain and Cryptocurrency (ICBC), pages 253–261, 2019.