Since its inception, blockchain technology has gained significant attention from both industry and academia. In stark contrast to traditional Peer-to-Peer (P2P) networks, blockchains provide a truly decentralized and reliable mechanism that allows users to communicate securely with each other without needing to trust any third party. As a result, blockchains have been used as a platform for large-scale distributed systems to enable a wide range of decentralized applications (dApps), such as digital currencies [1], food traceability [2], etc.
However, as blockchain applications become more prevalent and the daily transaction volume continues to rise, several fundamental performance issues of blockchain systems have emerged. One pressing issue is the high cost resulting from skyrocketing mining energy expenditure in operating blockchain systems. For example, in proof-of-work-based (PoW-based) blockchains such as Bitcoin, a miner has to invest significant computational power to find a solution and exhibit PoW faster than competing miners. Because of this competitive nature, and because the probability of mining the next block is positively correlated with the computational resources devoted to solving the puzzle, miners tend to consume vast amounts of electricity and buy various types of hardware (e.g., CPUs, GPUs, FPGAs, ASICs) to accelerate their mining. Indeed, studies have shown that the global energy consumed by Bitcoin mining is almost the same as the electricity used by Ireland [3].
To achieve more sustainable growth of blockchain systems, several new mechanisms have been proposed to increase mining resource efficiency, including, but not limited to, proof-of-stake (PoS) [4] and Trusted Execution Environments (TEEs) [5, 6]. However, recent studies have found that these new mechanisms are either vulnerable to adversarial attacks or yield only marginal energy savings. Due to these limitations, in this paper we dedicate ourselves to improving the mining resource efficiency of the traditional and dominant PoW-based blockchains (PoW-based blockchain protocols account for more than 90% of the existing digital currency market). Specifically, we optimize the operation of PoW-based blockchains from a theoretical perspective by taking a queueing-theoretic approach: we investigate how to dynamically allocate mining resources based on the number of transactions in the Mempool so as to minimize the time-average mining cost (mining energy cost offset by reward) while keeping the blockchain network stable. The main technical results are summarized as follows:
We develop a new logical queueing-based analytical model that allows us to formulate the blockchain mining cost minimization problem as a stochastic optimization problem. We note that our proposed queueing-based analytical model could be of independent interest for other research problems in blockchain systems.
Based on the above analytical model, we propose a queue-length-based online distributed dynamic mining resource allocation algorithm (DMRA) to identify near-optimal solutions. Our proposed algorithm does not require any prior statistical information about the arrival and service processes. Through theoretical analysis, we show that our algorithm comes within $O(1/V)$ distance of the optimal time-average mining cost, where $V > 0$ is a tunable system parameter in our proposed algorithm. Meanwhile, the proposed algorithm achieves an $O(V)$-bounded time-average queue-length.
Besides theoretical analysis, our simulation results also show that DMRA significantly reduces the mining energy cost while keeping the blockchain queue stable at the same time.
II Related Work
To our knowledge, queueing-related works in the blockchain literature include [8, 9, 10]. In [8], the authors studied transaction confirmation time, analytically modeling the transaction process as a single-server queue with batch service: transactions arrive at the queue according to a Poisson process and are served in batches. In their model, miners are profit-driven and select transactions from Mempools according to the transaction fees, and they showed that the confirmation times of transactions with larger fees are shorter than those of transactions with smaller fees. Ricci et al. [9] adopted a queueing model to analyze transaction delays in blockchain networks and showed that transaction fees and transaction values are two important factors influencing these delays. The author of [10] modeled the Bitcoin confirmation time as a Cramer-Lundberg process and provided a dynamical perspective to uncover the probability distribution of the confirmation time of Bitcoin transactions. Our work differs from [8, 9, 10] in the following key aspects: First, all of the aforementioned work studied transaction confirmation time, while the focus of this paper is on mining cost minimization subject to queueing stability of the blockchain system. Second, unlike [8, 9, 10], our proposed algorithm does not rely on any statistical assumptions about the arrival and service processes. This is desirable because, in practice, the transaction arrival and service processes are often unknown and can hardly be assumed to follow a particular queueing model.
Recently, there has also been a line of work on resource allocation problems in blockchain networks. For example, the authors of [11] considered a mobile blockchain network in which mobile users buy resources from an edge computing service provider (ESP) and the search for a PoW solution is offloaded to the provider; they proposed an auction-based scheme to maximize the social welfare of the mobile blockchain network. Similarly, the authors of [12] also adopted a computation-offloading mechanism to manage computing resources in mobile blockchains, formulating a two-stage Stackelberg game between the service provider and mobile miners to maximize the profits of both. Li et al. [13] likewise modeled energy allocation as a Stackelberg game for mobile blockchains on Internet-of-Things (IoT) devices, applying backward induction to guarantee the benefits of microgrids and miners. We note that all of these works [11, 12, 13] study resource allocation for mobile blockchain networks on IoT devices, where the mining process is offloaded to a service provider. In contrast, we propose a logical queueing-based analytical model to investigate how to directly allocate mining resources among miners to reduce the energy cost in a general blockchain network.
III Analytical Model and Problem Formulation
III-A Blockchain Network Information Propagation: A Primer
In blockchain networks, there are four steps for transactions to be delivered to the network and finally included in the global blockchain, as shown in Fig. 1. Each miner in the network (also referred to as a node in the rest of the paper) maintains an exact local copy of the blockchain, and each miner holds a memory, called the Mempool, where transactions are stored while waiting to be processed. When a new transaction is broadcast to the network, it is first validated by the miner at which it arrives. If valid, the transaction is stored in the Mempool and propagated to the other miners, see Fig. 1(a); otherwise, the entry miner rejects the transaction. Next, the transaction is broadcast to the whole blockchain network, see Fig. 1(b). After selecting a set of transactions from their own Mempools and forming them into a block, miners compete to find a solution to the PoW puzzle. When a miner successfully obtains a solution, it immediately broadcasts the newly mined block to its peers, see Fig. 1(c). Other miners then check whether the PoW solution is correct. If the majority of miners reach consensus that the solution is correct, the new block is added to the global blockchain and the transactions contained in this verified block are removed from the Mempools of all miners, see Fig. 1(d).
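To make the four-step lifecycle concrete, the following minimal Python sketch mirrors it in code. All class and function names here are hypothetical illustrations, not part of any real blockchain client; validation and PoW verification are reduced to placeholders.

```python
class Miner:
    """A toy miner holding a Mempool and a local copy of the chain."""
    def __init__(self, name):
        self.name = name
        self.mempool = set()   # transactions waiting to be processed
        self.chain = []        # local copy of the global blockchain

    def validate(self, tx):
        # Placeholder validity check (step (a): the entry miner validates)
        return bool(tx)

def broadcast_transaction(tx, miners):
    """Steps (a)-(b): validate at the entry miner, then propagate to all Mempools."""
    if not miners[0].validate(tx):
        return False           # invalid: the entry miner rejects it
    for m in miners:
        m.mempool.add(tx)
    return True

def commit_block(block_txs, miners):
    """Steps (c)-(d): once consensus verifies the PoW solution, every miner
    appends the block and removes its transactions from the Mempool."""
    for m in miners:
        m.chain.append(tuple(block_txs))
        m.mempool -= set(block_txs)
```

Under this sketch, committing a block atomically clears its transactions from every Mempool, which matches the observation used in Section III-B that all miners hold the same transaction set after each propagation round.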
III-B A Virtual Blockchain Queueing Model
Note that in the information propagation process shown in Fig. 1, the propagation time for spreading new transactions across a blockchain network is on a much shorter time-scale (e.g., seconds) than the block mining time (e.g., on the order of 10 minutes). This observation suggests that propagation delays in a blockchain are negligible in practice. Hence, a new valid transaction can be viewed as arriving at the Mempools of all nodes simultaneously. Also, since all nodes maintain the same set of transactions after each propagation process finishes, the Mempools of all nodes can be viewed as a global logical queue that stores all transactions waiting to be processed at multiple servers representing the miners, as illustrated in Fig. 2(a). In this multi-server queueing model, transactions arrive randomly and are removed from the queue when a new block is generated. Again, noting that the update time upon receiving a new block across the network is negligible, the queueing model in Fig. 2(a) can be further simplified to the single-server queue shown in Fig. 2(b), where the single virtual server models the fact that in each time-slot, only one miner wins the PoW competition.
We assume that the system operates in slotted time $t \in \{0, 1, 2, \dots\}$ and that the duration of a time-slot is longer than the average block generation time. Let $\mathcal{M}$ denote the set of miners with $|\mathcal{M}| = M$, and index each miner by $m \in \mathcal{M}$. We let $K$ be the total number of resource types in the system, e.g., electricity, CPU cores, GPU units, FPGA units, etc. Let $r_{mk}[t]$, $k = 1, \dots, K$, denote the amount of resource $k$ allocated at miner $m$ in time-slot $t$. For convenience, we use the vector $\mathbf{r}_m[t] \triangleq [r_{m1}[t], \dots, r_{mK}[t]]^{\top}$ to compactly represent all resources that miner $m$ invests to mine a new block in time-slot $t$, and we let $\mathbf{w}$ be the weight vector of the resource types. Let $T_m[t]$ denote the random amount of time for miner $m$ to mine a new block given resources $\mathbf{r}_m[t]$. Following the mining literature, we assume that $T_m[t]$ follows an exponential distribution with a rate parameter $\mu_m(\mathbf{r}_m[t])$ that depends on the resource allocation $\mathbf{r}_m[t]$. We assume $\mu_m(\cdot)$ is a concave function of $\mathbf{r}_m[t]$, which captures the "diminishing return effect": blockchain systems are fundamentally security-constrained, since a certain level of latency is required in blockchain networks to mitigate double-spending fraud, so doubling the mining resources does not necessarily double the mining speed. In this paper, we propose a novel $\alpha$-parameterized concave function to model $\mu_m(\cdot)$, where $\alpha > 0$ is a system parameter that can be viewed as a "difficulty" parameter of the blockchain network: under the security constraints, the larger $\alpha$ is, the easier it is to mine a new block.
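The paper's specific $\alpha$-parameterized function appears in a display equation not reproduced here; as a purely illustrative stand-in with the same qualitative properties (concave and increasing in the resources, faster mining for larger $\alpha$), one could take $\mu(\mathbf{r}) = \log(1 + \alpha\, \mathbf{w}^{\top} \mathbf{r})$:

```python
import math

def mining_rate(r, w, alpha):
    """Illustrative concave rate mu(r) = log(1 + alpha * w.r).

    NOTE: this functional form is an assumption for illustration only,
    not the paper's exact model. It is concave and increasing in each
    resource, and a larger alpha (an "easier" difficulty) gives a
    faster expected mining rate.
    """
    return math.log(1.0 + alpha * sum(wi * ri for wi, ri in zip(w, r)))
```

Doubling the resources less than doubles this rate, which is exactly the diminishing-return effect described above.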
In blockchain networks, every miner selects a set of transactions from its own Mempool (the global logical queue in our model) to generate a new block, and every miner works on a different puzzle unique to the block it forms. Let $B[t]$ be the random number of blocks mined by the blockchain system in time-slot $t$. Since the first miner to finish wins, the block generation time is $\min_{m \in \mathcal{M}} T_m[t]$; because the minimum of mutually independent random variables $T_m[t] \sim \mathrm{Exp}(\mu_m(\mathbf{r}_m[t]))$ is itself exponentially distributed with rate $\sum_{m \in \mathcal{M}} \mu_m(\mathbf{r}_m[t])$, the average block generation time is $1 / \sum_{m \in \mathcal{M}} \mu_m(\mathbf{r}_m[t])$, and hence the average number of blocks mined in a slot is $\mathbb{E}[B[t]] = \sum_{m \in \mathcal{M}} \mu_m(\mathbf{r}_m[t])$. Suppose each mined block contains a fixed number of transactions, denoted by $b$; then the system can serve $b B[t]$ transactions in time-slot $t$. Meanwhile, we assume that, in each time-slot, new transactions arrive randomly at the blockchain system according to a process $\{A[t]\}$, where $A[t]$ denotes the number of transactions arriving in time-slot $t$. We assume that $\mathbb{E}[A[t]] = \lambda$ and $A[t] \le A_{\max}$, $\forall t$, for some constants $\lambda, A_{\max} > 0$. Let $Q[t]$ represent the queue-length (i.e., the number of transactions) in the queue in time-slot $t$. Then, the queue-length evolution can be written as $Q[t+1] = \max\{Q[t] - b B[t] + A[t],\, 0\}$.
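The short simulation below (with hypothetical parameter values) exercises the queue dynamics $Q[t+1] = \max\{Q[t] - bB[t] + A[t],\, 0\}$. It uses the fact that the minimum of independent $\mathrm{Exp}(\mu_m)$ mining times is $\mathrm{Exp}(\sum_m \mu_m)$, so the number of blocks completed within a unit slot can be drawn as a Poisson count with mean $\sum_m \mu_m$:

```python
import math
import random

def poisson(rng, rate):
    """Draw a Poisson(rate) sample via Knuth's method (fine for small rates)."""
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_queue(T=1000, b=50, a_max=200, mus=(1.0, 1.5, 2.0), seed=0):
    """Simulate Q[t+1] = max(Q[t] - b*B[t] + A[t], 0) for T slots.

    b: transactions per block; a_max: arrival bound (arrivals uniform on
    {0, ..., a_max}); mus: per-miner exponential mining rates. All the
    numerical values here are hypothetical.
    """
    rng = random.Random(seed)
    total_mu = sum(mus)
    q, history = 0, []
    for _ in range(T):
        arrivals = rng.randint(0, a_max)   # bounded random arrivals
        blocks = poisson(rng, total_mu)    # blocks mined this slot
        q = max(q - b * blocks + arrivals, 0)
        history.append(q)
    return history
```

With these hypothetical numbers the mean service rate ($b \sum_m \mu_m = 225$ transactions per slot) exceeds the mean arrival rate ($100$), so the simulated queue remains stable.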
III-C Problem Formulation
In PoW-based blockchains, miners compete to solve the puzzle to mine a new block, and the successful miner wins a fixed reward $R$ plus variable transaction fees $F$. Since $B[t]$ blocks are mined by the blockchain system in time-slot $t$, the total reward provided by the system is $(R + F) B[t]$. We let $C_m(\mathbf{r}_m[t])$ denote the cost of miner $m$ in time-slot $t$ given resource allocation $\mathbf{r}_m[t]$, where the cost function $C_m(\cdot)$ is assumed to be increasing, convex, and differentiable. We note that we assume a miner receives the reward as soon as it mines a new block. In practice, the miner may not receive the reward if the mined block is not ultimately added to the global blockchain due to branching (by design, a blockchain can branch if different blocks are generated simultaneously; blocks not on the main chain may eventually be discarded). Hence, our model can be viewed as a lower bound on the mining cost in actual blockchain networks. The problem of handling the cost of branching is left for future study.
From a social welfare perspective, the blockchain system operator wants the miners to invest fewer total resources in mining new blocks. At the same time, each miner tries to maximize his/her own reward. These two conflicting goals can be captured by the total net expected mining cost in time-slot $t$ (resource costs discounted by the total rewards received by the miners, coupled through $\mathbf{r}_m[t]$ and $B[t]$), which can be computed as $C[t] \triangleq \sum_{m \in \mathcal{M}} C_m(\mathbf{r}_m[t]) - (R + F)\, \mathbb{E}[B[t]]$. Define $\bar{C}$ as the time-average mining cost over all miners, i.e., $\bar{C} \triangleq \limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[C[t]]$.
Apart from minimizing the time-average mining cost, the blockchain system operator also wants to guarantee that the mining resources allocated at all miners are sufficient for transactions to be processed in a timely manner and for the blockchain queue to be stable. In this paper, we adopt the following notion of queueing stability: we say that the blockchain queue is strongly stable if it satisfies $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[Q[t]] < \infty$; that is, stability of the blockchain queue implies that the transactions in the queue cannot accumulate to infinity. Putting the above objective function and constraints together, the mining resource allocation optimization problem (MRA) can be formulated as: minimize $\bar{C}$, subject to the blockchain queue being strongly stable and $\mathbf{0} \le \mathbf{r}_m[t] \le \mathbf{r}_{\max}$ for all $m \in \mathcal{M}$ and all $t$, where the vector $\mathbf{r}_{\max}$ contains the upper bounds on all mining resources and the inequalities between vectors in the constraints are component-wise. In what follows, we develop a dynamic resource allocation algorithm that provably achieves the optimal solution to Problem MRA.
IV Dynamic Mining Resources Allocation Algorithm
In this section, we consider the problem of optimally allocating mining resources to minimize the mining cost while maintaining the stability of the blockchain system. To this end, we propose a dynamic mining resource allocation algorithm in Section IV-A and present a theoretical analysis of the proposed algorithm in Section IV-B.
IV-A Dynamic Mining Resources Allocation Algorithm
We note that Problem MRA is a challenging Markov decision process over a dynamic system with a large solution space. To solve Problem MRA, we first present a useful lemma showing that, under mild assumptions, it suffices to consider only the class of stationary randomized policies, which significantly reduces the size of the solution space to be considered.
Lemma 1 (Solution Space).
Let $\mathbf{r}^*[t]$ be an optimal mining resource allocation vector. If Problem MRA is feasible, and the blockchain system satisfies the boundedness assumptions and the law-of-large-numbers assumption, then, for any $\delta > 0$, there exists a stationary randomized policy that makes mining resource allocation decisions depending only on the system state, while at the same time satisfying $\mathbb{E}[C[t]] \le \bar{C}^* + \delta$ and $b\,\mathbb{E}[B[t]] \ge \lambda - \delta$, where $\bar{C}^*$ denotes the optimal time-average mining cost of Problem MRA.
The proof of Lemma 1 follows from [15, Theorem 4.5], and we omit the details due to space limitations. Based on this insight, in what follows we propose a dynamic mining resource allocation algorithm based on the Lyapunov optimization technique to stabilize the blockchain queue and solve Problem MRA. To this end, we first define the Lyapunov function $L(Q[t]) \triangleq \frac{1}{2} Q^2[t]$, so that the conditional Lyapunov drift can be computed as $\Delta(Q[t]) \triangleq \mathbb{E}[L(Q[t+1]) - L(Q[t]) \mid Q[t]]$. It is known from the classical work of Tassiulas and Ephremides [16] that a policy that greedily minimizes $\Delta(Q[t])$ induces queueing stability of the system. Further, to minimize the mining cost while maintaining queueing stability, we incorporate the mining cost of Problem MRA into the Lyapunov drift. That is, in every time-slot $t$, we greedily minimize the following drift-plus-penalty expression: $\Delta(Q[t]) + V\,\mathbb{E}[C[t] \mid Q[t]]$,
where $V > 0$ is a parameter that, as will be shown later, tunes the tradeoff between queueing delay and the achievable mining cost. Next, we state a basic property of the expression in (6) that will be useful in our later performance analysis:
Lemma 2 (Drift Bound).
For any $V \ge 0$ and any resource allocation decisions, the drift-plus-penalty term in (6) satisfies: $\Delta(Q[t]) + V\,\mathbb{E}[C[t] \mid Q[t]] \le D + V\,\mathbb{E}[C[t] \mid Q[t]] + Q[t]\,\mathbb{E}[A[t] - b B[t] \mid Q[t]]$,
where $D$ is a finite constant that upper-bounds $\frac{1}{2}\,\mathbb{E}[(A[t] - b B[t])^2 \mid Q[t]]$ under the boundedness assumptions.
Using the queue-length evolution in (3) and the definition of the conditional Lyapunov drift given $Q[t]$, we have $\Delta(Q[t]) \le D + Q[t]\,\mathbb{E}[A[t] - b B[t] \mid Q[t]]$, where the inequality follows from $(\max\{x, 0\})^2 \le x^2$ for all $x \in \mathbb{R}$. Adding $V\,\mathbb{E}[C[t] \mid Q[t]]$ on both sides completes the proof. ∎
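For concreteness, the one-step bound can be written out explicitly (using $Q[t]$ for the queue length, $A[t]$ for arrivals, and $bB[t]$ for the transactions served in slot $t$; this is the standard quadratic-Lyapunov computation, not a verbatim reproduction of the paper's display equation):

```latex
L(Q[t+1]) - L(Q[t])
  = \tfrac{1}{2}\big(\max\{Q[t] - bB[t] + A[t],\, 0\}\big)^2 - \tfrac{1}{2}Q^2[t] \\
  \le \tfrac{1}{2}\big(Q[t] - bB[t] + A[t]\big)^2 - \tfrac{1}{2}Q^2[t] \\
  = \tfrac{1}{2}\big(A[t] - bB[t]\big)^2 + Q[t]\big(A[t] - bB[t]\big).
```

Taking conditional expectations given $Q[t]$ and bounding $\tfrac{1}{2}\,\mathbb{E}[(A[t]-bB[t])^2 \mid Q[t]]$ by a constant $D$ (finite under the boundedness assumptions) yields the drift bound of Lemma 2.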
Since the arrival process of new transactions is independent of the resource allocation decisions, minimizing the right-hand side (RHS) of (7) in every time-slot $t$, i.e., the upper bound of the drift-plus-penalty expression, is equivalent to solving the following optimization problem obtained by plugging in Eq. (2) and rearranging terms: $\min_{\{\mathbf{0} \le \mathbf{r}_m[t] \le \mathbf{r}_{\max}\}} \; V \sum_{m \in \mathcal{M}} C_m(\mathbf{r}_m[t]) - \big(V(R+F) + Q[t]\, b\big) \sum_{m \in \mathcal{M}} \mu_m(\mathbf{r}_m[t])$.
This observation motivates the following dynamic mining resources allocation algorithm (DMRA), which is stated in Algorithm 1 as follows.
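Because the per-slot objective decomposes across miners, each miner can solve its own small problem in every slot. The sketch below illustrates one such per-miner step under assumed forms: a linear cost and the illustrative concave rate $\mu(\mathbf{r}) = \log(1 + \alpha\,\mathbf{w}^{\top}\mathbf{r})$ (neither is claimed to be the paper's exact choice), using a crude one-dimensional grid search in place of a proper convex solver:

```python
import math

def dmra_step(q, V, b, w, alpha, costs, r_max, grid=21):
    """One per-miner DMRA slot: choose r in [0, r_max] (component-wise)
    minimizing  V * C(r) - q * b * mu(r).

    Assumed illustrative forms: C(r) = costs . r (linear) and
    mu(r) = log(1 + alpha * w.r) (concave). For simplicity the grid
    scales the full allocation r_max by a factor in [0, 1].
    """
    best_r, best_val = None, float("inf")
    for i in range(grid):
        scale = i / (grid - 1)
        r = [scale * rk for rk in r_max]
        mu = math.log(1.0 + alpha * sum(wk * rk for wk, rk in zip(w, r)))
        cost = sum(ck * rk for ck, rk in zip(costs, r))
        val = V * cost - q * b * mu
        if val < best_val:
            best_val, best_r = val, r
    return best_r
```

The queue-awareness is visible directly: with an empty queue the minimizer is $\mathbf{r} = \mathbf{0}$ (mine nothing, pay nothing), while a large backlog pushes the allocation toward $\mathbf{r}_{\max}$.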
IV-B Performance Analysis
In this section, we will analyze our proposed dynamic mining resources allocation algorithm in Algorithm 1.
IV-B1 Upper Bound for Mining Cost
We first show that the mining cost achieved by our proposed DMRA algorithm is guaranteed to be within $O(1/V)$ distance of the optimum of Problem MRA; the result is stated in Theorem 1:
Let $D$ be defined as in Lemma 2. For any $V > 0$, if Problem MRA is feasible, then there exists an optimal solution to Problem MRA, say $\bar{C}^*$, such that the time-average mining cost incurred by the DMRA algorithm is upper bounded by $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[C[t]] \le \bar{C}^* + \frac{D}{V}$.
Taking expectations of (7) with respect to the distribution of $Q[t]$ and summing over $t \in \{0, 1, \dots, T-1\}$, the drift terms telescope, leaving $\mathbb{E}[L(Q[T])] - \mathbb{E}[L(Q[0])]$. Since $L(Q[T]) \ge 0$ and $L(Q[0])$ is finite, and since DMRA minimizes the RHS of (7) over all policies, including the near-optimal stationary randomized policy of Lemma 1, we can bound the penalty terms by those of that stationary policy. Finally, dividing by $VT$, taking the limit as $T \to \infty$, and plugging in the form of $D$ yields the upper bound. ∎
IV-B2 Bounded Average Queue Length
Next, we consider the delay performance of our proposed DMRA algorithm. To this end, we first introduce the following assumption.
(Slater Condition). There exist a feasible resource allocation and a constant $\epsilon > 0$ such that the expected service rate exceeds the expected arrival rate by at least $\epsilon$, i.e., $b\,\mathbb{E}[B[t]] \ge \lambda + \epsilon$.
If Problem MRA is feasible and Assumption 1 holds, then our proposed dynamic mining resource allocation algorithm stabilizes the blockchain queue, i.e., the queue is strongly stable, and the time-average queue-length satisfies $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[Q[t]] \le \frac{D + V(\bar{C}^* - C_{\min})}{\epsilon}$, where $C_{\min}$ is the minimum achievable mining cost. Hence, the time-average queue-length is $O(V)$.
IV-B3 Variable $V[t]$ for Optimality
Instead of using a fixed parameter $V$, we can gradually increase its value $V[t]$ over time while maintaining mean-rate stability of the blockchain queue. In this way, we can eliminate the $O(1/V)$ gap in Theorem 1.
If a non-decreasing, sublinearly growing sequence $V[t] \to \infty$ is used in the drift expression of (9), then the time-average mining cost converges to the optimum $\bar{C}^*$, i.e., $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[C[t]] \le \bar{C}^*$.
Since the bound in (9) holds for every $t$, replacing $V$ with $V[t]$ and taking expectations with respect to $Q[t]$ gives a per-slot drift-plus-penalty bound. Dividing both sides by $V[t]$ and summing over $t \in \{0, \dots, T-1\}$, the drift terms telescope up to bounded remainders. Since $V[t]$ is non-decreasing and tends to infinity, the accumulated constant terms grow sublinearly in $T$. Dividing both sides by $T$ and taking the limsup as $T \to \infty$, we obtain $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[C[t]] \le \bar{C}^*$, which concludes the proof. ∎
V Numerical Results
In this section, we conduct numerical simulations to evaluate the performance of our proposed dynamic mining resources allocation algorithm for PoW-based blockchain networks.
1) Simulation Settings: In our experiments, we consider a blockchain network with $M$ miners. Each miner generates transactions and mines new blocks with one CPU core. Transaction arrivals are i.i.d. across time and uniformly distributed over a bounded interval. We consider two types of mining resources. The first is CPU utilization, which ranges from 40% to 60% and is managed by the cpulimit tool: higher CPU utilization means a higher hashing rate and thus a higher probability that the miner successfully mines a block, but it also results in higher mining energy cost. The second type of resource is electricity. In the simulations, we adopt a linear cost model with constant coefficients. When computing the reward of each block, we use the fixed block reward $R$ and ignore the random transaction fees $F$ for simplicity. We compare our proposed DMRA algorithm with two baselines. The first baseline, denoted MaxMining, makes use of all available resources to mine new blocks at all times. The second baseline, denoted RandMining, randomly allocates mining resources to mine a block. Unless otherwise mentioned, we use the following default parameter settings: the length of each time-slot is 1 minute, the total duration of each simulation is 200 minutes, the tradeoff parameter $V$ is set to a fixed default value, and the weights for CPU utilization and electricity are set to 3 and 1, respectively.
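A toy version of this experimental comparison can be sketched as follows. All numerical values, the Bernoulli service approximation, and the simple threshold rule standing in for Algorithm 1 are assumptions of this sketch, not the paper's actual implementation:

```python
import math
import random

def run_policy(policy, T=200, b=40, a_max=40,
               w=1.0, alpha=1.0, cost_coef=3.0, R=5.0, seed=1):
    """Run a mining policy on the single-queue model; return
    (time-average cost, time-average queue length).

    policy(q) -> resource level r in [0, 1]. Per-slot cost is the energy
    cost cost_coef * r offset by the block reward R per mined block.
    """
    rng = random.Random(seed)
    q, tot_cost, tot_q = 0, 0.0, 0
    for _ in range(T):
        r = policy(q)
        mu = math.log(1.0 + alpha * w * r)
        blocks = 1 if rng.random() < min(mu, 1.0) else 0  # coarse Bernoulli service
        tot_cost += cost_coef * r - R * blocks
        q = max(q - b * blocks + rng.randint(0, a_max), 0)
        tot_q += q
    return tot_cost / T, tot_q / T

def max_mining(q):
    return 1.0                      # always use full resources

def rand_mining(q, _rng=random.Random(2)):
    return _rng.random()            # random resource level each slot

def dmra_like(q, V=10.0):
    # Queue-aware rule: invest more as the backlog grows (a crude
    # stand-in for the per-slot minimization in Algorithm 1).
    return min(1.0, q / (10.0 * V))
```

Running all three policies and comparing average cost and average backlog reproduces the qualitative behavior discussed in the performance analysis: the queue-aware rule spends less than always-on mining at comparable backlog, while purely random allocation controls neither quantity.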
2) Performance Analysis: Figs. 3 and 4 show the mining cost and delay performance achieved by our proposed DMRA method and the baselines. We observe that DMRA achieves delay performance similar to MaxMining while significantly reducing the mining cost. Although RandMining reduces the mining cost to some extent, it cannot keep the blockchain queue stable since it allocates mining resources randomly. Overall, our proposed DMRA method significantly reduces the mining cost while keeping the blockchain queue stable. This is because, unlike the baselines, our algorithm dynamically allocates mining resources according to the number of transactions in the blockchain queue.
3) Impact of the Tradeoff Parameter $V$: Figs. 5 and 6 show the tradeoff between time-average mining cost and time-average queue-length under different values of the tradeoff parameter $V$. We observe that a larger $V$ leads to better mining cost performance, but at the cost of a larger queueing delay, and vice versa. Thus, by tuning the value of $V$, we can control the tradeoff between mining cost and delay in blockchain networks.
VI Conclusion
In this paper, we proposed a queueing-based analytical model to study the mining resource allocation problem in PoW-based blockchain networks. Our queueing model assumes no knowledge of the arrival probability distribution and is agnostic to any priority mechanism. Leveraging Lyapunov optimization techniques, we proposed a simple dynamic mining resource allocation algorithm with a parameter $V$ that tunes the trade-off between mining energy cost and queueing delay. We showed that the proposed algorithm comes within $O(1/V)$ distance of the optimum, with a worst-case delay scaling as $O(V)$. Our simulation results also demonstrated the effectiveness of the proposed algorithm in reducing the mining cost of blockchain networks.
[1] S. Nakamoto, "Bitcoin: A peer-to-peer electronic cash system," 2008.
[2] R. Kamath, "Food traceability on blockchain: Walmart's pork and mango pilots with IBM," The Journal of the British Blockchain Association, vol. 1, no. 1, p. 3712, 2018.
[3] K. J. O'Dwyer and D. Malone, "Bitcoin mining and its energy footprint," 2014.
[4] I. Bentov, R. Pass, and E. Shi, "Snow White: Provably secure proofs of stake," IACR Cryptology ePrint Archive, vol. 2016, p. 919, 2016.
[5] F. Zhang, I. Eyal, R. Escriva, A. Juels, and R. Van Renesse, "REM: Resource-efficient mining for blockchains," IACR Cryptology ePrint Archive, vol. 2017, p. 179, 2017.
[6] M. Ahmed and K. Kostiainen, "Don't mine, wait in line: Fair and efficient blockchain consensus with robust round robin," arXiv preprint arXiv:1804.07391, 2018.
[7] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun, "On the security and performance of proof of work blockchains," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 3-16.
[8] S. Kasahara and J. Kawahara, "Effect of Bitcoin fee on transaction-confirmation process," arXiv preprint arXiv:1604.00103, 2016.
[9] S. Ricci, E. Ferreira, D. S. Menasche, A. Ziviani, J. E. Souza, and A. B. Vieira, "Learning blockchain delays: A queueing theory approach," ACM SIGMETRICS Performance Evaluation Review, vol. 46, no. 3, pp. 122-125, 2019.
[10] D. Koops, "Predicting the confirmation time of Bitcoin transactions," arXiv preprint arXiv:1809.10596, 2018.
[11] Y. Jiao, P. Wang, D. Niyato, and Z. Xiong, "Social welfare maximization auction in edge computing resource allocation for mobile blockchain," in ICC. IEEE, 2018, pp. 1-6.
[12] Z. Xiong, S. Feng, D. Niyato, P. Wang, and Z. Han, "Optimal pricing-based edge computing resource management in mobile blockchain," in ICC. IEEE, 2018, pp. 1-6.
[13] J. Li, Z. Zhou, J. Wu, J. Li, S. Mumtaz, X. Lin, H. Gacanin, and S. Alotaibi, "Decentralized on-demand energy supply for blockchain in internet of things: A microgrids approach," IEEE Transactions on Computational Social Systems, 2019.
[14] N. Dimitri, "Bitcoin mining as a contest," Ledger, vol. 2, pp. 31-37, 2017.
[15] M. J. Neely, "Stochastic network optimization with application to communication and queueing systems," Synthesis Lectures on Communication Networks, vol. 3, no. 1, pp. 1-211, 2010.
[16] L. Tassiulas and A. Ephremides, "Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks," IEEE Transactions on Automatic Control, vol. 37, no. 12, pp. 1936-1948, Dec. 1992.