Compact Merkle Multiproofs

02/18/2020 · by Lum Ramabaja, et al.

The compact Merkle multiproof is a new and significantly more memory-efficient way to generate and verify sparse Merkle multiproofs. A standard sparse Merkle multiproof requires storing an index for every non-leaf hash in the multiproof. The compact Merkle multiproof, on the other hand, requires only k leaf indices, where k is the number of elements used to create the multiproof. This significantly reduces the size of multiproofs, especially for larger Merkle trees.


I Introduction

In this paper we introduce the compact Merkle multiproof, a more efficient way to compute and transmit sparse Merkle multiproofs [1]. To understand how the compact Merkle multiproof works, we first have to understand how Merkle trees function and what sparse Merkle multiproofs are. In this introduction we therefore briefly explain both concepts before moving on to the compact Merkle multiproof algorithm.

I-A Merkle Trees

A Merkle tree is a binary tree in which every leaf node (i.e. every element of the Merkle tree) is associated with a cryptographic hash, and every non-leaf node is associated with a cryptographic hash formed from the hashes of its child nodes (as shown in figure 1).

Fig. 1: Depiction of a Merkle tree. The leaf nodes are the elements of the Merkle tree; every non-leaf node holds the hash formed from its two children.
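To make the structure concrete, the following is a minimal Python sketch of how such a tree can be built. The paper does not prescribe a hash function or a padding rule, so SHA-256 and a power-of-two number of leaves are assumptions made here for simplicity; the helper names are ours.

```python
import hashlib

def H(data: bytes) -> bytes:
    """Node hash function (SHA-256 is an assumption, not mandated by the paper)."""
    return hashlib.sha256(data).digest()

def build_layers(leaves: list[bytes]) -> list[list[bytes]]:
    """Build a Merkle tree bottom-up and return it as a list of layers.
    Layer 0 holds the leaf hashes; the last layer holds only the Merkle root.
    For simplicity the number of leaves is assumed to be a power of two."""
    layer = [H(x) for x in leaves]
    layers = [layer]
    while len(layer) > 1:
        # Every non-leaf node is the hash of the concatenation of its two children.
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layers
```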

The Merkle tree is a data structure that allows for bandwidth-efficient and secure verification of elements in a list. It is used to verify the presence of elements between computers without having to send the whole list of elements to another computer. Merkle trees have found a variety of use cases: they are used in peer-to-peer systems to verify the integrity of data blocks [2], for batch signing of time synchronisation requests [3], for transaction verification in blockchain systems [4], and more.

To verify that an element is present in the Merkle tree, a series of hashes is provided; this series of hashes is also known as a Merkle proof. By sequentially hashing an element's hash with the provided Merkle proof, one can recreate the Merkle root of the Merkle tree (as shown in figure 2). If the element is present in the list and the Merkle proof is correct, the end result of this sequential hashing will be the Merkle root. The recipient of a Merkle proof therefore already has to have a copy of the Merkle root before verifying the integrity of the proof. As an example, by periodically storing Merkle roots, a verifier is able to prove that some data is still unaltered.

Fig. 2: Depiction of a Merkle tree, with the Merkle proof (shown in orange) for a given element.
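Continuing the sketch above, a single Merkle proof can be generated and verified as follows. The function names are ours and only illustrate the sequential hashing described in the text.

```python
def merkle_proof(layers: list[list[bytes]], index: int) -> list[bytes]:
    """Collect, on every layer, the sibling hash of the element (or of its ancestor)."""
    proof = []
    for layer in layers[:-1]:
        proof.append(layer[index ^ 1])  # index ^ 1 is the neighbour under the same parent
        index //= 2                     # move to the parent's index one layer up
    return proof

def verify_proof(element: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    """Sequentially hash the element's hash with the proof hashes and compare with the root."""
    h = H(element)
    for sibling in proof:
        # The node with the even index is always the left child.
        h = H(sibling + h) if index % 2 else H(h + sibling)
        index //= 2
    return h == root
```

For an eight-leaf tree this yields a proof of three hashes, one per layer below the root.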

I-B Sparse Merkle Multiproofs

A sparse Merkle multiproof (not to be confused with a sparse Merkle tree) is a more efficient Merkle proof for when it is necessary to prove the presence of multiple elements in the same Merkle tree [1]. Let's take figure 3 as an example to better understand what this means. To prove that three different elements are present in a Merkle tree, we could compute three separate Merkle proofs and verify the presence of each element separately. In the provided example, a node would need nine hashes in total to verify the presence of three elements.

Fig. 3: Three Merkle proofs for three different elements.

By using a sparse Merkle multiproof however, we can reduce the number of hashes significantly. When overlapping the three Merkle proofs from figure 3 (as shown in figure 4), we can see that many of the hashes can in fact be recreated from previously computed hashes. Instead of using three separate Merkle proofs consisting of nine hashes in total, one can prove the presence of the three elements with only four hashes (as shown in figure 5). This simple trick is known as a sparse Merkle multiproof.

Fig. 4: Three overlapped Merkle proofs.
Fig. 5: An illustration of a Merkle multiproof.
Fig. 6: Table taken from Jim McDonald's wonderful article "Understanding sparse Merkle multiproofs" [1], showing the space savings of Merkle pollards and sparse Merkle multiproofs over simple Merkle proofs.

Using sparse Merkle multiproofs instead of standard Merkle proofs can yield enormous space savings in certain scenarios, as shown in figure 6. There is however one important problem with current sparse Merkle multiproof implementations that we think needs addressing: today's implementations require additional data besides the multiproof [1]. Today's sparse Merkle multiproofs require storing the hash index for every non-leaf node. In other words, for every hash in a multiproof, we need an index to figure out the order of computations needed to reconstruct a given Merkle root. One could argue that the necessity for additional data defeats the purpose of using a sparse Merkle multiproof, or at least significantly limits its potential. This precise issue is what the compact Merkle multiproof solves.

II The Compact Merkle Multiproof

The compact Merkle multiproof is a special technique to generate and verify sparse Merkle multiproofs without the need for non-leaf index information. A standard sparse Merkle multiproof requires storing an index for every non-leaf hash in the multiproof; the compact Merkle multiproof, on the other hand, requires only k leaf indices (or, in the case of the Bloom tree [5], Bloom filter chunks), where k is the number of elements used for creating a multiproof. This significantly reduces the size of multiproofs, especially for larger Merkle trees.
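The difference in what has to be transmitted can be illustrated with two simple record types. This is only an illustration; the type and field names below are ours and not taken from a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class StandardSparseMultiproof:
    """Standard sparse Merkle multiproof: every proof hash carries position
    information so the verifier knows where it sits in the tree."""
    hashes: list[bytes]
    hash_indices: list[int]      # one index per hash in `hashes`

@dataclass
class CompactMultiproof:
    """Compact Merkle multiproof: only the k leaf indices of the proven
    elements are stored; the positions of all proof hashes are re-derived."""
    hashes: list[bytes]
    leaf_indices: list[int]      # k indices, independent of the number of proof hashes
```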

In the next subsections we will explain how to generate and verify a compact Merkle multiproof for a Merkle tree. It is important to note that the compact Merkle multiproof technique works with other kinds of Merkle trees as well, such as Bloom trees, sparse Merkle trees, sorted Merkle trees, etc. We will take figures 7 and 8 as references to better understand how compact Merkle multiproofs are generated and verified.

Fig. 7: An illustration of the compact Merkle multiproof generation procedure. Orange boxes represent the hashes of the Merkle proof. Blue boxes represent the tracked indices in every Merkle layer. In each iteration, we append hashes to the multiproof until the tree root is reached. For a more detailed description of the procedure, refer to subsection II-A1.
Fig. 8: An illustration of the compact Merkle multiproof verification procedure. Orange boxes represent the hashes of the Merkle proof. Blue boxes represent the tracked indices in every Merkle layer. In each iteration, we hash the element hashes with one another or with hashes taken from the multiproof, until no hashes are left in the multiproof. For a more detailed description of the procedure, refer to subsection II-A2.

II-A Compact Merkle Multiproof for Merkle Trees

II-A1 Compact Merkle Multiproof Generation

Every leaf node has an index from 0 to n−1, where n is the total number of leaves in the Merkle tree. We first determine the index of every element that takes part in the multiproof; in the case of our example in figure 7, these are the leaf indices shown in blue. Let's call this list of indices the index array, and let's call the "Merkle layer" on which we operate the current layer (at the beginning, the current layer simply consists of the leaf nodes of the tree). After determining these indices, we run the following steps recursively until termination:

  1. For each of the indices in the index array, take the index of its immediate neighbor in the current layer, and store the given element index and the neighboring index as a pair of indices (an "immediate neighbor" is the index right next to a target index that shares the same branch). Let's call the resulting list of pairs the pair array. In the first iteration of our example in figure 7, the pair array contains one such pair for every leaf index shown in blue.

  2. Remove any duplicate pairs from the pair array (two indices that share the same branch produce the same pair).

  3. Take the set difference between the indices in the pair array and the indices in the index array, and append the hash values at the resulting indices of the current layer to the multiproof. In the first iteration of our example in figure 7, these are the indices of the orange boxes in the leaf layer, whose hashes are appended to the multiproof.

  4. We take all the even indices from the pair array and divide them by two. The newly computed numbers become the index array for the next iteration.

  5. Go one layer up the tree and make that layer the new current layer. Each layer in the tree is indexed from 0 to m−1, where m is the size of that layer.

  6. Repeat the above steps with the newly assigned index array and current layer until you reach the root of the tree.

At the end, the proof must contain the leaf indices of the elements used for the multiproof, as well as the gathered hashes of the multiproof. The whole procedure is sketched in the code below.
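The following is a minimal Python sketch of the generation procedure, reusing the H and build_layers helpers from section I-A. The variable names mirror the steps above but are our own choice, and a power-of-two leaf count is again assumed.

```python
def generate_compact_multiproof(layers: list[list[bytes]], leaf_indices: list[int]):
    """Return (sorted leaf indices, multiproof hashes) for the given elements."""
    A = sorted(leaf_indices)        # indices tracked on the current layer
    multiproof: list[bytes] = []
    for layer in layers[:-1]:       # iterate from the leaf layer up to just below the root
        # Step 1: pair every tracked index with its immediate neighbour (i ^ 1).
        pairs = [(min(i, i ^ 1), max(i, i ^ 1)) for i in A]
        # Step 2: remove duplicate pairs (two tracked indices under the same branch).
        pairs = sorted(set(pairs))
        # Step 3: indices that appear in the pairs but not among the tracked indices
        # must be supplied by the prover, so their hashes go into the multiproof.
        needed = sorted({i for pair in pairs for i in pair} - set(A))
        multiproof.extend(layer[i] for i in needed)
        # Steps 4 and 5: the even index of every pair, halved, is the index that
        # we keep tracking one layer further up the tree.
        A = sorted(pair[0] // 2 for pair in pairs)
    return sorted(leaf_indices), multiproof
```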

II-A2 Compact Merkle Multiproof Verification

To verify a compact Merkle multiproof, we require the leaf indices of the elements used for the multiproof (again called the index array), the corresponding element hashes, and the hashes of the multiproof itself. In our example in figure 8, the index array contains the leaf indices shown in blue. We first sort the element hashes in increasing order of their leaf indices; let's call the result the sorted hash array. To verify a generated multiproof, we run the following steps recursively until termination:

  1. For each of the indices in the index array, take the index of its immediate neighbor, and store the given element index and the neighboring index as a pair of indices (an "immediate neighbor" is the leaf right next to a target leaf that shares the same branch). Let's again call the resulting list of pairs the pair array.

  2. The pair array always has the same size as the sorted hash array. After computing the pair array, we check for duplicate index pairs inside it. If two pairs are identical, we hash the two corresponding values in the sorted hash array with one another. If an index pair has no duplicate, we hash the corresponding value in the sorted hash array with the first hash inside the multiproof. Whenever a hash from the multiproof is used, we remove it from the multiproof. All the newly generated hashes form the sorted hash array for the next iteration.

  3. We take the even index of every unique pair and divide it by two. The newly computed numbers become the index array for the next iteration.

  4. Repeat the above steps until no hashes are left in the multiproof and the sorted hash array has been reduced to a single value.

At the end of this procedure, the sorted hash array holds a single value: the Merkle root of the tree. If this final value is not equal to the stored Merkle root, the verifier knows that the proof is invalid. The verification procedure is sketched in the code below.
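A matching verification sketch is shown below, again reusing the H helper from section I-A. The left/right ordering when hashing is an assumption kept consistent with the build_layers sketch, and the usage example at the end is hypothetical.

```python
def verify_compact_multiproof(element_hashes: list[bytes], leaf_indices: list[int],
                              multiproof: list[bytes], root: bytes) -> bool:
    """`element_hashes[i]` must be the hash of the element at `leaf_indices[i]`."""
    # Sort the element hashes in increasing order of their leaf indices.
    order = sorted(range(len(leaf_indices)), key=lambda i: leaf_indices[i])
    A = [leaf_indices[i] for i in order]
    hashes = [element_hashes[i] for i in order]
    proof = list(multiproof)

    while len(hashes) > 1 or proof:
        next_A, next_hashes = [], []
        j = 0
        while j < len(A):
            left = A[j] - (A[j] % 2)                   # even index of the pair
            if j + 1 < len(A) and A[j] == left and A[j + 1] == left + 1:
                # Duplicate pair: both children are already known, hash them together.
                parent = H(hashes[j] + hashes[j + 1])
                j += 2
            else:
                # The sibling hash comes from the multiproof and is then consumed.
                sibling = proof.pop(0)
                parent = H(hashes[j] + sibling) if A[j] % 2 == 0 else H(sibling + hashes[j])
                j += 1
            next_A.append(left // 2)
            next_hashes.append(parent)
        A, hashes = next_A, next_hashes

    return hashes[0] == root

# Hypothetical usage: prove and verify two leaves of an eight-leaf tree.
leaves = [bytes([i]) for i in range(8)]
layers = build_layers(leaves)
indices, proof = generate_compact_multiproof(layers, [1, 6])
assert verify_compact_multiproof([H(leaves[1]), H(leaves[6])], indices, proof, layers[-1][0])
```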

III Conclusion

We showed a new, more memory-efficient way to compute Merkle multiproofs. The compact Merkle multiproof can generate and verify sparse Merkle multiproofs without the need for non-leaf index information. A standard sparse Merkle multiproof requires storing an index for every non-leaf hash in the multiproof; the compact Merkle multiproof, on the other hand, requires only k leaf indices, where k is the number of elements used for creating a multiproof. This significantly reduces the size of multiproofs, especially for larger Merkle trees. The compact Merkle multiproof technique can be applied to various Merkle tree variants, such as the Bloom tree, the sparse Merkle tree, etc.

IV Future Work

We have an implementation of the compact Merkle multiproof for our Bloom tree package (which can be found on the Bloom Lab's GitHub page). In future work, we are going to show how one can combine Bloom trees that use compact Merkle multiproofs with distributed Bloom filters [6] to create an "interactive Bloom proof". We will show how the interactive Bloom proof can be used to build a new kind of blockchain architecture that requires an order of magnitude less storage, while still allowing nodes to independently verify transaction validity. The efficiency of the compact Merkle multiproof procedure will play an integral part in this setup.

References