Fraud Proofs: Maximising Light Client Security and Scaling Blockchains with Dishonest Majorities

Light clients, also known as Simple Payment Verification (SPV) clients, are nodes which only download a small portion of the data in a blockchain, and use indirect means to verify that a given chain is valid. Typically, instead of validating block data, they assume that the chain favoured by the blockchain's consensus algorithm only contains valid blocks, and that the majority of block producers are honest. By allowing such clients to receive fraud proofs generated by fully validating nodes that show that a block violates the protocol rules, and combining this with probabilistic sampling techniques to verify that all of the data in a block actually is available to be downloaded, we can eliminate the honest-majority assumption, and instead make much weaker assumptions about a minimum number of honest nodes that rebroadcast data. Fraud and data availability proofs are key to enabling on-chain scaling of blockchains (e.g. via sharding or bigger blocks) while maintaining a strong assurance that on-chain data is available and valid. We present, implement, and evaluate a novel fraud and data availability proof system.

1 Introduction and Motivation

As cryptocurrencies and smart contract platforms have gained wider adoption, the scalability limitations of existing blockchains have been observed in practice. Popular services have stopped accepting Bitcoin [26] payments due to transactions fees rising as high as $20 [28, 19], and Ethereum’s [6] popular CryptoKitties smart contract caused the pending transactions backlog to increase six-fold [40]. Users pay higher fees as they compete to get their transactions included on the blockchain, due to on-chain space being limited, e.g., by Bitcoin’s block size limit [2] or Ethereum’s block gas limit [41].

While increasing on-chain capacity limits would yield higher transaction throughput, there are concerns that this would decrease decentralisation and security: it would increase the resources required to fully download and validate the blockchain, so fewer users would be able to afford to run full nodes that independently validate the blockchain, and users would instead run light clients that assume that the chain favoured by the blockchain’s consensus algorithm abides by the protocol rules [23]. Light clients operate well under normal circumstances, but have weaker assurances when the majority of the consensus (e.g., miners or block producers) is dishonest. For example, whereas a dishonest majority in the Bitcoin or Ethereum network can at present only censor, reverse or reorder transactions, if all clients were using light nodes, a majority of the consensus would be able to collude to generate blocks containing transactions that create money out of thin air, and light clients would not be able to detect this. Full nodes, on the other hand, would reject those invalid blocks immediately.

As a result, various scalability efforts have focused on off-chain scaling techniques such as payment channels [31], where participants sign transactions off-blockchain, and settle the final balance on-chain. Payment channels have also been generalised to state channels [25]. However, as opening and settling channels involves on-chain transactions, on-chain scaling is still necessary for widespread adoption of payment and state channels. Suppose a setting where all users used channels, and channels only needed to be opened once and maintained with on-chain transactions once per year per user. To support a userbase equal in size to Facebook’s (2.2 billion users [27]), one would need 2.2 billion transactions per year, or roughly 70 transactions per second, significantly higher than supported by the Bitcoin or Ethereum blockchains [10, 44]. This does not take into account usages that require “going on-chain” more frequently, users requiring multiple channels, or the possibility of attacks on channels requiring more transactions to process.

In this paper, we decrease the on-chain capacity vs. security trade-off by making it possible for light clients to receive and verify fraud proofs of invalid blocks from full nodes, so that they too can reject them, assuming that there is at least one honest full node willing to generate fraud proofs to be propagated within a maximum network delay. We also design a data availability proof system, a necessary complement to fraud proofs, so that light clients have assurance that the block data required for full nodes to generate fraud proofs from is available, given that there is a minimum number of honest light clients to reconstruct missing data from blocks. We implement and evaluate the security and efficiency of our overall design.

Our work also plays a key role in efforts to scale blockchains with sharding [7, 1, 20], as in a sharded system no single node in the network is expected to download and validate the state of all shards, and thus fraud proofs are necessary to detect invalid blocks from malicious shards.

2 Background

2.1 Blockchain Models

Briefly, the data structure of a blockchain consists of (literally) a chain of blocks. Each block contains two components: a header and a list of transactions. In addition to other metadata, the header stores at minimum the hash of the previous block (thus enabling the chain property), and the root of the Merkle tree that consists of all transactions in the block.

Blockchain networks have a consensus algorithm [3] to determine which chain should be favoured in the event of a fork, e.g., if proof-of-work [26] is used, then the chain with the most accumulated work is favoured. They also have a set of protocol rules that dictate which transactions are valid, and thus blocks that contain invalid transactions will never be favoured by the consensus algorithm and should in fact always be rejected.

Full nodes are nodes which download block headers as well as the list of transactions, verifying that the transactions are valid according to some protocol rules. Light clients only download block headers, and assume that the list of transactions is valid according to the protocol rules. Light clients verify blocks against the consensus rules, but not the protocol rules, and thus assume that the consensus is honest. Light clients can receive Merkle proofs from full nodes that a specific transaction or state object is included in a block header.

There are two major types of blockchain transaction models: Unspent Transaction Output (UTXO)-based, and account-based. Transactions in UTXO-based blockchains (e.g., Bitcoin) contain references to previous transactions whose coins they wish to ‘spend’. As a single transaction may send coins to multiple addresses, a transaction has many ‘outputs’, and thus new transactions contain references to these specific outputs. Each output can only be spent once.

On the other hand, account-based blockchains (e.g., Ethereum), are somewhat simpler to work with (though sometimes more complex to apply parallelisation techniques to), as each transaction simply specifies a balance transfer from one address to another, without reference to previous transactions. In Ethereum, the block header also contains a root to a Merkle tree containing the state, which is the ‘current’ information that is required to verify the next block; in Ethereum this consists of the balance, code and permanent storage of all of the accounts and contracts in the system.

2.2 Merkle Trees and Sparse Merkle Trees

A Merkle tree [24] is a binary tree where every non-leaf node is labelled with the cryptographic hash of the concatenation of its children nodes. The root of a Merkle tree is thus a commitment to all of the items in its leaf nodes. This allows for Merkle proofs, which given some Merkle root, are proofs that a leaf is a part of the tree committed to by the root. A Merkle proof for some leaf consists of all of the ancestor and ancestor’s sibling intermediate nodes for that leaf, up to the root of the tree, thus forming a sub-tree whose Merkle root can be recomputed to verify that the Merkle proof is valid. The size and verification time of a Merkle proof for a tree with n leaves is O(log n), as it is a balanced binary tree.
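The mechanics above can be made concrete with a short sketch. The following minimal Python illustration (function names are ours, not a reference implementation) builds a Merkle root over a list of leaves, produces the O(log n) proof for one leaf, and verifies it by recomputing the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of leaf byte-strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Return the sibling hashes along the path from leaf `index` to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        proof.append(level[index ^ 1])   # sibling shares the same parent
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_merkle_proof(leaf, proof, root, index):
    """Recompute the root from the leaf and its O(log n) sibling hashes."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

A proof for a tree of n leaves contains about log2(n) sibling hashes, which is what makes Merkle proofs practical for light clients.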

A sparse Merkle tree [21, 12] is a Merkle tree with n leaves where n is extremely large (e.g., n = 2^256), but where almost all of the nodes have the same default value (e.g., 0). If k nodes are non-zero, then at each intermediate level of the tree there will be a maximum of k non-zero values, and all other values will be the same default value for that level: 0 at the bottom level, hash(0, 0) at the first intermediate level, hash(hash(0, 0), hash(0, 0)) at the second intermediate level, and so on. Hence, despite the exponentially large number of nodes in the tree, the root of the tree can be calculated in O(k log n) time. A sparse Merkle tree allows for commitments to key-value maps, where values can be updated, inserted or deleted trivially in O(log n) time. Merkle proofs of specific key-value entries are of size O(log n) if constructed naively, but can be compressed, as intermediate nodes whose sibling has the default value do not need to be explicitly shown.
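The trick of computing the root without touching 2^256 nodes can be sketched as follows. This is an illustrative Python sketch under our own naming (a real implementation would also produce and compress proofs): default subtree hashes are precomputed per level, so only the O(k) non-default nodes are hashed at each of the log n levels.

```python
import hashlib

DEPTH = 256  # tree over a 2**256 key space

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# default[d] = hash of an entirely-default subtree of height d
default = [h(b"")]
for _ in range(DEPTH):
    default.append(h(default[-1] + default[-1]))

def sparse_root(entries):
    """Root of a sparse Merkle tree with k non-default leaves (a dict of
    int key -> bytes value), computed in O(k * DEPTH) time."""
    level = {key: h(value) for key, value in entries.items()}
    for d in range(DEPTH):
        parents = {}
        for key, node in level.items():
            sibling = level.get(key ^ 1, default[d])  # default hash if absent
            if key % 2 == 0:
                parents[key // 2] = h(node + sibling)
            else:
                parents[key // 2] = h(sibling + node)
        level = parents
    return level.get(0, default[DEPTH])
```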

Systems such as Ripple and Ethereum at present use Patricia trees instead of sparse Merkle trees [41, 35]; we use sparse Merkle trees in this paper because of their greater simplicity.

2.3 Erasure Codes and Reed-Solomon Codes

Erasure codes are error-correcting codes [14, 30] working under the assumption of bit erasures rather than bit errors; in particular, the user knows which bits have to be reconstructed. Error-correcting codes transform a message of length k into a longer message of length n > k such that the original message can be recovered from a subset of the n symbols.

Reed-Solomon (RS) codes [39] have various applications and are among the most studied error-correcting codes. A Reed-Solomon code encodes data by treating a length-k message as a list of elements x_0, x_1, ..., x_{k-1} in some finite field (prime fields and binary fields are most frequently used), interpolating the polynomial P such that P(i) = x_i for all 0 <= i < k, and then extending the list with x_k, ..., x_{n-1}, where x_i = P(i). The polynomial P can be recovered from any k symbols from this longer list using techniques such as Lagrange interpolation, or more optimised and advanced techniques involving tools such as Fast Fourier transforms, and knowing P one can then recover the original message. Reed-Solomon codes can detect and correct any combination of up to (n - k)/2 errors, or combinations of errors and erasures. RS codes have been generalised to multidimensional codes [36, 13] in various ways [37, 42, 34]. In a d-dimensional code, the message is encoded into a square or cube or hypercube of size k x k x ... x k, and a multidimensional polynomial P(i_1, i_2, ..., i_d) is interpolated where P(i_1, ..., i_d) = x_{i_1,...,i_d}, and this polynomial is extended to a larger square or cube or hypercube.

3 Assumptions and Threat Model

We present the network and threat model under which our fraud proofs (Section 4) and data availability proofs (Section 5) apply.

3.1 Preliminaries

We present some primitives that we use in the rest of the paper.

  • hash(x) is a cryptographically secure hash function that returns the digest of x (e.g., SHA-256).

  • root(L) returns the Merkle root for a list of items L.

  • {e → r} denotes a Merkle proof that an element e is a member of the Merkle tree committed by root r.

  • VerifyMerkleProof(e, {e → r}, r, n, i) returns true if the Merkle proof {e → r} is valid, otherwise false, where n additionally denotes the total number of elements in the underlying tree and i is the index of e in the tree. This verifies that e is at index i, as well as its membership.

  • {(k, v) → r} denotes a Merkle proof that a key-value pair (k, v) is a member of the sparse Merkle tree committed by root r.

3.2 Blockchain Model

We assume a generalised blockchain architecture, where the blockchain consists of a hash-based chain of block headers H = (h_0, h_1, ...). Each block header h_i contains a Merkle root txRoot_i of a list of transactions T_i, such that root(T_i) = txRoot_i. Given a node that downloads a list of transactions N_i from the network, a block header h_i is considered to be valid if (i) root(N_i) = txRoot_i and (ii) given some validity function

valid(T, S) → {true, false},

where T is a list of transactions and S is the state of the blockchain, then valid(T_i, S_{i-1}) must return true, where S_i is the state of the blockchain after applying all of the transactions in T_i. We assume that valid(T, S) takes O(n) time to execute, where n is the number of transactions in T.

In terms of transactions, we assume that given a list of transactions T_i = (t_i^0, t_i^1, ..., t_i^{n-1}), where t_i^x denotes the transaction at index x in block i, there exists a state transition function transition that returns the post-state S' of executing a transaction t on a particular pre-state S, or an error if the transition is illegal:

transition(S, t) → S' or err

Thus given the intermediate post-states after applying every transaction one at a time, transition(S_i^x, t_i^{x+1}) = S_i^{x+1}, and the base case transition(S_{i-1}, t_i^0) = S_i^0, then S_i^x denotes the intermediate state of the blockchain at block i after applying transactions t_i^0, t_i^1, ..., t_i^x.

Therefore, S_i = S_i^{n-1}.
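As a toy instance of this model, the following Python sketch (names are ours) implements transition and valid for a simple account-balance state, returning None in place of err:

```python
def transition(state, tx):
    """Apply one balance-transfer transaction to pre-state `state`
    (a dict address -> balance); return the post-state, or None (err)
    if the transition is illegal."""
    sender, receiver, amount = tx
    if amount <= 0 or state.get(sender, 0) < amount:
        return None  # err: invalid amount or overspend
    post = dict(state)
    post[sender] -= amount
    post[receiver] = post.get(receiver, 0) + amount
    return post

def valid(txs, state):
    """A block's transactions are valid iff every transition succeeds when
    applied one at a time: an O(n) check. Returns (is_valid, post_state)."""
    for tx in txs:
        state = transition(state, tx)
        if state is None:
            return False, None
    return True, state
```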

In Section 4.2, we explain how both a UTXO-based (e.g., Bitcoin) and an account-based (e.g., Ethereum) blockchain can be represented by this model.

3.2.1 Aim.

Our aim is to prove to clients that for a given block header h_i, valid(T_i, S_{i-1}) returns true, in less than O(n) time and less than O(n) space, relying on as few security assumptions as possible.

3.3 Network Model

We assume a network that consists of two types of nodes:

  • Full nodes. These are nodes which download and verify the entire blockchain. Honest full nodes store and rebroadcast valid blocks that they download to other full nodes, and broadcast block headers associated with valid blocks to light clients. Some of these nodes may participate in consensus (i.e., by producing blocks).

  • Light clients. These are nodes with computational capacity and network bandwidth that is too low to download and verify the entire blockchain. They receive block headers from full nodes, and on request, Merkle proofs that some transaction or state is a part of the block header.

We assume a network topology as shown in Figure 1; full nodes communicate with each other, and light clients communicate with full nodes, but light clients do not communicate with each other. Additionally, we assume a maximum network delay δ, such that if one honest node can connect to the network and download some data (e.g., a block) at time t, then it is guaranteed that any other honest node will be able to do the same at some time t' ≤ t + δ.

Figure 1: Network model—full nodes communicate with each other, and light clients communicate only with full nodes.

3.4 Threat Model

We make the following assumptions in our threat model:

  • Blocks and consensus. Block headers may be created by adversarial actors, and thus may be invalid, and there is no honest majority of consensus-participating nodes that we can rely on.

  • Full nodes. Full nodes may be dishonest, e.g., they may not relay information (e.g., fraud proofs), or they may relay invalid blocks. However, we assume that there is at least one honest full node that is connected to the network (i.e., it is online, willing to generate and distribute fraud proofs, and is not under an eclipse attack [18]).

  • Light clients. We assume that each light client is connected to at least one honest full node. For data availability proofs, we assume a minimum number of honest light clients to allow for a block to be reconstructed. The specific number depends on the parameters of the system, and is analysed in Section 5.6.

4 Fraud Proofs

Figure 2: Overview of the architecture of a fraud proof system at a network level.

4.1 Block Structure

In order to support efficient fraud proofs, it is necessary to design a blockchain data structure that supports fraud proof generation by design. Extending the model described in Section 3.2, a block header h_i at height i contains the following elements:

  • prevHash_i: the hash of the previous block header in the chain.

  • dataRoot_i: the root of the Merkle tree of the data (e.g., transactions) included in the block.

  • dataLength_i: the number of leaves represented by dataRoot_i.

  • stateRoot_i: the root of a sparse Merkle tree of the state of the blockchain (to be described in Section 4.2).

  • Additional arbitrary data that may be required by the network (e.g., in proof-of-work, this may include a nonce and the target difficulty threshold).

Additionally, the hash of each block header, blockHash_i = hash(h_i), is also stored by clients and nodes.

Note that typically blockchains have the Merkle root of transactions included in headers. We have abstracted this to a ‘Merkle root of data’ called dataRoot_i, because as we shall see in Section 4.3, as well as including transactions in the block data, we also need to include intermediate state roots.

4.2 State Root and Execution Trace Construction

To instantiate a blockchain based on the state-based model described in Section 3.2, we make use of sparse Merkle trees, and represent the state as a key-value map. We explain how both a UTXO-based and an account-based blockchain can be instantiated atop such a model:

  • UTXO-based. The keys in the map are transaction output identifiers, e.g., hash(t, i), where t is the data of the transaction and i is the index of the output being referred to in t. The value of each key is the state of each transaction output identifier: either unspent (1) or nonexistent or spent (0, the default value).

  • Account-based. This is already a key-value map, where the key is the account or storage variable, and the value is the balance of the account or the value of the variable.

The state would need to keep track of all data that is relevant to block processing, including for example the cumulative transaction fees paid to the creator of the current block after each transaction.

We now define a variation of the transition function defined in Section 3.2, called rootTransition, that performs transitions without requiring the whole state tree, but only the state root and Merkle proofs of the parts of the state tree that the transaction reads or modifies (which we call the transaction’s “witness”, or w for short). These Merkle proofs are effectively expressed as a sub-tree of the same state tree with a common root.

rootTransition(stateRoot, t, w) → stateRoot' or err

A witness w consists of a set of key-value pairs and their associated sparse Merkle proofs in the state tree, w = ((k_1, v_1, {(k_1, v_1) → stateRoot}), (k_2, v_2, {(k_2, v_2) → stateRoot}), ...).

After executing t on the parts of the state shown by w, if t modifies any of the state, then the new resulting stateRoot' can be generated by computing the root of the new sub-tree with the modified leaves. Note that if w is invalid and does not contain all of the parts of the state required by t during execution, then err is returned.

Let us denote, for the list of transactions T_i = (t_i^0, t_i^1, ..., t_i^{n-1}), where t_i^x denotes the transaction at index x in block i, the witness for transaction t_i^x as w_i^x.

Thus given the intermediate state roots after applying every transaction one at a time, rootTransition(interRoot_i^x, t_i^{x+1}, w_i^{x+1}) = interRoot_i^{x+1}, and the base case rootTransition(stateRoot_{i-1}, t_i^0, w_i^0) = interRoot_i^0, then interRoot_i^{n-1} = stateRoot_i. Hence, interRoot_i^x denotes the intermediate state root at block i after applying transactions t_i^0, t_i^1, ..., t_i^x.

4.3 Data Root and Periods

Figure 3: Example of a 256-byte share.

The data represented by the dataRoot_i of a block contains transactions arranged into fixed-size chunks of data called ‘shares’, interspersed with intermediate state roots called ‘traces’ between transactions. We denote trace_i^x as the x-th intermediate state root in block i. It is necessary to arrange data into fixed-size shares to allow for data availability proofs, as we shall see in Section 5. Each leaf in the data tree represents a share.

As a share may not contain entire transactions but only parts of transactions, as shown in Figure 3, we reserve the first byte in each share to be the starting position of the first transaction that starts in the share, or 0 if no transaction starts in the share. This allows a protocol message parser to establish the message boundaries without needing every transaction in the block.
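A hypothetical serialisation illustrating this reservation follows (Python; the format choices are our own: a 2-byte length prefix per message, and the sentinel 0xFF for "no message starts in this share" to avoid ambiguity with offset 0):

```python
SHARE_SIZE = 256  # bytes per share, including the 1-byte reserved prefix

def to_shares(messages):
    """Pack length-prefixed messages into fixed-size shares. The first byte
    of each share holds the offset of the first message that *starts* in
    that share (0xFF if none), so a parser can resync from any share."""
    payload = SHARE_SIZE - 1
    stream, starts = b"", []
    for m in messages:
        starts.append(len(stream))
        stream += len(m).to_bytes(2, "big") + m   # 2-byte length prefix
    shares = []
    for base in range(0, len(stream), payload):
        chunk = stream[base:base + payload]
        # offset of the first message starting inside this share, if any
        first = next((s - base for s in starts if base <= s < base + payload), 0xFF)
        shares.append(bytes([first]) + chunk.ljust(payload, b"\x00"))
    return shares
```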

Given a list of shares (sh_0, sh_1, ..., sh_m), we define a function parseShares which parses these shares and outputs an ordered list of messages (m_0, m_1, ..., m_l), which are either transactions or intermediate state roots. For example, parseShares applied to some shares in the middle of some block i may return (trace_i^1, t_i^4, t_i^5, t_i^6, trace_i^2).

Note that as the block data does not necessarily contain an intermediate state root after every transaction, we assume a ‘period criterion’: a protocol rule that defines how often an intermediate state root should be included in the block’s data. For example, the rule could be at least once every p transactions, or every b bytes or g gas (i.e., in Ethereum [41]).

We thus define a function parsePeriod which parses a list of messages, and returns a pre-state intermediate root traceA, a post-state intermediate root traceB, and a list of transactions t, such that applying these transactions on traceA is expected to return traceB. If the list of messages violates the period criterion, then the function may return err, for example if there are too many transactions in the messages to constitute a period.

Note that traceA may be nil if no pre-state root was parsed, as may be the case if the first messages in the block are being parsed; the pre-state root is then the state root of the previous block, stateRoot_{i-1}. Likewise, traceB may be nil if no post-state root was parsed, i.e., if the last messages in the block are being parsed, as the post-state root is then stateRoot_i.

4.4 Proof of Invalid State Transition

A faulty or malicious miner may provide an incorrect stateRoot_i. We can use the execution trace provided in the block data to prove that some part of the execution trace was invalid.

We define a function VerifyFraudProof which verifies fraud proofs received from full nodes. If the fraud proof is valid, then the block that the fraud proof is for is permanently rejected by the client. In summary, the fraud proof verifier checks whether applying the transactions in a period of the block’s data on the intermediate pre-state root results in the intermediate post-state root specified in the block data. If it does not, then the fraud proof is valid.

We denote d_i^x as share number x in block i. A fraud proof consists of the hash of the header of the challenged block, the shares d_i^y, ..., d_i^{y+m} of the period containing the alleged invalid state transition, Merkle proofs for those shares, and the state witnesses for the transactions contained in those shares (tx witnesses).

VerifyFraudProof returns true if all of the following conditions are met, otherwise false is returned:

  1. The header hash corresponds to a block header h_i that the client has downloaded and stored.

  2. For each share d_i^x in the proof, VerifyMerkleProof(d_i^x, {d_i^x → dataRoot_i}, dataRoot_i, dataLength_i, x) returns true.

  3. Given (traceA, traceB, t) = parsePeriod(parseShares((d_i^y, ..., d_i^{y+m}))), the result must not be err. If traceA is nil, then the proof’s shares must be the first in the block, and if traceB is nil, then the proof’s shares must be the last in the block.

  4. Check that applying t on traceA does not result in traceB. Formally, let the intermediate state roots after applying every transaction in the proof one at a time be rootTransition(interRoot^x, t^{x+1}, w^{x+1}) = interRoot^{x+1}. If traceA is not nil, then the base case is interRoot^0 = rootTransition(traceA, t^0, w^0), otherwise interRoot^0 = rootTransition(stateRoot_{i-1}, t^0, w^0). If traceB is not nil, the fraud proof is valid if the final intermediate root does not equal traceB; otherwise, it is valid if the final intermediate root does not equal stateRoot_i. (For simplicity, we assume a model where transaction witnesses are provided for every individual intermediate state root within the trace, but it is also possible to provide witnesses only for the trace’s intermediate pre-state root, and execute the transactions as a single batch.)
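The core check in step 4 can be sketched as a toy. This Python sketch (names `verify_fraud_proof` and `state_root` are ours) replaces the sparse-Merkle state root and rootTransition with a plain hash over a full state dict, so it re-executes transactions directly instead of using witnesses:

```python
import hashlib

def state_root(state):
    """Toy stand-in for a sparse-Merkle state root: hash of the sorted state."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def transition(state, tx):
    sender, receiver, amount = tx
    if state.get(sender, 0) < amount:
        return None  # err: illegal transition
    post = dict(state)
    post[sender] -= amount
    post[receiver] = post.get(receiver, 0) + amount
    return post

def verify_fraud_proof(pre_state, trace_a, txs, trace_b):
    """Re-execute the period's transactions on the pre-state (whose root must
    equal trace_a) and report fraud iff the resulting root differs from the
    claimed post-state root trace_b."""
    if state_root(pre_state) != trace_a:
        return False                      # proof malformed: wrong pre-state
    state = pre_state
    for tx in txs:
        state = transition(state, tx)
        if state is None:
            return True                   # illegal transition: fraud proven
    return state_root(state) != trace_b   # root mismatch => fraud proven
```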

4.5 Transaction Fees

As discussed in Section 4.2, the state would need to keep track of all data that is relevant to block processing. A block producer may attempt to collect more transaction fees than is afforded to them by the transactions in the block. In order to make this detectable by a fraud proof as part of the model we have described, we can introduce a special key in the state tree called fees, which represents the cumulative fees in the block after applying each transaction, and is reset to 0 after applying the transaction where the block producer collects the fees.

5 Data Availability Proofs

A malicious block producer could prevent full nodes from generating fraud proofs by withholding the data needed to recompute dataRoot_i and only releasing the block header to the network. The block producer could then release the data, which may contain invalid transactions or state transitions, long after the block has been published, retroactively invalidating the block. This would cause a rollback of the transactions in subsequent blocks on the ledger. It is therefore necessary for light clients to have a level of assurance that the data matching dataRoot_i is indeed available to the network.

We propose a data availability scheme based on Reed-Solomon erasure coding, where light clients request random shares of data to get high-probability guarantees that all the data associated with the root of a Merkle tree is available. The scheme assumes there is a sufficient number of honest light clients making the same requests such that the network can recover the data, as light clients upload these shares to full nodes whenever a full node that does not have the complete data requests them. It is fundamental for light clients to have assurance that all the transaction data is available, because it is only necessary to withhold a few bytes to hide an invalid transaction in a block.

We define below soundness and agreement and analyse them in Section 5.7.

Definition 1 (Soundness)

If an honest light client accepts a block as available, then at least one honest full node has the full block data or will have the full block data within some known maximum delay k × δ, where δ is the maximum network delay.

Definition 2 (Agreement)

If an honest light client accepts a block as available, then all other honest light clients will accept that block as available within some known maximum delay k × δ, where δ is the maximum network delay.

5.1 Strawman 1D Reed-Solomon Availability Scheme

To provide some intuition, we first describe a strawman data availability scheme, based on standard Reed-Solomon coding.

A block producer compiles a block of data consisting of k shares, extends the data to 2k shares using Reed-Solomon encoding, and computes a Merkle root (the dataRoot_i) over the extended data, where each leaf corresponds to one share.

When light clients receive a block header with this dataRoot_i, they randomly sample shares from the Merkle tree that dataRoot_i represents, and only accept the block once they have received all of the shares requested. If an adversarial block producer makes more than 50% of the shares unavailable to make the full data unrecoverable (recall from Section 2.3 that Reed-Solomon codes allow recovery of the data from any k of the 2k shares), then the chance that a client fails to sample any unavailable share is 50% after the first draw, 25% after two draws, 12.5% after three draws, and so on, if they draw with replacement. (In the full scheme, they will draw without replacement, and so the probability of failure will be even lower.)
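The probabilities above can be computed directly; the sketch below (names are ours) also shows the without-replacement case used by the full scheme:

```python
from math import comb

def p_caught_with_replacement(s, withheld=0.5):
    """P(at least one of s samples hits an unavailable share), drawing
    with replacement, when a fraction `withheld` of shares is withheld."""
    return 1 - (1 - withheld) ** s

def p_caught_without_replacement(s, total, unavailable):
    """Same probability drawing s distinct shares from `total` shares, of
    which `unavailable` are withheld (a hypergeometric tail)."""
    return 1 - comb(total - unavailable, s) / comb(total, s)
```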

Note that for this scheme to work, there must be enough light clients in the network sampling enough shares so that block producers will be required to release more than 50% of the shares in order to pass the sampling challenge of all light clients, and so that the full block can be recovered. An in-depth probability and security analysis is provided in Section 5.6.

The problem with this scheme is that an adversarial block producer may incorrectly construct the extended data, in which case the block may be unrecoverable from the extended data even if more than 50% of the shares are available. With standard Reed-Solomon encoding, the fraud proof that the extended data is invalid is the original data itself, as clients would have to re-encode all O(n) data locally to verify the mismatch with the given extended data, and thus the proof is linear in the size of the block. Therefore, we instead use multi-dimensional encoding, as described in Section 5.2, so that proofs of incorrectly generated codes are limited to a specific axis, rather than the entire data, reducing proof size to approximately O(n^{1/d}), where d is the number of dimensions of the encoding. For simplicity, we will only consider two-dimensional Reed-Solomon encoding in this paper, but our scheme can be generalised to higher dimensions.

We note in Section 7.1 that succinct proofs of computation could be an alternative future solution to this problem instead of multi-dimensional encoding.

5.2 2D Reed-Solomon Encoded Merkle Tree Construction

Figure 4: Diagram showing a 2D Reed-Solomon encoding. The original data is initially arranged in a k × k matrix, which is then ‘extended’ to a 2k × 2k matrix by applying Reed-Solomon encoding multiple times.

A 2D Reed-Solomon Encoded Merkle tree can be constructed as follows from a block of data:

  1. Split the raw data into shares of size shareSize each, and arrange them into a k × k matrix; apply padding if the last share is not exactly of size shareSize, or if there are not enough shares to complete the matrix.

  2. Apply Reed-Solomon encoding on each row and column of the k × k matrix to extend the data horizontally and vertically; i.e., encode each row and each column. Then apply Reed-Solomon encoding a third time, horizontally, on the vertically extended portion of the matrix, to create a 2k × 2k matrix, as shown in Figure 4. This results in an extended matrix M_i for block i.

  3. Compute the root of the Merkle tree for each row and column in the 2k × 2k matrix, where each leaf is a share. We have rowRoot_i^j = root((M_i^{j,1}, M_i^{j,2}, ..., M_i^{j,2k})) and columnRoot_i^j = root((M_i^{1,j}, M_i^{2,j}, ..., M_i^{2k,j})), where M_i^{x,y} represents the share in row x, column y of the matrix.

  4. Compute the root of the Merkle tree of the roots computed in step 3 and use this as dataRoot_i. We have dataRoot_i = root((rowRoot_i^1, ..., rowRoot_i^{2k}, columnRoot_i^1, ..., columnRoot_i^{2k})).

The resulting tree of dataRoot_i has 2 × (2k)^2 leaves in total, where the first (2k)^2 leaves are the shares as committed to via the row roots, and the latter (2k)^2 leaves are the same shares as committed to via the column roots.
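Steps 1 and 2 above can be sketched in Python for toy single-field-element ‘shares’ (names are ours; real shares are byte strings, and the encoding would use an efficient RS library rather than Lagrange interpolation):

```python
P = 2**31 - 1  # prime field; each toy "share" is one field element

def extend(row):
    """Reed-Solomon-extend a length-k list to length 2k by evaluating the
    interpolating polynomial at positions k..2k-1 (Lagrange form, mod P)."""
    k = len(row)
    points = list(enumerate(row))

    def eval_at(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    return row + [eval_at(x) for x in range(k, 2 * k)]

def extend_2d(matrix):
    """Take a k x k matrix of shares and produce the 2k x 2k extended
    matrix: extend every row, then every resulting column."""
    rows = [extend(row) for row in matrix]                       # k x 2k
    cols = [extend([r[j] for r in rows]) for j in range(len(rows[0]))]
    return [[cols[j][i] for j in range(len(cols))] for i in range(2 * len(matrix))]
```

Because Reed-Solomon extension is linear, the rows of the vertically extended portion are themselves valid codewords, which is what makes the ‘third application’ in step 2 consistent with extending the columns directly as done here.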

Note that although it is possible to present a Merkle proof from dataRoot_i to an individual share, it is important to note that the tree directly committed by dataRoot_i has 4k leaves (the row and column roots), and the Merkle sub-trees for the row and column roots are constructed independently from dataRoot_i. Therefore it is necessary to have a wrapper function around VerifyMerkleProof, called VerifyShareMerkleProof, with the same parameters, which takes into account how the underlying Merkle tree deals with an unbalanced number of leaves; this may involve calling VerifyMerkleProof twice for different portions of the path, or offsetting the index. (For example, if the underlying tree simply repeats the last leaves to pad the tree out, then the wrapper function may offset the index accordingly.)

The width of the matrix can be derived as matrixWidth_i = sqrt(dataLength_i / 2). If we are only interested in the row and column roots of dataRoot_i, rather than the actual shares, then we can assume that dataRoot_i is the root of a tree with 4k leaves when verifying a Merkle proof of a row or column root.

A light client or full node is able to reconstruct dataRoot_i from all the row and column roots by recomputing step 4. In order to gain data availability assurances, all light clients should at minimum download all the row and column roots needed to reconstruct dataRoot_i and check that step 4 was computed correctly, because as we shall see in Section 5.5, these roots are necessary to generate fraud proofs of incorrectly generated extended data.

We nevertheless represent all of the row and column roots as a single dataRoot_i to allow for ‘super-light’ clients which do not download the row and column roots; however, these clients cannot be assured of data availability and thus do not fully benefit from the increased security of allowing fraud proofs.

5.3 Random Sampling and Network Block Recovery

In order for any share in the 2D Reed-Solomon matrix to be unrecoverable, at least (k + 1)^2 of the (2k)^2 shares must be unavailable (see Theorem 5.1). Thus when light clients receive a new block header from the network, they should randomly sample s distinct shares from the extended matrix, and only accept the block if they receive all sampled shares. Additionally, light clients gossip shares that they have received to the network, so that the full block can be recovered by honest full nodes.

The protocol between a light client and the full nodes that it is connected to works as follows:

  1. The light client receives a new block header from one of the full nodes it is connected to, and a set of row and column roots . If the check is false, then the light client rejects the header.

  2. The light client randomly chooses a set of $s$ unique coordinates $(x, y)$, where $0 \le x < 2k$ and $0 \le y < 2k$, corresponding to points on the extended matrix, and sends them to one or more of the full nodes it is connected to.

  3. If a full node has all of the shares corresponding to the requested coordinates, as well as their associated Merkle proofs, then for each coordinate $(x, y)$ the full node responds with the share $E_{x,y}$ and a Merkle proof for it. Note that there are two possible Merkle proofs for each share, one from the row roots and one from the column roots; thus the full node must also specify, for each Merkle proof, whether it is associated with a row or a column root.

  4. For each share $E_{x,y}$ that the light client has received, the light client checks that the associated Merkle proof verifies against rowRoot$_x$ if the proof is from a row root, or against columnRoot$_y$ if the proof is from a column root.

  5. Each share and valid Merkle proof that is received by the light client is gossiped to all of the full nodes that the light client is connected to, if those full nodes do not already have them; those full nodes in turn gossip it to all of the full nodes that they are connected to.

  6. If all of the proofs in step 4 succeeded, and no shares are missing from the sample made in step 2, then the block is accepted as available if no fraud proofs for the block’s erasure code (Section 5.5) are received within a set time period.
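The client side of steps 1–6 can be sketched as follows. This is a simplified illustration under our own naming (sample_coordinates, accept_block, and the respond callback are hypothetical), with the header check, gossiping, and the fraud-proof timeout elided.

```python
import random

def sample_coordinates(k, s, rng=random):
    """Step 2: choose s distinct coordinates on the 2k x 2k extended matrix."""
    coords = set()
    while len(coords) < s:
        coords.add((rng.randrange(2 * k), rng.randrange(2 * k)))
    return coords

def accept_block(coords, respond):
    """Steps 3-6 (sketch): accept only if every sampled share arrives with a
    valid Merkle proof. `respond(x, y)` models a full node and returns a pair
    (share, proof_valid), or None if the share is withheld."""
    for (x, y) in coords:
        resp = respond(x, y)
        if resp is None:
            return False           # missing share: do not accept the block
        share, proof_valid = resp
        if not proof_valid:
            return False           # invalid Merkle proof
        # (step 5 would gossip `share` to connected full nodes here)
    return True
```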

5.4 Selective Share Disclosure

If a block producer selectively releases shares as light clients ask for them, releasing up to $4k^2 - (k+1)^2$ shares in total, they can violate the soundness property (Definition 1) of the clients that ask for the first $4k^2 - (k+1)^2$ of the $4k^2$ shares, as those clients will accept the block as available despite it being unrecoverable.

This can be alleviated if one assumes an enhanced network model where a sufficient number of honest light clients make requests such that more than $4k^2 - (k+1)^2$ shares will be sampled, and where each sample request for each share is anonymous (i.e., sample requests cannot be linked to the same client) and the order in which sample requests are received is uniformly random, for example by using a mix net [9]. As the network would not be able to link different per-share sample requests to the same clients, shares cannot be selectively released on a per-client basis.

We thus assume two network connection models that sample requests can be made under, which we will analyse in the security analysis:

  • Standard model. Sample requests are linkable to the clients that made them, and the order that they are received is predictable (e.g., they are received in the order that they were sent).

  • Enhanced model. Different sample requests cannot be linked to the same client, and the order that they are received by the network is uniformly random with respect to other requests.

5.5 Fraud Proofs of Incorrectly Generated Extended Data

If a full node has enough shares to recover a particular row or column, and after doing so detects that recovered data does not match its respective row or column root, then it must distribute a fraud proof consisting of enough shares in that row or column to be able to recover it, and a Merkle proof for each share. In summary, the fraud proof verifier checks that (i) all of the shares given by the prover are in the same row or column and (ii) that the recovered row or column does not match the row or column root in the block.

We define a function VerifyCodecFraudProof that verifies these fraud proofs, with parameters including:

  • the hash of the relevant block header;

  • root_j (the row or column root that the proof is against);

  • axis (the row or column indicator; we write axis = 0 for rows and axis = 1 for columns);

  • the shares, with their positions and Merkle proofs.

These proofs can also be verified by ‘super-light’ clients, as doing so does not assume any knowledge of the row and column roots.

Let recover be a function that takes a list of shares and their positions in the row or column, and the length $2k$ of the extended row or column. The function outputs the full recovered shares, or err if the shares are unrecoverable.

VerifyCodecFraudProof returns true if all of the following conditions are met:

  1. The given block header hash corresponds to a block header that the client has downloaded and stored.

  2. If axis = 0 (row root), then root_j matches the corresponding row root committed to in the block header.

  3. If axis = 1 (column root), then root_j matches the corresponding column root committed to in the block header.

  4. For each given share, VerifyShareMerkleProof returns true, where the expected index of the share in the data tree is computed from its position, assuming it is in the same row or column as root_j. See Appendix 0.B for how this index can be computed.

    Note that full nodes can specify Merkle proofs of shares in rows or columns from either the row or the column roots, e.g., if a row is invalid but the full node only has Merkle proofs for the row’s shares from column roots. This also allows full nodes to generate fraud proofs if there are inconsistencies in the data between rows and columns, e.g., if the same cell in the matrix has a different share in its row and column trees.

  5. The check root(recover(shares, positions, $2k$)) = root_j is false; i.e., the row or column recovered from the given shares does not match the root committed to in the block header.

If VerifyCodecFraudProof returns true for some block, then that block’s header is permanently rejected by the light client.
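A minimal sketch of the recover-and-compare logic behind conditions 4–5, using Reed-Solomon coding over a small prime field via Lagrange interpolation, so that any $k$ of the $2k$ shares of a row determine the rest. For brevity the committed row stands in for its Merkle root, and all names (extend_row, recover, verify_codec_fraud_proof) are illustrative rather than the paper's API.

```python
P = 65537  # a small prime field, for illustration only

def interpolate(points, x, p=P):
    """Evaluate at x the unique polynomial through `points` [(xi, yi)] over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def extend_row(data):
    """Encode a row: k data shares at positions 0..k-1, parity at k..2k-1."""
    k = len(data)
    pts = list(enumerate(data))
    return data + [interpolate(pts, x) for x in range(k, 2 * k)]

def recover(shares, positions, two_k):
    """Recover a full extended row from any k of its 2k shares (None on err)."""
    k = two_k // 2
    if len(set(positions)) < k:
        return None
    pts = list({x: y for x, y in zip(positions, shares)}.items())[:k]
    return [interpolate(pts, x) for x in range(two_k)]

def verify_codec_fraud_proof(committed_row, shares, positions, two_k):
    """Condition 5 (sketch): the proof is valid iff the row recovered from the
    given shares does not match the committed row (a real verifier would
    compare Merkle roots rather than the rows themselves)."""
    recovered = recover(shares, positions, two_k)
    return recovered is not None and recovered != committed_row
```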

5.6 Sampling Security Analysis

We present how the data availability scheme presented in Section 5 can provide light clients with a high level of assurance that block data is available to the network.

5.6.1 Minimum Unavailable Shares for Unrecoverability

Theorem 5.1 states that data is unrecoverable if a malicious block proposer withholds at least $k+1$ shares in each of at least $k+1$ columns or rows, which makes a total of $(k+1)^2$ shares to withhold.

Figure 5: Graphical interpretation of Theorem 5.1. Data is unrecoverable if at least $k+1$ columns (or rows) have each at least $k+1$ unavailable shares.
Theorem 5.1

Given a $2k \times 2k$ matrix $E$ as shown in Figure 4, data is unrecoverable if at least $k+1$ columns or rows have each at least $k+1$ unavailable shares. In that case, the minimum number of shares that must be unavailable is $(k+1)^2$.


Suppose a malicious block producer wants to make a share $E_{i,j}$ of the matrix $E$ unrecoverable. Recall that Reed-Solomon encoding allows all $2k$ shares of a row or column to be recovered from any $k$ of them; the block producer will thus have to (i) make unrecoverable at least $k+1$ shares from row $i$, and (ii) make unrecoverable at least $k+1$ shares from column $j$.

Let us start from (i); the block producer withholds at least $k+1$ shares from row $i$. However, each of these withheld shares can be recovered from the available shares of its respective column. Therefore, the block producer will also have to withhold at least $k$ additional shares from each of these $k+1$ columns. This gives a total of $(k+1) + k(k+1) = (k+1)^2$ shares to withhold. Note that at this point, there are not enough shares left in the matrix to recover any of the shares of these $k+1$ columns.

Let us now consider (ii); the block producer withholds at least $k+1$ shares from column $j$ to make the share $E_{i,j}$ unrecoverable. As before, each of these shares can be recovered from the available shares of its respective row, so the block producer will also have to withhold at least $k$ additional shares from each of these $k+1$ rows. As before, this also gives a total of $(k+1)^2$ shares to withhold.

However, (i) is equivalent to (ii) by the symmetry of the matrix, and both are actually operating on the same shares; executing (i) on the matrix $E$ is equivalent to executing (ii) on its transpose $E^T$.
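Theorem 5.1 can be sanity-checked with a small simulation of iterative decoding: a row or column with at least $k$ of its $2k$ shares available can be completed, and recovery is repeated until a fixed point. The helper names and the specific withholding pattern below are ours.

```python
def recoverable(avail, k):
    """Iteratively decode a 2k x 2k availability matrix: any row or column
    with at least k available shares can be completed. Returns True iff the
    whole matrix can eventually be recovered."""
    n = 2 * k
    avail = [row[:] for row in avail]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if k <= sum(avail[i]) < n:          # row i is decodable
                avail[i] = [True] * n
                changed = True
            col = [avail[r][i] for r in range(n)]
            if k <= sum(col) < n:               # column i is decodable
                for r in range(n):
                    avail[r][i] = True
                changed = True
    return all(all(row) for row in avail)

def withhold_square(k, m):
    """Withhold the first m shares of the (k+1) x (k+1) top-left square,
    the minimal withholding pattern of Theorem 5.1."""
    n = 2 * k
    avail = [[True] * n for _ in range(n)]
    count = 0
    for i in range(k + 1):
        for j in range(k + 1):
            if count < m:
                avail[i][j] = False
                count += 1
    return avail
```

Withholding the full $(k+1)^2$ square leaves every affected row and column with only $k-1$ available shares, so decoding never starts; withholding even one share fewer lets a single row decode, which cascades until the whole matrix is recovered.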

5.6.2 Unrecoverable Block Detection

Theorem 5.2 states the probability that a single light client will sample at least one unavailable share in a matrix with the minimum unavailable shares for unrecoverability, thus detecting that a block may be unrecoverable.

Theorem 5.2

Given a $2k \times 2k$ matrix $E$ as shown in Figure 4 where $(k+1)^2$ shares are unavailable, if one player randomly samples $0 < s < (k+1)^2$ shares from $E$, the probability of sampling at least one unavailable share is:

$$p_1(X \ge 1) = 1 - \prod_{i=0}^{s-1} \left(1 - \frac{(k+1)^2}{4k^2 - i}\right) \qquad (1)$$
We start by assuming that the matrix $E$ contains $\gamma$ unavailable shares. If the player performs $s$ trials ($0 < s < \gamma$), the probability of finding exactly zero unavailable shares is:

$$p(X = 0) = \frac{\binom{4k^2 - \gamma}{s}}{\binom{4k^2}{s}} \qquad (2)$$
The numerator of Equation 2 computes the number of ways to pick $s$ shares among those that are not unavailable (i.e., $4k^2 - \gamma$ of them). The denominator computes the total number of ways to pick any $s$ samples out of the total number of shares (i.e., $4k^2$).

Then, the probability of finding at least one unavailable share can be easily computed from Equation 2:

$$p(X \ge 1) = 1 - p(X = 0) = 1 - \frac{\binom{4k^2 - \gamma}{s}}{\binom{4k^2}{s}} = 1 - \prod_{i=0}^{s-1} \left(1 - \frac{\gamma}{4k^2 - i}\right) \qquad (3)$$
which can be re-written as Equation 1 by setting $\gamma = (k+1)^2$.
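Equations 1–3 are easy to evaluate numerically; the following sketch (function names ours) computes $p_1(X \ge 1)$ both as the product of Equation 1 and via the binomial coefficients of Equation 3, which agree to floating-point precision.

```python
from math import comb

def p1_at_least_one(k, s):
    """Equation 1: probability that s distinct samples from the 4k^2 shares
    hit at least one of the (k+1)^2 unavailable ones."""
    n, gamma = 4 * k * k, (k + 1) ** 2
    p0 = 1.0
    for i in range(s):
        p0 *= 1 - gamma / (n - i)
    return 1 - p0

def p1_hypergeometric(k, s):
    """Equation 3: the same probability via binomial coefficients."""
    n, gamma = 4 * k * k, (k + 1) ** 2
    return 1 - comb(n - gamma, s) / comb(n, s)
```

For $k = 32$, this gives about 0.60 after $s = 3$ samples and above 0.99 after $s = 15$ samples, matching the discussion of Figure 6.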

Figure 6: Plot of Equation 1: variation of the probability $p_1(X \ge 1)$ with the number of sampled shares $s$ (computed for $k = 32$ and $k = 256$).
Figure 7: Variation of the share size with the size of the matrix ($k$).

Figure 6 shows how this probability varies with the number of samples for $k = 32$ and $k = 256$; each light client samples at least one unavailable share with about 60% probability after 3 samplings (i.e., after querying respectively about 0.07% of the block shares for $k = 32$ and 0.001% for $k = 256$), and with more than 99% probability after 15 samplings (i.e., after querying respectively about 0.4% of the block shares for $k = 32$ and 0.006% for $k = 256$). Figure 7 shows that light clients would have to download about 3.6 KB of shares to be able to detect incomplete blocks with more than 99% probability for $k = 32$, and about 57 bytes of shares for $k = 256$.

Equation 4 shows a noticeable result: the probability $p_1(X \ge 1)$ is almost independent of $k$ for large values of $k$, since each factor of Equation 1 approaches $1 - \frac{(k+1)^2}{4k^2} \to \frac{3}{4}$:

$$\lim_{k \to \infty} p_1(X \ge 1) = 1 - \left(\frac{3}{4}\right)^{s} \qquad (4)$$

It is therefore convenient to have a large matrix size (i.e., a large $k$), as this reduces the amount of data that light clients have to download.
Under the enhanced model described in Section 5.4, a malicious block producer could statistically link light clients based on the shares they query; i.e., assuming that a light client would never request the same share twice, a block producer can deduce that any two requests for the same share come from different clients. To mitigate this problem, light clients could sample without replacement by performing the procedure for sampling with replacement multiple times, stopping only once they have sampled $s$ unique values.
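This mitigation amounts to a loop like the following (a sketch, with hypothetical names): each individual request is drawn with replacement, so observing a request reveals nothing about which shares the client has already asked for, but the client keeps drawing until it holds $s$ unique samples.

```python
import random

def sample_unlinkable(k, s, rng=random):
    """Draw coordinates with replacement, one anonymous request each, until
    s unique coordinates of the 2k x 2k matrix have been collected."""
    seen = set()
    requests = []               # what the network observes, one entry per request
    while len(seen) < s:
        c = (rng.randrange(2 * k), rng.randrange(2 * k))
        requests.append(c)      # duplicates are possible and are still sent
        seen.add(c)
    return seen, requests
```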

5.6.3 Multi-Client Unrecoverable Block Detection

Theorem 5.3 captures the probability that more than $\hat{c}$ out of $c$ light clients sample at least one unavailable share in a matrix with the minimum unavailable shares for unrecoverability.

Theorem 5.3

Given a $2k \times 2k$ matrix $E$ as shown in Figure 4 where $(k+1)^2$ shares are unavailable, if $c$ players each randomly sample $0 < s < (k+1)^2$ shares from $E$, the probability that more than $\hat{c}$ players sample at least one unavailable share is:

$$p(Y > \hat{c}) = 1 - \sum_{j=0}^{\hat{c}} \binom{c}{j} \, p_1(X \ge 1)^{j} \, \big(1 - p_1(X \ge 1)\big)^{c-j} \qquad (5)$$
where $p_1(X \ge 1)$ is given by Equation 1.
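This probability can be computed directly from Equation 1 and the binomial sum above; the following sketch (function name ours) does so.

```python
from math import comb

def p_more_than(k, s, c, c_hat):
    """Theorem 5.3 (sketch): probability that more than c_hat of c sampling
    clients each hit at least one of the (k+1)^2 unavailable shares."""
    n, gamma = 4 * k * k, (k + 1) ** 2
    p1 = 1.0
    for i in range(s):
        p1 *= 1 - gamma / (n - i)
    p1 = 1 - p1                       # Equation 1
    return 1 - sum(comb(c, j) * p1**j * (1 - p1)**(c - j)
                   for j in range(c_hat + 1))
```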


We start by computing the probability that exactly $\hat{c}$ players sample at least one unavailable share; this probability is given by the binomial probability mass function: