Publicly Auditable MPC-as-a-Service with succinct verification and universal setup

In recent years, multiparty computation as a service (MPCaaS) has gained popularity as a way to build distributed privacy-preserving systems. We argue that for many such applications, we should also require that the MPC protocol is publicly auditable, meaning that anyone can check that the given computation was carried out correctly, even if the server nodes carrying out the computation are all corrupt. In a nutshell, the way to make an MPC protocol auditable is to combine an underlying MPC protocol with a verifiable computation proof (in particular, a SNARK). Building a general-purpose MPCaaS from existing constructions would require us to perform a costly "trusted setup" every time we wish to run a new or modified application. To address this, we provide the first efficient construction for auditable MPC that has a one-time universal setup. Despite improving the trusted setup, we match the state of the art in asymptotic performance: the server nodes incur a linear computation overhead and constant-round communication overhead compared to the underlying MPC, and the audit size and verification time are logarithmic in the application circuit size. We also provide an implementation and benchmarks that support our asymptotic analysis in example applications. Furthermore, compared with existing auditable MPC protocols, besides offering a universal setup, our construction also has a 3x smaller proof, 3x faster verification time, and comparable prover time.




1 Introduction

The past few years have seen increasing interest in the secure multiparty computation as a service (MPCaaS) model. MPCaaS is a distributed system in which a quorum of servers provides a confidential computing service to clients. All of its security guarantees (including confidentiality, integrity, and optionally availability) rely on the assumption that at least some of the servers are honest (either a majority of the servers or even just one, depending on the protocol). This model is flexible and well suited to a range of applications, including auctions and digital asset trading [1, 2], anonymous messaging systems [3, 4], computing statistics on confidential demographic data [5, 6], and trusted parameter generation in other cryptographic applications [7].

An important research focus in making MPCaaS practical has been to reduce the necessary trust assumptions to a minimum. Malicious-case security for confidentiality and integrity guarantees has now become a standard feature of most implementations [8, 9, 10, 11], and protocols like HoneybadgerMPC [4] and Blinder [12] furthermore guarantee availability in this setting as well.

The need for public auditability. The present work aims to reduce the trust assumptions for practical MPCaaS even further. Our focus is publicly auditable MPC, which can best be understood as a form of graceful degradation for MPC security properties, as summarized in Table I. In the ordinary MPC setting, both confidentiality (Conf) and integrity (Int) hold only when the number of corrupted parties is less than a threshold t; in robust MPC [4][12], availability (Avail) holds under these conditions too. Let f refer to the number of parties actually corrupted, so that f > t means the ordinary assumptions fail to hold. Auditable MPC enables anyone to verify the correctness of the output, ensuring that the integrity guarantee holds even when f > t. Note that this notation describes equally well both the honest majority setting t < n/2 (like Viff [13], HoneyBadgerMPC [4], HyperMPC [11], or any Shamir-sharing-based MPC) and the dishonest majority setting t = n − 1 (like SPDZ [14] and related protocols). The complementary relation between auditability and other MPC qualities is summarized in Table I. To give more context, the integrity guarantee is that the computation, if it completes, is performed correctly, i.e., the correct function is applied to the specified inputs. For blockchain applications which use SNARKs, it is important to ensure that the setup ceremony [7] for parameter sampling is carried out correctly even if all participants are compromised. As another example, in a digital asset auction, we would want to know that the quantity of digital assets is conserved. Note that in these applications, integrity may matter even to users who did not themselves provide input (randomness or bids) to the service. Assuming a robust offline phase [12] (Section 2.2), we show a construction of robust auditable MPC.

Protocol                   Guarantees when f ≤ t    Guarantees when f > t
non-robust MPC             Conf, Int                (none)
non-robust auditable MPC   Conf, Int                Int
robust MPC                 Conf, Int, Avail         (none)
robust auditable MPC       Conf, Int, Avail         Int
TABLE I: Graceful degradation of MPC protocols depending on the number of actual faults f, versus the fault tolerance parameter t.

Auditable MPC with one-time trusted setup. In a nutshell, auditable MPC is built from an underlying non-auditable MPC, composed with commitments and zero-knowledge proofs [15, 16]. The resulting arrangement is illustrated in Figure 1. In addition to providing input to the servers, clients also publish commitments to their inputs on a public bulletin board that can be realized by a blockchain. The servers, in addition to computing the MPC on the secret shared data, also produce a proof that the resulting output is computed correctly. Any auditor can verify the proof against the input commitments to check that the output is correct.

The initial version of auditable MPC by Baum et al. [16] requires examining the entire protocol transcript to audit the computation. Later, Veeningen showed how to construct an efficient auditable MPC from an adaptive commit-and-prove zk-SNARK [17] (CP-SNARK) based on Pinocchio [18], which is more efficient than [16]. Adaptive roughly means that the scheme remains secure even when the relations are chosen after the inputs (statements) are committed to. However, like many SNARKs, Pinocchio relies on a trusted setup that is difficult to carry out [19, 20, 21]. Since Pinocchio's trusted setup depends on the particular circuit, there is no way to update the program once the setup is complete. Any bug fix or feature enhancement to the MPC program would require performing the trusted setup ceremony again.

The goal of our work is to remove this barrier to auditable MPC, by enabling a single trusted ceremony to last for the lifetime of a system, even if the programs are dynamically updated. Our approach makes use of recent advances in zk-SNARKs, especially the Marlin zk-SNARK [22], and adapts it to the auditable MPC setting.

Technical challenges and how we overcome them. First, to summarize our approach, we follow Veeningen and build auditable MPC from an adaptive CP-SNARK. To achieve this, we follow the generic framework of LegoSNARK [23] and compose several existing CP-SNARK gadgets, namely ones for the sumcheck [24], for linear relations, and for openings of polynomial commitments, resulting in a new construction we call Adaptive Marlin.

From Adaptive Marlin, building an auditable MPC requires two more steps. First, the prover algorithm must be replaced with a distributed MPC alternative, leaving the verifier routine essentially the same. Fortunately, this turns out to be straightforward: Adaptive Marlin supports distributed computation in a natural way, and the soundness proof remains intact for the verifier. Second, we must provide a way to combine the input commitments contributed by different clients. This poses a greater challenge; in particular, Veeningen's approach to combining input commitments does not work in Marlin, since it relies on the circuit-dependent structure of the Pinocchio CRS.

Our solution is based on a new primitive, polynomial evaluation commitments (PEC): polynomial commitments that can be assembled in a distributed fashion. Each party contributes one evaluation point, and the parties jointly compute a commitment to the resulting interpolated polynomial. This primitive serves as a bridge between LegoSNARK and Veeningen's auditable MPC.

Fig. 1: Auditable MPC-as-a-Service with one-time trusted setup. Clients with inputs post commitments and the query to the bulletin board. The servers post output commitments and a SNARK proof to the bulletin board. The auditor collects the data from the bulletin board and verifies the correct execution of the program.

To summarize our contributions:

- Adaptive zk-SNARK with universal reference string and constant verification time. Adaptive Marlin is the first adaptive zk-SNARK for general arithmetic circuits that has constant-time verification, constant-size proofs, and a universal reference string. This is an asymptotic improvement over LegoUAC, the only previously known adaptive zk-SNARK with a universal reference string [23], whose proof size and verification time grow with the size N of the circuit.

- Auditable Reactive MPC with one-time trusted setup. Informally, reactive MPC is a type of MPC where the computations to perform may be determined dynamically, even after inputs are provided. By constructing Auditable MPC based on Adaptive Marlin, we avoid the need to run a new trusted setup each time a new program is defined, removing an important obstacle to deployment. We provide our formal security analysis using the same ideal functionality setting as Veeningen, except that we go further in considering the full universal composability environment.

- Implementation. We implement and evaluate our auditable MPC construction. In our experiments with 32 MPC servers, over 1 million constraints, and a statement size of 8, our prover time is about 678 seconds, the auditing time is less than 40 ms, the proof size is on the order of kilobytes, and the total MPC communication overhead is constant, with five additional rounds of communication. As an additional contribution, we also implement and evaluate sample application workloads, including an auction and a statistical test (logrank). As a representative figure, in the auction application with 125 bidders (an R1CS with 9216 constraints) and 32 MPC servers, the auditor time is about 50 ms, while the time to compute the proof is about 20 seconds, or a total of 4 minutes when including the underlying MPC computations; overall, auditable MPC incurs an overhead of 10% compared to plain (non-auditable) MPC.

In terms of performance, despite not relying on a circuit-specific setup, our prover time is comparable to Veeningen's. In some settings, our auditor time and proof size show asymptotic and concrete improvements. For applications where each client contributes only a small input, our auditor has a constant pairing cost and constant proof size, which is asymptotically better than Veeningen's construction, whose pairing cost and proof size are linear in the number of clients (input commitments).

2 Preliminaries

2.1 Notation

Let (G1, G2, GT) be groups of a prime order q, where g generates G1, h generates G2, and e : G1 × G2 → GT is a (non-degenerate) bilinear map, all parameterized by a security parameter λ. Bold letters like a denote a vector of elements (a1, …, an). |S| denotes the cardinality of a set S, while ||M|| denotes the number of non-zero elements when M is a matrix. a ∘ b denotes the element-wise product of the vectors a and b. F_q denotes the finite field of prime order q (usually we leave q implicit and simply write F), and F^{≤d}[X] denotes the set of polynomials over F of degree at most d. If f is a function from H to F, where H ⊆ F, then f̂ denotes the low-degree extension of f (the smallest-degree polynomial that agrees with f over all of H).

2.2 Secret Sharing and MPC

Secure multi-party computation (MPC) enables a set of n parties to jointly compute a function over secret shared inputs, while keeping those inputs confidential and disclosing only the result of the function.

We present our construction for Shamir secret sharing (for honest majority MPC), although it is also compatible with other linear secret sharing schemes such as SPDZ (for dishonest majority MPC). For a prime q and a secret s, [s] denotes a Shamir secret sharing (SSS) [25] of s in a (t, n) setting. We omit the superscript and/or subscript when clear from context. For a concrete instantiation in our benchmarks (Section 6), we assume a robust preprocessing MPC using Beaver multiplication [26] and batch reconstruction [27, 28], similar to HoneyBadgerMPC [4].
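As a concrete illustration of (t, n) Shamir sharing, the following minimal Python sketch shares a secret and reconstructs it by Lagrange interpolation at zero, and also shows the linearity that additive MPC operations rely on. The field size, variable names, and use of `random` are our own illustrative choices, not the paper's implementation.

```python
# Toy Shamir secret sharing over a prime field. Illustrative sketch only:
# a real MPC deployment uses secure randomness and authenticated channels.
import random

P = 2**61 - 1  # a Mersenne prime; shares live in the field F_P

def share(secret, t, n, rng=random):
    """Split `secret` into n Shamir shares with threshold t
    (any t+1 shares reconstruct; t or fewer reveal nothing)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    # Party i's share is the random polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Because the sharing is linear, servers can add two shared secrets by adding shares pointwise, which is why only multiplications need Beaver triples.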

2.3 Extractable Commitments

Our construction for auditable MPC relies on a stronger variant of commitment schemes known as extractable trapdoor commitments. For space, we define these in the appendix.

2.4 Polynomial Commitments

Polynomial commitments [29] allow a prover to commit to a polynomial, and later reveal evaluations of the polynomial and prove they are correct without revealing any other information about the polynomial. Following Marlin [22], we define a polynomial commitment scheme (PC) over F by a set of algorithms PC = (Setup, Commit, Open, Check). We only state the definition for creating a hiding commitment to a single polynomial for a single evaluation point with a single maximum degree bound (omitting the lists of degree bounds), which is all that is necessary for our application.

2.4.1 Polynomial Commitment Definitions

  • Setup(1^λ, D): On input a security parameter λ (in unary) and a maximum degree bound D, samples some trapdoor and outputs public parameters (ck, rk) supporting the maximum degree bound D.

  • Commit(ck, φ; r): Given input committer key ck and a univariate polynomial φ over a field F, outputs a commitment c to the polynomial using randomness r.

  • Open(ck, φ, z; r): On input ck, a univariate polynomial φ over a field F, and a query point z, outputs an evaluation v and an evaluation proof π. The randomness r used must be consistent with the one used in Commit.

  • Check(rk, c, z, v, π): On input receiver key rk, commitment c, a query z, claimed evaluation v at z, and evaluation proof π, outputs 1 if π attests that v is the claimed evaluation of the committed polynomial at z.

Note that we will later refer to this definition as "plain" polynomial commitments, in comparison to the polynomial evaluation commitments (Section 4).

2.4.2 Construction in AGM

We next describe the polynomial commitment construction from Marlin, which is a variation of KZG [29] adapted for the Algebraic Group Model (AGM) [30]. In particular it relies on a pairing-based group with the Strong Diffie-Hellman (SDH) assumption, for which a formal definition is given in Appendix C.

  • Setup(1^λ, D): Upon input λ and D, samples random trapdoor elements τ, γ in F_q and outputs ck = (g, g^τ, …, g^{τ^D}, g^γ, g^{γτ}, …, g^{γτ^D}) and rk = (g, g^γ, h, h^τ), where the γ powers support hiding commitments.

  • Commit(ck, φ; r): On input ck, a univariate polynomial φ, and randomness r, operates as follows: if deg(φ) > D, abort. Else, sample a random hiding polynomial φ̄ of deg(φ) according to randomness r. Output c = g^{φ(τ)} · g^{γφ̄(τ)}.

  • Open(ck, φ, z; r): On input ck, a univariate polynomial φ over a field F, and a query point z, outputs an evaluation v and evaluation proof π as follows: compute the quotients w(X) = (φ(X) − φ(z))/(X − z) and w̄(X) = (φ̄(X) − φ̄(z))/(X − z), and set v = φ(z), π = (g^{w(τ)} · g^{γw̄(τ)}, φ̄(z)).

  • Check(rk, c, z, v, π): On input receiver key rk, commitment c, a query z, claimed evaluation v at z, and evaluation proof π = (W, v̄), outputs 1 iff e(c · g^{−v} · (g^γ)^{−v̄}, h) = e(W, h^τ · h^{−z}).
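The Open/Check pair above rests on the polynomial identity φ(X) − φ(z) = w(X)·(X − z): the evaluation proof is (a commitment to) the quotient w. The sketch below, our own illustration over a bare prime field with no groups or pairings, computes w by synthetic division and checks the identity at a random point; the pairing equation in Check verifies exactly this relation "in the exponent."

```python
# The algebraic identity behind KZG-style opening proofs:
# phi(X) - phi(z) is divisible by (X - z), and the quotient w(X)
# is what the prover commits to as the evaluation proof.
P = 2**61 - 1  # prime field modulus

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[k] * x^k) by Horner's rule mod P.
    `coeffs` is ordered lowest degree first."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def open_proof(coeffs, z):
    """Return v = phi(z) and the quotient w(X) = (phi(X) - v)/(X - z),
    computed by synthetic division (Horner's intermediate values)."""
    v = poly_eval(coeffs, z)
    partials, rem = [], 0
    for c in reversed(coeffs):
        rem = (rem * z + c) % P
        partials.append(rem)
    # The last partial equals phi(z); the others are w's coefficients
    # (highest degree first), so drop it and reverse.
    assert partials[-1] == v
    return v, list(reversed(partials[:-1]))

def check(coeffs, z, v, w, r):
    """Probabilistic check of phi(r) - v == w(r) * (r - z) at random r."""
    return (poly_eval(coeffs, r) - v) % P == (poly_eval(w, r) * (r - z)) % P
```

In the real scheme the verifier never sees φ or w; it holds only their commitments, and the single pairing equation plays the role of `check` at the secret point τ.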

2.5 zkSNARKs for R1CS Indexed Relations

A zkSNARK is an efficient proof system where a prover demonstrates knowledge of a satisfying witness for some statement in an NP language. We focus on zkSNARKs for a generic family of computations, based on R1CS relations, a well known generalization of arithmetic circuits. For performance, we are interested in succinct schemes where the proof size and verification time are sublinear (or indeed constant, as with our construction) in the number of gates or constraints.

Following Marlin, we define indexed relations as a set of triples (i, x, w) where i is the index, x is the statement instance, and w is the corresponding witness. The corresponding language L(R) is then defined by the set of pairs (i, x) for which there exists a witness w such that (i, x, w) ∈ R. In the standard circuit satisfaction case, i corresponds to the description of the circuit, x corresponds to the partial assignment of wires (also known as the public input), and w corresponds to the witness.

2.6 Universal Structured Reference Strings

The vast majority of zkSNARK schemes rely on a common reference string (crs), which must be sampled from a given distribution at the outset of the protocol. In a perfect world, we would only need to sample reference strings from the uniform distribution over a field (a urs), in which case the string can be sampled using public randomness [31]. However, most practical SNARKs require sampling the reference string from a structured distribution (an srs), which requires a (possibly distributed) trusted setup process [19, 21, 20].

As a practical compromise, we aim to use a universal structured reference string (u-srs), which allows a single setup to support all circuits of some bounded size. A deterministic or public-coin procedure can then specialize the trusted setup to a given circuit, avoiding the need to perform the trusted setup each time a new circuit is desired. Some u-srs constructions (like the one we use) are also updatable [32], meaning an open and dynamic set of participants can contribute secret randomness to it indefinitely. Throughout this paper, we write srs for u-srs, as universality is clear from context.

A zkSNARK with an srs is a tuple of algorithms (Setup, Index, Prove, Verify). The setup samples the srs, supporting arbitrary circuits up to a fixed size. The indexer is a deterministic polynomial-time algorithm that takes the srs and a circuit index satisfying the srs size bound, and outputs an index proving key ipk and a verification key ivk. The prover uses ipk to produce a proof π for the indexed relation; the verifier then checks π using ivk.

2.6.1 Review of Marlin’s construction

As our construction closely builds on Marlin, we reuse most of its notation and review its construction here. The Marlin construction is centered around an interactive "holographic proof" technique [33], combined with polynomial commitments and Fiat-Shamir to make it non-interactive. In a holographic proof, the verifier does not receive the circuit description as an input but, rather, makes a small number of queries to an encoding of it. The deterministic algorithm responsible for this encoding is referred to as the indexer. Marlin focuses on the setting where the encoding of the circuit description and the proofs consist of low-degree polynomials. Another way to look at this is that it imposes a requirement that honest and malicious provers are "algebraic" (see Appendix C.2).

In brief, the Marlin protocol proceeds in four rounds, where in each round the verifier sends a challenge and the prover responds with one or more polynomials; after the interaction, the verifier probabilistically queries the polynomials output by the indexer as well as those output by the prover, and then accepts or rejects. The verifier does not receive the circuit index i as input, but instead queries the polynomials output by the indexer that encode i. For our construction, we require an MPC version of the Marlin prover, shown in Appendix G.

3 Overview of Our Construction

3.1 Motivating application: Auction

We start by explaining an auction application that we use as a running example throughout. We envision a distributed service that accepts private bids from users and keeps a running tally of the current best price, where both the bids and the price are stored only in secret shared form. Finally, after all users have submitted bids, the servers publish the winning price. This application can be summarized with the following procedures, where the secret sharing notation [x] indicates that x is confidential:

  • Initialize state: [p*] ← [0]

  • ProcessBids(inputs: [b_1], …, [b_k] from clients C_1, …, C_k; state: [p*]):

    • for each [b_i] that has not been processed:

      • [p*] ← max([p*], [b_i])

    • return [p*]

  • Finalize(inputs: none; state: [p*]):

    • return p* (opened)

Note that we write our example to process arbitrary-size batches of user-submitted bids at a time. This is to illustrate the flexibility of our construction, since it supports reactive computations (each computation can provide public output as well as secret shared output carried over to the next operation) as well as large circuits. In our example, the Finalize procedure also discloses the current state. In general, each procedure can be characterized by the following quantities: |x|, the total size of secret inputs; n_c, the number of distinct clients providing input in each invocation; and N, the total number of gates needed to express the procedure as an arithmetic circuit. In our example, each ProcessBids invocation receives constant-size bids from n_c different parties, so |x| = O(n_c), and the circuit comprises a comparison for each bid, so N = O(n_c · N_cmp), where N_cmp is the number of gates for each comparison subcircuit. Since we are building auditable MPC from SNARKs, we are primarily interested in witness succinctness, meaning the verification cost is independent of the circuit size N, although it will in general depend on |x| and n_c (as we explain in Section 5). When the verification cost is also independent of |x|, we call the scheme statement succinct.
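Stripped of secret sharing, the reactive state machine the servers evaluate is a running maximum folded over batches. The plain-Python sketch below is our own illustration of the Initialize / ProcessBids / Finalize flow; in the real protocol each value is secret shared and `max` becomes a comparison subcircuit of N_cmp gates.

```python
# The auction procedures from Section 3.1, written in the clear.
# In the actual system every value lives in secret-shared form; this
# sketch only illustrates the reactive state carried across invocations.

def initialize():
    """[p*] <- [0]: the running best price."""
    return 0

def process_bids(bids, state):
    """One reactive invocation: fold a batch of client bids into the
    running maximum, returning the new (still secret) state."""
    for b in bids:
        state = max(state, b)
    return state

def finalize(state):
    """Open and publish the winning price."""
    return state
```

Each `process_bids` call corresponds to one proven circuit execution whose secret output state feeds the next invocation, which is exactly the reactive pattern the construction must support.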

3.2 System overview of Auditable MPCaaS

Auditable MPC is a distributed system architecture for performing secure computations over inputs provided by clients. The computation is organized into several phases. For simplicity we describe these as occurring one after the other, though in the general (reactive) setting each phase can occur multiple times and may run concurrently with each other.

One-time Setup Phase. The offline phase of our auditable MPC consists of two components. First is the one-time setup for the underlying SNARK and client input commitment scheme; this setup need only be carried out once, regardless of which circuit programs are evaluated. The second is translating a circuit description into an index format. This step is deterministic, and anyone can publicly recompute and check it.

Commit Inputs. Data clients provide inputs to the computation. Each data-input party posts a commitment to its input on the bulletin board for availability.

Define Program. A designated input party provides the computation function f. We model this as a separate party, but in general the function would be chosen through a transparent process, such as a smart contract. The Marlin indexer then generates the prover and verification keys for the indexed circuit; the indexer must be rerun every time a new computation is indexed or an existing computation is updated.

MPC Pre-processing. In order to facilitate fast online multiplication among the MPC servers, it is typically necessary to prepare Beaver triples and random element shares offline [26].

Compute phase. Next, data clients post secret shares of their input values to the MPC servers. The online phase includes interaction between MPC servers to compute the desired user function and to generate a proof of correct execution. Figure 1 shows a high-level overview of the online phase of auditable MPC. The servers carry out MPC protocols to compute the function, producing the public output and a commitment to any secret output, along with a SNARK proof π. The auditor can collect the proofs from the bulletin board and verify that the computation was carried out correctly.

3.2.1 Audit phase

The auditor receives the output and verifies that it is correct: it collects all input commitments, secret output commitments, public outputs, and the proof π, and verifies that the computation was carried out correctly. To completely audit the computation, one would need to verify the MPC pre-processing and the circuit indexing along with the execution proof. We only consider the cost of verifying execution proofs, because the indexing cost can be amortized over multiple uses and because we assume a robust offline pre-processing model.

4 Polynomial Evaluation Commitments

The main building block for our auditable MPC construction is a new variant of polynomial commitments called polynomial evaluation commitments (PEC). In the original polynomial commitment definition [29], the committer must have chosen a polynomial before calling the Commit procedure. To adapt these for use in MPC, our extended PEC definition supports an alternative, distributed way to create the polynomial commitments: each party starts with a commitment to just an evaluation point on the polynomial. Next, the evaluation commitments are combined and interpolated to form the overall polynomial commitment. The procedure for generating evaluation proofs is similarly adapted. Briefly, the changes relative to plain polynomial commitments (Section 2.4) are: 1) Setup outputs additional evaluation committer keys, and 2) the Commit operation is split into EvalCommit and Combine. More formally, our polynomial evaluation commitment scheme over a field F is defined by the set of algorithms PEC = (Setup, EvalCommit, Combine, Open, Check). We index parties by i and evaluations by j, so that e_{i,j} denotes the jth evaluation contributed by the ith party.

  • Setup(1^λ, D, n): On input a security parameter λ, a maximum degree bound D, and a number of parties n, samples some trapdoor and outputs public parameters including per-party evaluation committer keys.

  • EvalCommit(cek_i, (x_{i,j}, e_{i,j}); r): Given input evaluation committer key cek_i and univariate polynomial evaluations e_{i,j} at evaluation points x_{i,j} over a field F, outputs a commitment c_i to the evaluations using randomness r.

  • Combine(rk, c_1, …, c_n): On input the commitments to evaluations, outputs a commitment c to the interpolated polynomial corresponding to the evaluations committed in c_1, …, c_n.

  • Open: Same as the Open discussed in Section 2.4, where the polynomial is the one interpolated from the evaluations. Note that the randomness used for the point at index (i, j) must be the same as the one used in EvalCommit.

  • Check: Same as the Check discussed in Section 2.4.

Additionally, a PEC must satisfy the following properties.

  • Perfect Completeness: Consider an adversary which chooses evaluations e_{i,j} at evaluation points x_{i,j}, randomness r, and a query point z. Let c_i, φ, and V denote the commitments to the evaluations, the polynomial interpolated through the evaluations, and the Vandermonde matrix at the evaluation points, respectively. We say that a PEC is complete if the evaluation proofs created by Open for φ at z are correctly verified by Check with respect to the interpolated commitment generated by Combine. More formally, we say that the PEC is perfectly complete if the following probability is 1 (⇒ denotes logical implication).

  • Extractable: First, consider an adversary that chooses points and their evaluations. Next, consider an adversary which, upon input the setup material and the honest parties' evaluation commitments, chooses its own commitments to evaluations at its own evaluation points. Let c denote the commitment interpolated from all of these by Combine. Finally, consider an adversary which, upon input the state from the previous stage, outputs a claimed evaluation v at a query point z with a proof π. We say that a PEC is extractable if the evaluations of the polynomial can be extracted from any such adversary generating a valid proof. More formally, a PEC is extractable if for every size bound D and every efficient adversary there exists an efficient extractor such that the probability that the proof verifies while the extracted evaluations are inconsistent is negligible.

Note that for brevity in the definition, we use vector notation to represent the tuples of evaluations and commitments.

  • Zero knowledge: We say that a PEC is zero knowledge if the adversary cannot distinguish whether it is interacting with the honest prover or with a simulator holding the trapdoor. More formally, there exists a polynomial-time simulator S such that, for every maximum degree bound D and efficient adversary A, the following distributions are indistinguishable:

Real World:

Ideal World:

4.1 PEC Constructions

We discuss three constructions for PEC schemes. The first, based on Pedersen commitments, is a straightforward approach that involves a commitment to each coefficient of the polynomial. Naturally, this results in commitments and evaluation proofs that are linear in the degree of the polynomial. Even so, when used to instantiate auditable MPC in Section 5, this results in a proof size and verification time that are circuit-succinct (i.e., independent of the circuit size N). We defer the details of this scheme to the Appendix.

Towards constructing an auditable MPC that is additionally statement-succinct, our second approach is to adapt an efficient polynomial commitment scheme such as KZG [29]. However, this turns out to be non-trivial. Our first attempt was to simply transport the KZG Commit routine "into the exponent." Briefly (and ignoring zero-knowledge, to illustrate the problem even in the simple case), this involves committing to each evaluation e_i with a group element g^{e_i}. However, we then have no way to obtain g^{φ(τ)}, the desired KZG polynomial commitment form. We can use the CRS to compute interpolation factors for the Lagrange polynomials L_i, but still we cannot combine these with g^{e_i} to get g^{e_i L_i(τ)} without breaking the Computational Diffie-Hellman assumption in our group (which KZG relies on). In particular, the CRS does not allow us to compute L_i(τ) outside the exponent. To solve this problem, our idea is to have each evaluation commitment take the form g^{e_i L_i(τ)}, i.e., to have each party precompute its Lagrange polynomial when committing to its evaluation points. We also need to ensure that corrupt parties cannot perturb the evaluation points committed by honest parties; we address this by creating separate CRS elements to be used by each party, and proving that each evaluation commitment lies in the span of its assigned CRS elements. Finally, our construction incorporates hiding polynomials to maintain zero-knowledge. Our succinct PEC construction is defined as follows:

  • Setup(1^λ, D, n): Sample random trapdoors and output the KZG-style powers of τ in the exponent, together with per-party CRS elements and the knowledge components used for the span check.

  • EvalCommit(cek_i, (x_{i,j}, e_{i,j}); r): With input cek_i and points x_{i,j} with evaluations e_{i,j}, compute the polynomial p_i of degree at most n − 1 interpolating the party's evaluations at its points (and vanishing at the other parties' points). Compute c_i = g^{p_i(τ)}, a shifted polynomial commitment using the CRS elements assigned to party i. Similarly, compute the knowledge component ĉ_i using the party's knowledge CRS elements. Return (c_i, ĉ_i).

  • Combine(rk, (c_1, ĉ_1), …, (c_n, ĉ_n)): Parse the commitments and check each knowledge component via its pairing equation. If any check fails, abort; otherwise return c = ∏_i c_i.

  • Open: Same as the KZG polynomial commitment Open operation as described in Section 2.4.

  • Check: Same as the KZG polynomial commitment Check operation as described in Section 2.4.

Theorem 4.1

If the bilinear group satisfies the SDH assumption (Appendix C.1), then the above construction is a secure PEC scheme (Definition 4).

In Appendix H.2, we provide a proof for the theorem. In Appendix F we show a third scheme that offers concrete performance improvements based on Lipmaa’s commitments [34].
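The heart of the succinct PEC is that the commitment is linear in the committed value, so Lagrange coefficients can combine per-party evaluation commitments into a commitment to the interpolated polynomial. The toy below is our own illustration: the "commitment" v·G mod Q is additively homomorphic but offers no hiding or binding, and we evaluate the Lagrange factors directly at the query point, whereas the real scheme works in a pairing group where those factors are baked into each party's CRS elements at the secret τ.

```python
# Toy illustration of the PEC Combine step. Party i holds evaluation e_i
# at public point x_i and publishes a commitment to e_i; scaling each
# commitment by the Lagrange coefficient L_i(point) and summing yields a
# commitment to phi(point) for the interpolated polynomial phi.
# Utterly insecure; illustrative only.
Q = 2**61 - 1          # prime field; also the order of the toy group
G = 7                  # toy "generator": commit(v) = v * G mod Q

def commit(v):
    """Additively homomorphic toy commitment (no hiding, no binding)."""
    return v * G % Q

def lagrange_at(xs, i, point):
    """Evaluate the i-th Lagrange basis polynomial for nodes xs at point."""
    num, den = 1, 1
    for j, xj in enumerate(xs):
        if j != i:
            num = num * (point - xj) % Q
            den = den * (xs[i] - xj) % Q
    return num * pow(den, Q - 2, Q) % Q

def combine(xs, eval_commitments, point):
    """Lagrange-combine per-party evaluation commitments into a
    commitment to phi(point), using linearity of the commitment."""
    c = 0
    for i, ci in enumerate(eval_commitments):
        c = (c + ci * lagrange_at(xs, i, point)) % Q
    return c
```

Because the combination touches only commitments, no party learns the others' evaluations, which is what lets MPC servers assemble a joint polynomial commitment from individually committed inputs.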

5 Our Auditable MPC Construction

5.1 Adaptive Preprocessing arguments with universal SRS

We first give a formal security definition for Adaptive Marlin, our main construction. Although our final goal is a non-interactive protocol, we follow Chiesa et al. [22] in giving an interactive definition and removing interaction via Fiat-Shamir at the end. We use angle brackets ⟨P, V⟩ to denote the output of the verifier V when interacting with the prover P.

We extend the indexed relations defined in Section 2.5 to indexed commitment relations. Let C be an extractable trapdoor commitment scheme as defined in Section C.3. Given an indexed relation R and a commitment key ck, informally, an adaptive preprocessing argument (also referred to as an adaptive SNARK) for R is a preprocessing argument for the corresponding relation over commitments. We next give the formal definitions for adaptive preprocessing arguments with universal SRS.

Let C be an extractable commitment scheme. We define an adaptive SNARK (referred to as a preprocessing argument with universal SRS in Marlin) for the extractable trapdoor commitment scheme C and relation R as a tuple of four algorithms (G, I, P, V):


  • : The generator is a PPT algorithm which, given a size bound , outputs an that supports indices of size up to , together with a trapdoor .

  • : The indexer is a deterministic algorithm with oracle access to that takes a circuit index and outputs a proving key and verification key specific to the index .

  • : The prover is a PPT algorithm which, on input a proving key , committer keys , statement , commitment randomness and witness , outputs a proof .

  • : The verifier is a PPT algorithm which, on input an index verification key , receiver key , polynomial evaluation commitments and a proof , outputs either or .
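As a rough guide to how the four algorithms fit together, the following is a hypothetical Python interface; the type names and signatures are illustrative simplifications introduced here, not the paper's formalism.

```python
# Hypothetical interface mirroring the four algorithms of an adaptive
# preprocessing argument. All names and types are illustrative only.
from typing import Any, Protocol, Tuple

class AdaptiveARG(Protocol):
    def generator(self, size_bound: int) -> Tuple[Any, Any]:
        """Output a universal SRS supporting indices up to size_bound, plus a trapdoor."""
        ...

    def indexer(self, srs: Any, index: Any) -> Tuple[Any, Any]:
        """Deterministically derive (proving_key, verification_key) for a circuit index."""
        ...

    def prover(self, ipk: Any, ck: Any, stmt_commitment: Any,
               randomness: Any, witness: Any) -> bytes:
        """Produce a proof for the committed statement and witness."""
        ...

    def verifier(self, ivk: Any, rk: Any, stmt_commitment: Any,
                 proof: bytes) -> bool:
        """Accept or reject a proof against the statement commitment."""
        ...
```

Note that, unlike an ordinary SNARK interface, the verifier receives only a commitment to the statement rather than the statement itself, which is what the "adaptive" definitions below formalize.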

Furthermore, we require adaptive preprocessing arguments to satisfy the following properties:


  • Perfect Completeness: We say that our adaptive preprocessing argument is complete if, for all adversaries choosing a tuple , the honest prover always convinces the honest verifier in the interaction.


  • Extractability: We say that our adaptive preprocessing argument is extractable if for every size bound and efficient adversary = there exists an efficient extractor such that the following probability is .


  • Zero Knowledge: We say that our adaptive preprocessing argument is zero knowledge if an adversary cannot distinguish whether it is interacting with an honest prover or a simulator. More formally, ARG is zero knowledge if for every size bound there exists a simulator such that for every efficient adversary = the probabilities shown below are equal.

Adaptive zk-SNARK based on Marlin
Let = and . Let () be an augmented preprocessing argument constructed from Marlin for the relation . Let be the same as except that it knows the public commitment along with the secret . Similarly, let be the same as except that it only knows instead of .

  • : If , abort. Return from

  • : Let where . Return .

  • : Sample . ; = . Call . Let , be the second verifier challenge in the Marlin execution. . Return .

  • : Parse ; invoke the Marlin verifier routine, replacing with the constant function . Invoke .

    return and .

Fig. 2: Adaptive preprocessing arguments using Marlin

5.2 Construction of Adaptive Preprocessing arguments with Universal SRS

Our construction closely follows Marlin’s except for two main modifications to the underlying Algebraic Holographic Proof (AHP). The full description of Marlin is in Appendix G, so here we only highlight the differences. In the Marlin prover algorithm, the verifier is assumed to have the entire statement and hence it can construct for itself , the polynomial encoding of the statement (querying this polynomial at random challenge points is roughly what makes the scheme “holographic”). In our setting, the verifier does not have , only a commitment to it , so the prover must additionally supply . We must check that the prover-supplied matches the commitment , which can be done using . Additionally, the statement

must be kept zero knowledge. We can achieve this the same way as Marlin keeps the witness zero knowledge, namely by padding the degree of

by a margin of so that learning challenge points of reveals nothing about . As with Marlin, it suffices to set , but we stick to for consistency of notation.
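The degree-padding idea can be sketched concretely: adding a random multiple of the vanishing polynomial of the evaluation domain leaves the statement encoding unchanged on the domain while randomizing its evaluations everywhere else. The toy field, domain, and margin below are assumptions for illustration, not the paper's parameters.

```python
# Toy sketch over Z_r of hiding a statement polynomial by degree padding:
# mask x-hat with z_H * rho, where z_H vanishes on the domain H and rho is
# a random polynomial of degree b-1. The masked polynomial still agrees
# with x-hat on H, while each evaluation outside H is shifted by randomness.
import random

r = 1019
H = [1, 2, 3]                 # toy evaluation domain
x_vals = [4, 9, 6]            # statement values on H

def poly_eval(coeffs, point):
    acc = 0
    for c in reversed(coeffs):            # Horner's rule
        acc = (acc * point + c) % r
    return acc

def z_H(point):                           # vanishing polynomial of H
    acc = 1
    for h in H:
        acc = acc * (point - h) % r
    return acc

def x_hat(point):                         # Lagrange interpolation of x_vals
    acc = 0
    for i, xi in enumerate(H):
        num, den = 1, 1
        for j, xj in enumerate(H):
            if i != j:
                num = num * (point - xj) % r
                den = den * (xi - xj) % r
        acc = (acc + x_vals[i] * num * pow(den, -1, r)) % r
    return acc

b = 2
rho = [random.randrange(r) for _ in range(b)]   # random masking coefficients

def masked_eval(point):
    return (x_hat(point) + z_H(point) * poly_eval(rho, point)) % r

# Masking preserves the statement on H (z_H vanishes there)...
assert all(masked_eval(h) == x for h, x in zip(H, x_vals))
# ...while evaluations off H are shifted by z_H(point) * rho(point).
```

Intuitively, with b fresh random coefficients in rho, up to b evaluations of the masked polynomial at points outside H are distributed independently of the statement, which is exactly the hiding margin the padding provides.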

In more detail, we consider an augmented relation , , : , , , , = , , and pad with a dummy constraint. Concretely, let such that such that ; compute , a vector in such that . This is done by padding the matrices with a dummy constraint () on a free variable to obtain . In simpler words, we add a free statement variable to the indexed constraint system.
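A minimal sketch of this dummy-constraint padding, under an assumed dense R1CS matrix layout (the matrix encoding and helper names are illustrative, not the paper's):

```python
# Illustrative sketch: pad an R1CS index with one extra statement variable
# and one dummy constraint, so that z' = (x, x_new, w) satisfies the
# augmented system iff z = (x, w) satisfied the original one.

def pad_r1cs(A, B, C, stmt_len):
    """Insert a zero column after the statement block and append a zero row."""
    def pad(M):
        rows = [row[:stmt_len] + [0] + row[stmt_len:] for row in M]
        rows.append([0] * len(rows[0]))       # dummy constraint 0 * free = 0
        return rows
    return pad(A), pad(B), pad(C)

def satisfied(A, B, C, z, r=1019):
    dot = lambda row: sum(a * b for a, b in zip(row, z)) % r
    return all(dot(a) * dot(b) % r == dot(c) for a, b, c in zip(A, B, C))

# Tiny example: one constraint x0 * w0 = w1 with assignment z = (x0, w0, w1).
A, B, C = [[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]
z = [3, 5, 15]
assert satisfied(A, B, C, z)

A2, B2, C2 = pad_r1cs(A, B, C, stmt_len=1)
z2 = z[:1] + [7] + z[1:]        # the new statement variable is free: any value works
assert satisfied(A2, B2, C2, z2)
```

The dummy zero row constrains nothing, so the free variable can carry the extra statement element without affecting satisfiability of the original constraints.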

Finally, we use the compiler from Marlin to compile the above modified AHP with the polynomial commitment scheme from Marlin (different from PEC), resulting in preprocessing arguments that are adaptive.

Our construction of an adaptive preprocessing argument ARG = with universal SRS for extractable trapdoor commitment scheme and relation is shown in Figure 2.

We state our construction with a generic ; it is possible to instantiate with any of , or . Our Generator routine uses the same from the scheme, whereas our Indexer operates on the . Our prover algorithm first samples an additional element to compute the augmented statement . First, the prover computes to compute the evaluation commitment at point with index , keeping the randomness . It then runs the modified Marlin prover to obtain a proof . Let be the low-degree extension (LDE) of