Multi-theorem (Malicious) Designated-Verifier NIZK for QMA

07/25/2020
by Omri Shmueli, et al.
Tel Aviv University

We present the first non-interactive zero-knowledge argument system for QMA with multi-theorem security. Our protocol also improves on the setup requirements of previous constructions: it is in the malicious designated-verifier (MDV-NIZK) model (Quach, Rothblum, and Wichs, EUROCRYPT 2019), where the trusted part of the setup consists only of a common uniformly random string, and the untrusted part consists of classical public and secret verification keys; zero knowledge holds even if these keys are sampled maliciously by the verifier. The security of our protocol is established under the Learning with Errors assumption. Our main technical contribution is a general transformation that, using a NIZK for NP, compiles any sigma protocol into a reusable MDV-NIZK protocol. Our technique is classical but also works for quantum protocols, which allows us to construct a reusable MDV-NIZK for QMA.


1 Introduction

Zero-knowledge protocols make it possible to prove statements without revealing anything but the mere fact that they are true. Since their introduction by Goldwasser, Micali, and Rackoff [GMR89], they have had a profound impact on modern cryptography and theoretical computer science at large. While standard zero-knowledge protocols are interactive, Blum, Feldman, and Micali [BFM88] introduced the concept of a non-interactive zero-knowledge (NIZK) protocol, which consists of a single message sent by the prover to the verifier. NIZK protocols cannot exist in the plain model (i.e. a language with such a NIZK protocol can be decided by an efficient algorithm), but they can be realized with a pre-computed setup. The point of the setup is that it can be computed independently of the instance; usually the setup is executed by a trusted third party that generates and publishes a string of bits, and sometimes trapdoors are handed to the prover or the verifier (or both).

Although existing zero-knowledge protocols for NP cover an array of diverse tasks, and in particular it is known how to construct NIZK protocols for NP under standard computational assumptions [CCH19, PS19, BKM20], far less is known about the class QMA, the quantum generalization of NP. This knowledge gap between NP and QMA, which is present in both interactive and non-interactive zero-knowledge protocols, stems from the fact that many of the techniques that work for NP, and more precisely for classical information processing, usually fail when they are needed to process quantum information.

The first expression of the gap between classical and quantum NIZK protocols is that of setup requirements, that is, how much trust and how many resources the setup needs. For example, the standard setup in NIZK is the common reference string (CRS) model, where the trusted party samples a classical string from some specified distribution and publishes it (no trapdoors are handed to either the prover or the verifier in this model). If the reference string is simply uniformly random, then the setup is in the common random string model, which is considered to require minimal trust in the NIZK setting, as the trusted party holds no trapdoors whatsoever. NIZK arguments for NP are known to exist in the common random string model under LWE [CCH19, PS19]. In current QMA constructions the setup comprises at least a common reference string sampled by the trusted party, together with a pair of public and secret verification keys (pvk, svk), where pvk is published along with the CRS and svk is kept by the verifier, such that either:

  • pvk is a quantum state that needs to stay coherent while waiting for the proof from the prover, or

  • the pair (pvk, svk) can be sampled only by the trusted party and not by the verifier.

Aside from the above, perhaps the most basic gap between NIZK protocols for NP and for QMA concerns the existence (or non-existence) of multi-theorem security. Multi-theorem security concerns the reusability of the setup: once the setup is computed, any prover can repeatedly send single-message proofs for many different statements, without re-computing the setup for every new proof. In terms of the QMA setups above: once the CRS and the public verification key are published, they are reusable. However, although multi-theorem security is the main efficiency advantage of a NIZK protocol over an interactive protocol, we currently do not have non-interactive zero-knowledge protocols for QMA with reusable setups.

Given this gap in NIZK techniques between NP and QMA, improving the power of NIZKs for QMA seems like a natural cryptographic goal, which we explore in this work.

1.1 Results

Under the Learning with Errors (LWE) assumption [Reg09] we resolve the above open question. Specifically, we construct a NIZK argument for QMA with multi-theorem security and reduce setup requirements by proving security in the following model:

  1. The trusted party samples only a common uniformly random string crs.

  2. Given crs, any verifier can sample a pair of classical public and secret verification keys (pvk, svk); in particular, it is possible that the published pvk is maliciously generated.

Given crs and pvk, any prover can repeatedly give non-interactive zero-knowledge proofs, each consisting of a single quantum message. The above setup model was introduced by Quach, Rothblum, and Wichs [QRW19] as the malicious designated-verifier model (MDV-NIZK), and it has the same minimal trust requirements as the common random string model (but is privately verifiable).

Theorem 1.1 (informal).

Assuming that LWE is hard for polynomial-time quantum algorithms, there exists a reusable, non-interactive computational zero-knowledge argument system for QMA in the malicious designated-verifier model.

Main Technical Contribution: General Sigma Protocol MDV-NIZK Compilation.

Technically, we deviate completely from previous NIZK constructions for QMA. Our main contribution is showing how, given a NIZK for NP, it is possible to compile any sigma protocol into a reusable MDV-NIZK protocol. Our technique is simple and purely classical, but it also works for quantum zero-knowledge protocols and in particular can be used to obtain a reusable MDV-NIZK for QMA. Further details are given in the technical overview below.

1.2 Technical Overview

We next describe our construction of a multi-theorem-secure MDV-NIZK protocol for QMA. For a discussion about the possibility of constructing a NIZK protocol for QMA in the CRS model see subsection 1.3.1, and for an overview of NIZK models and previous work on NIZK for QMA see subsection 1.3.2.

We deviate from previous approaches to NIZK for QMA and take a different (and very natural) approach: find a "classical anchor" in quantum zero-knowledge protocols and then solve the problem using a NIZK for NP. As such, we first restrict our attention to an even simpler, purely classical question: given any sigma protocol, generically compile it into a multi-theorem-secure MDV-NIZK while assuming minimal properties of the protocol (in particular, we do not assume that the first message is classical). We start by considering classical sigma protocols and later see what changes are needed for the technique to work for quantum protocols.

From a Sigma Protocol to a Single-Theorem-Secure MDV-NIZK.

A sigma protocol is a 3-message public-coin proof system (with some mild zero-knowledge properties), where we denote the 3 messages by α, β, and γ (here β is a random string, called "the challenge string"). Our first step, which is very simple, is to construct an MDV-NIZK protocol with only single-theorem security out of a sigma protocol.

In a sigma protocol, since the verifier's message β is a random string, it is independent of any other information; additionally, all we need from it is that it stays hidden until after the prover sends its first message α. The verifier can therefore compute its public verification key, instance-independently, as a function of β: the public verification key is an FHE encryption of a random challenge β, and the secret verification key consists of the FHE decryption key and the challenge string,

pvk := FHE.Enc(fk, β),   svk := (fk, β).

Given the public verification key pvk, the 1-message proof procedure for an instance x goes as follows:

  • The prover P computes the first sigma protocol message α, using the witness and its randomness r.

  • P computes the last protocol message γ under the encryption; that is, it homomorphically evaluates, on the encrypted challenge in pvk, the circuit that maps β to the response γ.

  • As the proof, P sends α out in the open and γ under the encryption; that is, the proof is the pair consisting of α and the evaluated ciphertext.

In order for the proof to stay zero-knowledge, the homomorphic evaluation needs to be circuit-private. The verification algorithm is straightforward: given svk = (fk, β), an instance x, and a proof, the verifier decrypts the evaluated ciphertext to get γ, and accepts iff the sigma protocol verifier accepts the transcript (x, α, β, γ).
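To make the data flow concrete, here is a minimal, deliberately insecure Python sketch of the compiler just described. The "FHE" below is only a labelled container, and the object passed as sigma_prover/sigma_verifier (with first_message, respond, and accepts methods) is our own illustrative interface, not notation from the paper.

```python
# Toy sketch of the single-theorem compiler: beta is hidden inside pvk,
# alpha is sent in the clear, gamma is computed "under the encryption".
# NOT secure: the mock FHE below is just a container.
import secrets
from dataclasses import dataclass

@dataclass
class Ciphertext:              # stand-in for a circuit-private FHE ciphertext
    payload: object

def fhe_gen():                 # stand-in for FHE key generation
    return secrets.token_bytes(16)

def fhe_enc(fk, msg):          # stand-in for FHE.Enc(fk, msg)
    return Ciphertext(msg)

def fhe_eval(circuit, ct):     # stand-in for circuit-private FHE.Eval
    return Ciphertext(circuit(ct.payload))

def fhe_dec(fk, ct):           # stand-in for FHE.Dec(fk, ct)
    return ct.payload

def keygen(num_repetitions):
    """Verifier setup: encrypt a random challenge beta; keep (fk, beta) secret."""
    fk = fhe_gen()
    beta = [secrets.randbelow(2) for _ in range(num_repetitions)]
    return fhe_enc(fk, beta), (fk, beta)          # (pvk, svk)

def prove(pvk, x, witness, sigma_prover):
    """alpha in the open; gamma under the (mock) encryption."""
    alpha, state = sigma_prover.first_message(x, witness)
    ct_gamma = fhe_eval(lambda beta: sigma_prover.respond(state, beta), pvk)
    return alpha, ct_gamma

def verify(svk, x, proof, sigma_verifier):
    fk, beta = svk
    alpha, ct_gamma = proof
    gamma = fhe_dec(fk, ct_gamma)
    return sigma_verifier.accepts(x, alpha, beta, gamma)
```

The only point of the sketch is the shape of the messages; the actual construction relies on a real circuit-private FHE scheme, as discussed above.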

Is the Above Protocol Multi-Theorem-Secure?

While it is intuitively clear that the described construction is secure for a single use of the setup (that is, the above should, with some modifications, yield a single-theorem-secure MDV-NIZK), it is provably not multi-theorem-secure, due to a standard attack. Sigma protocols are usually parallel repetitions of 3-message zero-knowledge protocols. For example, consider the sigma protocol obtained by parallel repetition of the zero-knowledge protocol for Graph Hamiltonicity [Blu86], which is as follows: given a Hamiltonian cycle in a graph G, the prover samples a random permutation of the vertices and commits to the permuted graph (that is, the prover commits to all of the cells of the adjacency matrix representing the permuted graph). The verifier then sends a random bit b, and the prover answers accordingly:

  • If b = 0, it is considered a validity check: the prover opens all commitments and sends the permutation. The verifier accepts if the committed graph is indeed the permuted graph.

  • If b = 1, it is considered the cycle check: the prover opens the commitments only for the subgraph corresponding to the (permuted) cycle. The verifier accepts if the opening shows a Hamiltonian cycle.

If the sigma protocol used in the above MDV-NIZK construction is the parallel repetition of the zero-knowledge protocol for Hamiltonicity (we take the Hamiltonicity protocol only as a concrete, easy example; any other sigma protocol can take its role in this attack on soundness), then there is a polynomial-time malicious prover that, given multiple accesses to the verifier's verdict function under the same public/secret verification key pair, can decode the encrypted challenge string (which consists of polynomially many random bits, the i-th bit belonging to the i-th parallel repetition of the zero-knowledge protocol) and consequently break soundness.

The malicious prover takes a Hamiltonian graph G together with a Hamiltonian cycle in it, and decodes the entire challenge bit by bit. To decode the i-th challenge bit, it honestly executes the zero-knowledge prover's algorithm for all indices but i (that is, for every other index it honestly computes the commitment in the open and, under the encryption, the opening of either the entire graph and the permutation, or of just the cycle), and for index i it operates as follows: it guesses that the bit is 0, sends a commitment to a permutation of the graph out in the open, and under the encryption acts as if the bit is 0 regardless of its actual value. From the verifier's acceptance or rejection it learns whether the bit was 0 or 1. After decoding the challenge, the prover can use this information to "prove" that any graph is Hamiltonian.
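The structure of the attack can be summarized in a short sketch; build_proof and verdict_oracle are hypothetical placeholders for the honest-except-one-repetition proof generation and for the verifier's reusable verdict function.

```python
# Sketch of the challenge-decoding attack described above (names are ours).
def decode_challenge(verdict_oracle, build_proof, G, cycle, k):
    """Recover the k-bit encrypted challenge, given a Hamiltonian graph G
    with a known Hamiltonian cycle and repeated access to the verdict."""
    beta = []
    for i in range(k):
        # Behave honestly in every repetition except i; in repetition i,
        # answer under the encryption as if the i-th challenge bit were 0.
        proof = build_proof(G, cycle, guess_index=i, guessed_bit=0)
        accepted = verdict_oracle(G, proof)
        beta.append(0 if accepted else 1)   # a rejection pinpoints bit i as 1
    return beta
```

Once the challenge is known, it is no longer hidden and the prover can make the verifier accept proofs for non-Hamiltonian graphs.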

From Single-Theorem to Multi-Theorem Security.

In the above attack the prover heavily relied on a specific operation: it used a yes-instance (in the above case, a Hamiltonian graph G) in order to decode the random challenge, and then went on to use the knowledge of the challenge to give a false proof for a no-instance (in the above, a non-Hamiltonian graph).

Crucially, the prover does not know how to decode the challenge when the graph is not Hamiltonian. More specifically, in the above attack the challenge is decoded bit by bit rather than all at once, and this ability comes from the fact that G is Hamiltonian and the zero-knowledge protocol is complete: the prover can be sure that if it honestly executes the zero-knowledge protocol for all indices but i, the only index that can make the proof get rejected is i. In this isolation, checking whether the i-th challenge bit is 0 or 1 becomes trivial. However, if the graph is not Hamiltonian, then the prover cannot know which index made the proof get rejected, because all indices are prone to rejection. Formally, by the soundness of the sigma protocol, the answer from the verifier's verdict function in this case will always be a rejection, for any polynomial (or even sub-exponential) number of queries, with overwhelming probability. This means in particular that the prover cannot decode anything through its oracle access to the verdict function.

Our fix to the first protocol is based on the above observation: if we could make the random challenge change with the instance at hand, the decoding attack would be neutralized, because even if the prover decodes the challenge for a Hamiltonian graph G, it has no information about the challenge of some non-Hamiltonian graph. Since the instance is in particular a classical string, we can make the challenge change with the instance: the public verification key will not be an encrypted challenge but instead an encryption of a secret key K of a pseudorandom function. The prover computes α out in the open as before, but the homomorphic evaluation changes: under the encryption, the prover computes the challenge string as the PRF's output on the instance, β := PRF(K, x), and then computes the response γ for the challenge β.
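A minimal sketch of the circuit that is now evaluated under the FHE; HMAC-SHA256 stands in for the PRF and respond is an assumed callback computing the sigma-protocol response, so the names here are ours rather than the paper's.

```python
# The challenge is derived from the instance x via a PRF, so decoding the
# challenge for one instance reveals nothing about any other instance.
import hashlib
import hmac

def prf_bits(key: bytes, x: bytes, num_bits: int) -> list:
    """Derive num_bits pseudorandom challenge bits from (key, x)."""
    digest = hmac.new(key, x, hashlib.sha256).digest()
    if num_bits > 8 * len(digest):
        raise ValueError("extend with a counter for longer challenges")
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(num_bits)]

def evaluated_circuit(prf_key: bytes, x: bytes, respond, state, num_bits: int):
    """The circuit homomorphically evaluated by the prover."""
    beta = prf_bits(prf_key, x, num_bits)   # instance-dependent challenge
    return respond(state, beta)             # last sigma-protocol message gamma
```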

Extraction by Non-interactive Zero Knowledge for NP.

Up to this point we have only come close to constructing a provably secure MDV-NIZK. Indeed, we have not yet used any NIZK tools for NP, and in order to prove the security of our construction we need knowledge extraction from both the prover and the verifier.

To prove soundness, our thought process is roughly the following. We know that the prover computes γ obliviously under the FHE; more precisely, it homomorphically evaluates the circuit that computes β := PRF(K, x) and then, given β, computes γ. The part of the circuit that computes γ from β is the "non-trivial" part of the circuit and is determined by a secret string (the information that the honest sigma protocol prover uses in order to compute γ, namely the randomness of the prover and possibly the witness). If we could extract this string from a prover that successfully cheats in the NIZK protocol (e.g. by having the prover give a proof of knowledge of the non-trivial part of the circuit), then we could obtain a successfully cheating prover for the sigma protocol and thus prove security. To see this, note that by the hiding of the FHE and by the pseudorandomness of the PRF, even if, as the public verification key, we send an encryption of zeros instead of an encryption of the PRF secret key, the extracted string still needs to yield a circuit that does well in generating a satisfying γ for a now-truly-random challenge β.

On the zero-knowledge side we also need extraction, and we start by recalling a basic property of a sigma protocol: if we know the challenge string before sending the first message, then we can simulate a view that is indistinguishable from a real interaction with the honest prover. This means that the information we want to extract from the malicious verifier is the secret PRF key, which in particular holds the information needed to obtain the challenge β.

We solve both extraction tasks by a combination of a two-sided NP NIZK and a public-key encryption scheme with pseudorandom public keys. Given a PKE scheme whose public keys are pseudorandom strings of some length ℓ, we take the common random string of our protocol to be (1) the common random string of an NP NIZK protocol, which we denote by crs_NIZK, concatenated with (2) a random string of length ℓ, which we denote by ek (for "extraction key").

We let each of the parties encrypt, under ek, the secrets that we want to extract, and then use the NIZK to prove consistency between the content of the PKE encryption and the protocol computations. More precisely, as part of its 1-message proof, the prover gives a proof that the string encrypted using the PKE yields the (canonical) circuit that it used for the (circuit-private) homomorphic evaluation that generated γ, and the verifier, as part of its public verification key, gives a proof that the PRF key encrypted using the PKE is the same key encrypted under the FHE. Note that the information the parties encrypt under a random string rather than a real PKE key stays secure, because a real key is indistinguishable from a random string: an adversary that manages to break the PKE when a random string is used as the public key can break the pseudorandomness of the public keys.

When we want to extract information (either in the soundness reduction or in the zero-knowledge simulation), we sample ek using the PKE key-generation algorithm; since public keys are pseudorandom, the change in the key distribution goes unnoticed by both parties. At that point the parties still encrypt their secrets and prove that they do so using the NIZK, and the extractor can simply use the PKE decryption to obtain the secrets.
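A small sketch of this key-switching idea, assuming a hypothetical pke_gen() that returns a (public key, secret key) pair; in the real protocol ek is just a uniformly random string, and only the reduction or the simulator ever flips the extraction switch.

```python
import secrets

def sample_crs(pke_gen, nizk_crs_len: int, ek_len: int, extraction: bool = False):
    """Sample the protocol's common random string (crs_NIZK, ek).
    In the real protocol ek is uniformly random; in the soundness reduction or
    zero-knowledge simulation it is a genuine PKE public key, and the extractor
    keeps the matching decryption key dk to open the parties' consistency
    encryptions."""
    crs_nizk = secrets.token_bytes(nizk_crs_len)
    if extraction:
        ek, dk = pke_gen()
    else:
        ek, dk = secrets.token_bytes(ek_len), None
    return (crs_nizk, ek), dk
```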

Compiling Quantum Protocols.

Our technique so far is entirely classical and compiles classical sigma protocols. We now ask whether it also works for quantum sigma protocols. This can be answered by answering the following question: exactly which properties of the sigma protocol did we use in order for the MDV-NIZK protocol to work?

It can be verified that even if we assume nothing about the sigma protocol that we compile, every action in the MDV-NIZK protocol except the homomorphic evaluation of the circuit can stay exactly the same. Regarding the homomorphic evaluation, the issue is the following: in order to still be able to extract the circuit's information from the prover, the computation that takes β and outputs γ needs to be a classical circuit. This is not necessarily the case in a quantum protocol. For example, in the quantum zero-knowledge protocol for QMA of [BJSW16] (which is also the basis for the quantum NIZK protocol of [CVZ19]), in order to generate γ given β, a quantum Clifford operation chosen with respect to β first needs to be executed on the prover's state, followed by a measurement; the prover then proves in zero knowledge that the classical string obtained by the measurement satisfies some properties (in that protocol it is also needed that the verifier itself performs the Clifford operation and measurement, which makes the protocol more challenging to use for a NIZK protocol). Luckily, we identify a different quantum protocol that does satisfy the property that γ can be computed by an entirely classical circuit.

We consider the Consistency of Local Density Matrices (CLDM) problem [Liu06], which is a QMA problem with some special properties. In [BG19], Broadbent and Grilo show that CLDM is QMA-complete and construct a very simple quantum zero-knowledge protocol for it. The [BG19] zero-knowledge protocol for CLDM is as follows: given a quantum witness, the protocol starts with the prover sending a quantum one-time pad encryption of the witness as the message α. More precisely, for a length-n witness the prover samples classical random pads a, b ∈ {0,1}^n, applies the corresponding Pauli one-time pad to the witness, and then sends the transformed quantum state together with classical commitments to the QOTP keys (a, b). For a random challenge β, the prover's response γ is an opening of part of the state. We find the CLDM problem, and specifically the zero-knowledge protocol for it, especially attractive for our purposes, because γ is only a function of the classical randomness of the prover and the challenge β, which in particular means that the circuit computing γ from β can stay classical in our setting.
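The following sketch illustrates the property we rely on: the response depends only on the prover's classical randomness (the quantum one-time-pad keys and commitment openings) and the challenge. The subset-opening interface is our own simplification of the [BG19] response, not its exact definition.

```python
import secrets

def sample_qotp_keys(n: int):
    """Classical randomness for a quantum one-time pad on n qubits."""
    a = [secrets.randbelow(2) for _ in range(n)]   # X-pad bits
    b = [secrets.randbelow(2) for _ in range(n)]   # Z-pad bits
    return a, b

def respond(a, b, openings, beta):
    """Open the pad bits and commitments for the positions selected by the
    challenge beta; every input and output here is classical."""
    selected = [i for i, bit in enumerate(beta) if bit == 1]
    return {i: (a[i], b[i], openings[i]) for i in selected}
```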

Finally, by using the sigma protocol obtained by parallel repetition of the zero-knowledge protocol from [BG19], we get a clean and simple non-interactive computational zero-knowledge argument system for the class QMA in the malicious designated-verifier model:

  1. Common Random String: crs = (crs_NIZK, ek), as described above.

  2. Public and Secret Verification Keys: the verifier samples a PRF key and an FHE key; pvk consists of an FHE encryption of the PRF key, together with a PKE encryption of it under ek and a NIZK proof of consistency, and svk consists of the FHE key and the PRF key.

For any prover that wishes to give a proof for an instance x, it executes the following:

  • Proof: If the consistency proof in pvk is valid, the prover computes its proof as described above (α in the open, the response under the FHE evaluation, a PKE encryption under ek, and a NIZK proof of consistency) and sends it to the verifier.

1.3 Related Work

In this section we discuss the main challenges in the construction of non-interactive zero-knowledge protocols for QMA (specifically in the CRS model) and the previous works on QMA NIZKs.

1.3.1 Can we Build a NIZK protocol for QMA in the CRS model?

In short, the answer to the above question is that we do not know, and this section does not aim to answer it. It is intended to give some evidence as to why constructing a NIZK for QMA in the CRS model seems to require a different set of techniques from what we currently have for NP. In what follows, we start by briefly recalling how NIZKs for NP are constructed, and then explain why current approaches fail in the setting of quantum proofs.

NP, Fiat-Shamir and Correlation Intractability.

In order to construct a non-interactive zero-knowledge protocol for NP under standard assumptions, the construction starts with a sigma protocol. To make the protocol non-interactive, the Fiat-Shamir (FS) transform is applied: assuming public oracle access to a random function H, the prover applies H to the first message α and treats its (random-string) output as the challenge string β. It then computes γ and sends all of this information to the verifier, who makes sure that β was rightfully generated (i.e., that it equals H(α)) and that the sigma protocol verifier accepts. Since we do not know how to construct a cryptographic primitive that acts as a publicly computable random function, the above protocol is secure only in the random oracle model, that is, only if we directly assume public access to such a random function H.

In order to prove the security of the NIZK protocol in the standard model (with access to a common reference string rather than a random oracle), the final part of the construction involves swapping the random function H with a new, special hash function; this general technique of swapping H with a special hash function is usually called the Correlation Intractability (CI) paradigm [CGH04]. The properties of the hash function, and the meaning of correlation intractability, are less relevant to this overview; it suffices to say that under the LWE assumption it is known how to construct a hash function that can be swapped with H in the FS transform such that the protocol can be proven secure [CCH19, PS19].
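For concreteness, a minimal sketch of the classical Fiat-Shamir transform just described, with SHA-256 in the role of the hash that replaces H; the sigma_prover object and its methods are assumptions of this illustration, and α is assumed to be a classical byte string, which is exactly what fails in the quantum setting discussed next.

```python
import hashlib

def fs_challenge(x: bytes, alpha: bytes, num_bits: int) -> list:
    """Derive the challenge beta = H(x, alpha) as a list of bits."""
    digest = hashlib.sha256(x + alpha).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(num_bits)]

def ni_prove(x: bytes, witness, sigma_prover, num_bits: int):
    alpha, state = sigma_prover.first_message(x, witness)
    beta = fs_challenge(x, alpha, num_bits)
    gamma = sigma_prover.respond(state, beta)
    return alpha, gamma   # the verifier recomputes beta and checks (alpha, beta, gamma)
```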

Can we use Known Classical NIZK Techniques for Quantum Protocols?

There are two known routes for getting a quantum-secure NIZK for NP in the CRS model, the first is through the FS transform and CI (which also uses only standard assumptions, described above) and the second is through the hidden bits model and indistinguishability obfuscation. It is natural to ask whether we can use these techniques for QMA (the question of whether the FS transform can be used for quantum protocols was asked as one of the open questions in section 1.4 of [BG19]).

We first review the possibility of using the FS transform (and in particular correlation intractability) for QMA and explain why there is an issue with the no-cloning theorem. In the quantum setting, sigma protocols [BG19, BJSW16] look quite similar, with the main difference that the first message α is quantum (and, of course, the prover takes as input a quantum witness rather than a classical one). Recall that when we use the FS transform on a sigma protocol in order to generate a NIZK, for the protocol to be complete, when the parties act honestly the verifier needs to verify that the random function yields the challenge, that is, that β = H(α). This means that H now needs to be a quantum transformation such that, for an honestly generated α, its output is always the same classical string β (with overwhelming probability). Now, denote by β this classical string; then we have a generating circuit for the quantum witness, obtained by composing the (purified) inverses of H and of the prover's first-message computation and applying them to β. This seems to violate the no-cloning theorem in the following manner: the prover gets one copy of the witness and can produce a generating circuit for the witness state, and this circuit can be used to generate arbitrarily many copies of the state. Finally, because we can always consider a trivial language with a dummy witness and take the quantum witness to be some unclonable state (for example, a pseudorandom quantum state), we get a contradiction to the no-cloning theorem.

Even if we aim to construct a NIZK using the FS transform for QCMA, the subclass of QMA where the verification algorithm is still quantum but the witness is classical, the problem does not seem to be solved. The reason is that we do not know how to construct sigma protocols for QCMA where the first message is classical, and so the same contradiction with the no-cloning theorem holds.

The second known route to obtaining a quantum-secure NIZK protocol for NP in the CRS model is through the hidden bits model [FLS99], which is implementable from sub-exponentially-secure indistinguishability obfuscation [BPW16]. In the hidden bits model, intuitively (and roughly), the trusted party samples as the common reference string a commitment to a string drawn from some distribution (where, by using a trapdoor permutation, the prover can open the commitments efficiently), and the prover proves that the instance at hand satisfies some property related to the string underlying the commitments. Even if we are willing to make the very strong cryptographic assumptions needed to realize this protocol (i.e. sub-exponentially-secure post-quantum indistinguishability obfuscation), it is currently unknown how to use the hidden bits model to instantiate non-interactive zero-knowledge quantum protocols.

1.3.2 Relaxations of the CRS Model and Previous Work

The constructions of NIZKs for NP discussed in subsection 1.3.1 are implicitly in the CRS model, where the setup consists of a string that is sampled and published by the trusted party; in particular, neither the prover nor the verifier holds any trapdoor for the setup. Sometimes, when it is unknown how to build a NIZK in the CRS model (or how to minimize the assumptions for building one), we turn to relaxations of the CRS model. For example, in the designated-verifier model (DV-NIZK) [PV06] the trusted party samples, along with the CRS, a pair of public and secret verification keys (pvk, svk), publishes pvk along with the CRS, and hands svk only to the verifier. Another example is the designated-prover model (DP-NIZK) [KW19], which is analogous to the DV-NIZK model, except that the prover is the one who gets a secret, now a proof key.

It is a well-known fact in the design of NIZKs that when the verifier holds a secret verification key (e.g. in the DV-NIZK model), multi-theorem zero knowledge can be achieved generically by the compiler of [FLS99], but multi-theorem soundness becomes non-trivial. For example, it is possible (and is sometimes provably the case) that the prover can decode the verifier's secret key by accessing the verifier's verdict function multiple times, consequently breaking the soundness of the protocol. Indeed, one example is that until the works of [QRW19, LQR19], based on [PV06] it was only known how to get single-theorem-secure DV-NIZK for NP, and another example is that this is the current situation with NIZK constructions for QMA.

The QMA NIZK protocol of Broadbent and Grilo [BG19] is in the secret parameters model (i.e. the protocol is both designated-prover and designated-verifier, and both parties get secret keys from the trusted party), but it is a proof system and has statistical soundness rather than the computational soundness we achieve. The protocol of Coladangelo, Vidick and Zhang [CVZ19] is in a model somewhere between the common reference string model and the DV-NIZK model, where the trusted party samples a common reference string and the verifier itself samples a pair (pvk, svk) where pvk is a quantum state. Neither of the abovementioned protocols is reusable.

Outside of the standard model, an additional construction, by Alagic, Childs, Grilo and Hung [ACGH19], yields a QMA NIZK protocol in the quantum random oracle model (with additional setup in the secret parameters model) which is both reusable and classically verifiable.

There are two main issues with letting the trusted party sample secret keys for any of the parties. First, the trust requirements of the setup increase, as the party receiving the secret key has to assume that the trusted party handles its secret information securely. The second issue is the centralization of computational resources: for example, in the DV-NIZK model the trusted party is responsible for sampling a fresh key pair for every new verifier that wishes to use the protocol, which is very different from the CRS setting, where it samples a single string and from that point on can terminate.

The malicious designated-verifier (MDV-NIZK) model [QRW19, LQR19], which is also the model of our protocol, seeks to solve the above two problems. In the MDV-NIZK model the trusted party only samples a common random string, and then any verifier wishing to use the protocol can sample by itself a pair of classical keys (pvk, svk) and publish pvk. The protocol stays secure even if the public key pvk is maliciously generated.

Acknowledgments

We deeply thank Nir Bitansky and Zvika Brakerski for helpful discussions during the preparation of this work.

2 Preliminaries

We rely on standard notions of classical Turing machines and Boolean circuits:

  • A PPT algorithm is a probabilistic polynomial-time Turing machine.

  • Let A be a PPT algorithm and let A(x) denote the random variable which is the output of A on input x. Whenever the entropy of the output of A is non-zero, we denote the random experiment of sampling an output y by y ← A(x). If the entropy of the output of A is zero (i.e. A is deterministic), we write y := A(x).

  • We sometimes think of PPT algorithms as polynomial-size uniform families of circuits; these are equivalent models. A polynomial-size circuit family is a sequence of circuits {C_n}_{n∈ℕ}, such that each circuit C_n is of size polynomial in n. We say that the family is uniform if there exists a deterministic polynomial-time algorithm that on input 1^n outputs C_n.

  • For a PPT algorithm A, we denote by A(x; r) the output of A on input x and random coins r. For such an algorithm and any input x, we write y ∈ A(x) to denote the fact that y is in the support of A(x).

We follow standard notions from quantum computation.

  • A QPT algorithm is a quantum polynomial-time Turing machine.

  • We sometimes think of QPT algorithms as polynomial-size uniform families of quantum circuits; these are equivalent models. A polynomial-size quantum circuit family is a sequence of quantum circuits {C_n}_{n∈ℕ}, such that each circuit C_n is of size polynomial in n. We say that the family is uniform if there exists a deterministic polynomial-time algorithm that on input 1^n outputs a description of C_n.

  • An interactive algorithm, in a two-party setting, has input divided into two registers and output divided into two registers. For the input, one register is for an input message from the other party, and a second register is an auxiliary input that acts as an inner state of the party. For the output, one register is for a message to be sent to the other party, and the other register is an auxiliary output that again acts as an inner state. For a quantum interactive algorithm, both input and output registers are quantum.

The Adversarial Model.

Throughout, efficient adversaries are modeled as quantum circuits with non-uniform quantum advice (i.e. quantum auxiliary input). Formally, a polynomial-size adversary consists of a polynomial-size non-uniform sequence of quantum circuits together with a sequence of polynomial-size mixed quantum states serving as advice.

For an interactive quantum adversary in a classical protocol, it can be assumed without loss of generality that its output message register is always measured in the computational basis at the end of its computation. This assumption is indeed without loss of generality, because whenever a quantum state is sent through a classical channel the qubits decohere and are effectively measured in the computational basis.

Indistinguishability in the Quantum Setting.
  • Let μ : ℕ → [0,1] be a function.

    • μ is negligible if for every constant c > 0 there exists N ∈ ℕ such that for all n > N, μ(n) < n^{-c}.

    • μ is noticeable if there exist a constant c > 0 and N ∈ ℕ such that for every n ≥ N, μ(n) ≥ n^{-c}.

    • μ is overwhelming if it is of the form 1 − ν(n), for a negligible function ν.

  • We may consider random variables over bit strings or over quantum states. This will be clear from the context.

  • For two random variables X and Y supported on quantum states, a quantum distinguisher circuit D with quantum auxiliary input ρ, and ε ∈ [0,1], we write X ≈_{D,ρ,ε} Y if |Pr[D(X; ρ) = 1] − Pr[D(Y; ρ) = 1]| ≤ ε.

  • Two ensembles of random variables X = {X_i}_i and Y = {Y_i}_i over the same set of indices are said to be computationally indistinguishable, denoted by X ≈_c Y, if for every polynomial-size quantum distinguisher (with quantum advice) there exists a negligible function μ such that for all i, X_i and Y_i cannot be distinguished with advantage better than μ(i).

  • The trace distance between two distributions supported over quantum states, denoted TD(·, ·), is a generalization of statistical distance to the quantum setting and represents the maximal distinguishing advantage between two distributions supported over quantum states, by unbounded quantum algorithms. We thus say that ensembles X = {X_i}_i and Y = {Y_i}_i supported over quantum states are statistically indistinguishable (and write X ≈_s Y) if there exists a negligible function μ such that for all i, TD(X_i, Y_i) ≤ μ(i).
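For reference, the standard formula for the trace distance (our rendering of the standard definition; the paper's own displayed equations are not reproduced here):

```latex
\mathrm{TD}(\rho, \sigma) \;=\; \tfrac{1}{2}\,\lVert \rho - \sigma \rVert_1
                          \;=\; \tfrac{1}{2}\,\mathrm{Tr}\,\lvert \rho - \sigma \rvert .
```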

In what follows, we introduce the cryptographic tools used in this work. By default, all algorithms are classical and efficient, and security holds against polynomial-size non-uniform quantum adversaries with quantum advice.

2.1 Cryptographic Tools

2.1.1 Interactive Proofs and Sigma Protocols

We define interactive proof systems and then proceed to describe sigma protocols, which are a special case of interactive proof systems. In what follows, we denote by (P, V) a protocol between two parties P and V. For common input x, we denote by OUT_V(P, V)(x) the output of V in the protocol. For honest verifiers, this output will be a single bit indicating acceptance or rejection of the proof. Malicious quantum verifiers may have arbitrary quantum output.

Definition 2.1 (Quantum Proof Systems for QMA).

Let (P, V) be a quantum protocol with an honest QPT prover P and an honest QPT verifier V for a problem in QMA, satisfying:

  1. Statistical Completeness: There is a polynomial k(·) and a negligible function μ(·) such that for any security parameter λ, any yes-instance x, and any valid witness w for x (for a problem in QMA and an instance x, the set of valid witnesses is the (possibly infinite) set of quantum witnesses that make the BQP verification machine accept with some overwhelming probability), the prover, given k(λ) copies of w, makes the verifier accept with probability at least 1 − μ(λ).

  2. Statistical Soundness: There exists a negligible function μ(·) such that for any (unbounded) prover P*, any security parameter λ ∈ ℕ, and any no-instance x, the verifier accepts with probability at most μ(λ).

We use the abstraction of sigma protocols, which are public-coin three-message proof systems with a weak zero-knowledge guarantee. We define quantum sigma protocols for gap problems in QMA.

Definition 2.2 (Quantum Sigma Protocol for QMA).

A quantum sigma protocol for a gap problem in QMA is a quantum proof system (as in Definition 2.1) with 3 messages and the following syntax.

  • Given polynomially many copies of the quantum witness and classical randomness r, the first prover message consists of a quantum message α generated by a quantum unitary computation.

  • The verifier simply outputs a string β of random bits.

  • Given the verifier's challenge β and the randomness r, the prover outputs a response γ by a classical computation.

The protocol satisfies the following.

Special Zero-Knowledge:

There exists a QPT simulator Sim such that, given the instance and the challenge β in advance, the simulated view is computationally indistinguishable from the view generated in an honest execution, where the instance is a yes-instance, the witness is valid, β is a uniformly random challenge, and the prover's randomness r is uniform over the amount of randomness needed for the first prover message.

Instantiations.

Quantum sigma protocols follow from the parallel repetition of the 3-message quantum zero-knowledge protocols of [BG19] for QMA.

2.1.2 Leveled Fully-Homomorphic Encryption with Circuit Privacy

We define a leveled fully-homomorphic encryption scheme with circuit privacy; that is, for an encryption of a string x and a circuit C, the C-homomorphically-evaluated ciphertext reveals nothing about C beyond C(x).

Definition 2.3 (Circuit-Private Fully-Homomorphic Encryption).

A circuit-private, leveled fully-homomorphic encryption scheme FHE = (Gen, Enc, Eval, Dec) has the following syntax:

  • Gen: a probabilistic algorithm that takes a security parameter and a circuit size bound and outputs a secret key sk.

  • Enc: a probabilistic algorithm that, given the secret key, takes a string x and outputs a ciphertext ct.

  • Eval: a probabilistic algorithm that takes a (classical) circuit C and a ciphertext ct and outputs an evaluated ciphertext.

  • Dec: a deterministic algorithm that, given the secret key, takes a ciphertext and outputs a string.

The scheme satisfies the following.

  • Perfect Correctness: For any polynomial size bound, any security parameter, any classical circuit C within the size bound, and any input x for C, decrypting the homomorphic evaluation of C on an encryption of x returns C(x) with probability 1.

  • Input Privacy: For every polynomial message length (and any polynomial circuit size bound), encryptions of any two messages of that length are computationally indistinguishable, where the secret key is sampled honestly by Gen.

  • Statistical Circuit Privacy: There exist unbounded algorithms, a probabilistic simulator Sim and a deterministic extractor Ext, such that:

    • For every security parameter and every ciphertext ct, the extractor Ext outputs a plaintext x.

    • For any polynomial circuit size bound, the homomorphic evaluation of a circuit C within that bound on a ciphertext ct is statistically indistinguishable from the output of Sim given only ct and the value C(x), where x is the plaintext that Ext extracts from ct.

The next claim follows directly from the circuit privacy property, and will be used throughout the analysis.

Claim 2.1 (Evaluations of Agreeing Circuits are Statistically Close).

For any polynomial size bound s(·), for any two functionally-equivalent s(λ)-size circuits C_0, C_1 and any ciphertext ct, the homomorphic evaluations of C_0 and of C_1 on ct are statistically indistinguishable.

Instantiations.

Circuit-private leveled FHE schemes are known based on LWE [OPCPC14, BD18].
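A minimal interface sketch matching the syntax of Definition 2.3; the method names and type hints are ours, and any LWE-based circuit-private leveled FHE (e.g. following [OPCPC14, BD18]) would instantiate it.

```python
from typing import Any, Callable, Protocol

class CircuitPrivateFHE(Protocol):
    def gen(self, security_parameter: int, circuit_size_bound: int) -> Any:
        """Sample a secret key for the given size bound."""
    def enc(self, sk: Any, message: bytes) -> Any:
        """Encrypt a string under the secret key."""
    def eval(self, circuit: Callable[[bytes], bytes], ct: Any) -> Any:
        """Homomorphically evaluate a classical circuit; circuit privacy says
        the result reveals nothing about the circuit beyond its output."""
    def dec(self, sk: Any, ct: Any) -> bytes:
        """Decrypt an (evaluated) ciphertext."""
```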

2.1.3 Pseudorandom-key Public-key Encryption

We define a public-key encryption scheme with pseudorandom public keys.

Definition 2.4 (Pseudorandom-key Public-key Encryption).

A pseudorandom-key public-key encryption scheme PKE = (Gen, Enc, Dec) has the following syntax:

  • Gen: a probabilistic algorithm that takes a security parameter and outputs a pair of public and secret keys (pk, sk).

  • Enc: a probabilistic algorithm that, given the public key, takes a string x and outputs a ciphertext ct.

  • Dec: a deterministic algorithm that, given the secret key, takes a ciphertext and outputs a string.

The scheme satisfies the following.

  • Statistical Correctness Against Malicious Encryptors: There is a negligible function μ(·) such that for any security parameter λ and any input x, the following holds with probability at least 1 − μ(λ) over sampling the keys: decryption of any encryption of x (under any choice of encryption randomness) returns x.

  • Public-key Pseudorandomness: For a security parameter λ, let ℓ(λ) be the length of the public key generated by Gen; then the public key is computationally indistinguishable from a uniformly random string of length ℓ(λ).

  • Encryption Security: For every polynomial message length, encryptions of any two messages of that length under an honestly generated public key are computationally indistinguishable.

Instantiations.

Pseudorandom-key public-key encryption schemes are known based on LWE [Reg09].
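An analogous interface sketch for Definition 2.4 (names ours); Regev encryption [Reg09] is the standard LWE-based instantiation, and the comment on each method records the property that the protocol actually uses.

```python
from typing import Any, Protocol, Tuple

class PseudorandomKeyPKE(Protocol):
    def gen(self, security_parameter: int) -> Tuple[bytes, Any]:
        """Sample (pk, sk); pk must be indistinguishable from a uniform string,
        which is what lets a random string in the CRS stand in for a key."""
    def enc(self, pk: bytes, message: bytes) -> bytes:
        """Encrypt under pk; correctness must hold (with overwhelming probability
        over key generation) even for adversarially chosen encryption randomness."""
    def dec(self, sk: Any, ct: bytes) -> bytes:
        """Decrypt with the secret key (used only by the extractor)."""
```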

2.1.4 Pseudorandom Function

Definition 2.5 (Pseudorandom Function (PRF)).

A pseudorandom function scheme PRF = (Gen, Eval) has the following syntax:

  • Gen: a probabilistic algorithm that takes a security parameter and an output size and outputs a secret key K.

  • Eval: a deterministic algorithm that, given the secret key, takes a string x and outputs a string of the specified output size.

The scheme satisfies the following property.

  • Pseudorandomness: For every quantum polynomial-size distinguisher and polynomial output size there is a negligible function μ(·) such that for all λ, the distinguisher cannot tell oracle access to Eval(K, ·), for a key K sampled by Gen, from oracle access to a truly random function with the same output size, except with advantage μ(λ).

2.1.5 NIZK Argument for NP in the Common Random String Model

We define non-interactive computational zero-knowledge arguments for NP in the common random string model, with adaptive multi-theorem security.

Definition 2.6 (NICZK Argument for NP).

A non-interactive computational zero-knowledge argument system in the common random string model for a language L in NP consists of 3 algorithms (Setup, P, V) with the following syntax:

  • Setup: a classical algorithm that on input a security parameter simply samples a common uniformly random string crs.

  • P: a probabilistic algorithm that on input crs, an instance x, and a witness w, outputs a proof π.

  • V: a deterministic algorithm that on input crs, an instance x, and a proof π, outputs a bit.

The protocol satisfies the following properties.

  • Perfect Completeness: For any security parameter, any x ∈ L, and any witness w for x, the verifier accepts an honestly generated proof with probability 1.

  • Adaptive Computational Soundness: For every quantum polynomial-size prover P* there is a negligible function μ(·) such that for every security parameter λ, the probability that P* outputs an instance x ∉ L together with a proof that the verifier accepts is at most μ(λ).

  • Multi-Theorem Adaptive Computational Zero Knowledge: There exists a polynomial-time simulator Sim such that for every quantum polynomial-size distinguisher there is a negligible function μ(·) such that for every security parameter λ, the distinguisher cannot tell apart, except with advantage μ(λ), the real world, in which the common random string is sampled honestly and proof queries are answered by the honest prover, from the simulated world, in which Sim samples the common random string together with a trapdoor and answers proof queries using that trapdoor, where:

    • In every query that the distinguisher makes to the oracle, it sends a pair (x, w) where x ∈ L and w is a valid witness for x.

    • In the real world the oracle is the prover algorithm, while the simulator acts only on its sampled trapdoor and on x.

Instantiations.

Non-interactive computational zero-knowledge arguments for NP in the common random string model with both adaptive soundness and zero knowledge are known based on LWE [CCH19, PS19].
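The corresponding interface sketch for Definition 2.6 (names ours), matching the three algorithms listed above; the LWE-based instantiations of [CCH19, PS19] fit this shape.

```python
from typing import Any, Protocol

class NIZKForNP(Protocol):
    def setup(self, security_parameter: int) -> bytes:
        """Sample a common uniformly random string."""
    def prove(self, crs: bytes, x: Any, w: Any) -> bytes:
        """Produce a proof for the NP statement x using witness w."""
    def verify(self, crs: bytes, x: Any, proof: bytes) -> bool:
        """Accept or reject a proof for x."""
```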

2.1.6 Malicious Designated-Verifier Non-interactive Zero-knowledge for QMA

We define non-interactive zero-knowledge protocols in the malicious designated-verifier model (MDV-NIZK) for QMA, with adaptive (and non-adaptive) multi-theorem security.

Definition 2.7 (MDV-NICZK Argument for QMA).

A non-interactive computational zero-knowledge argument system in the malicious designated-verifier model for a gap problem in QMA consists of 4 algorithms (Setup, VSetup, P, V) with the following syntax:

  • Setup: a classical algorithm that on input a security parameter simply samples a common uniformly random string crs.

  • VSetup: a classical algorithm that on input crs samples a pair (pvk, svk) of public and secret verification keys.

  • P: a quantum algorithm that on input crs, the public verification key pvk, an instance x, and polynomially many identical copies of a quantum witness for x, outputs a quantum state as the proof.

  • V: a quantum algorithm that on input crs, the secret verification key svk, an instance x, and a quantum proof, outputs a bit.

The protocol satisfies the following properties.

  • Statistical Completeness: There is a polynomial k(·) and a negligible function μ(·) such that for any security parameter λ, any yes-instance x, any valid witness for x, and any honestly generated crs and (pvk, svk), the verifier accepts an honestly generated proof (computed from k(λ) witness copies) with probability at least 1 − μ(λ).

  • Multi-Theorem Adaptive Computational Soundness: For every quantum polynomial-size prover P* there is a negligible function μ(·) such that for every security parameter λ, the probability that P*, given crs, an honestly sampled pvk, and repeated access to the verifier's verdict function, outputs a no-instance together with a proof that the verifier accepts is at most μ(λ).

  • Multi-Theorem Adaptive Computational Zero Knowledge: There exists a quantum polynomial-time simulator Sim such that for every quantum polynomial-size distinguisher there is a negligible function μ(·) such that for every security parameter λ, the distinguisher cannot tell apart, except with advantage μ(λ), the real world, in which proof queries are answered by the honest prover, from the simulated world, in which Sim samples the common random string together with a trapdoor and answers proof queries using that trapdoor, where:

    • In every query that the distinguisher makes to the oracle, it sends a triplet (pvk, x, w) where pvk can be arbitrary (and possibly maliciously generated), x is a yes-instance, and w is a valid witness for x.

    • In the real world the oracle is the prover algorithm, while the simulator acts only on its sampled trapdoor and on (pvk, x).

We note that the standard (non-adaptive) soundness guarantees the following:

Definition 2.8 (MDV-NICZK Argument for QMA with Standard Soundness).

A non-interactive computational zero-knowledge argument system in the malicious designated-verifier model for a gap problem has standard non-adaptive soundness if it satisfies the same properties described in Definition 2.7, with the only change being that instead of multi-theorem adaptive soundness it satisfies the following guarantee:

  • Multi-Theorem Computational Soundness: For every quantum polynomial-size prover P* and every no-instance x, there is a negligible function μ(·) such that for every security parameter λ, the prover makes the verifier accept a proof for x with probability at most μ(λ).
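Finally, an interface sketch for the MDV-NIZK syntax of Definitions 2.7 and 2.8 (names ours); the prover and verifier are quantum algorithms, which is only indicated here by opaque placeholder types for the witness copies and the proof state.

```python
from typing import Any, Protocol, Tuple

class MDVNIZKForQMA(Protocol):
    def setup(self, security_parameter: int) -> bytes:
        """Trusted part of the setup: a common uniformly random string."""
    def vsetup(self, crs: bytes) -> Tuple[bytes, Any]:
        """Untrusted part: the verifier samples classical (pvk, svk) by itself."""
    def prove(self, crs: bytes, pvk: bytes, x: Any, witness_copies: Any) -> Any:
        """Quantum prover: outputs a quantum proof from poly-many witness copies."""
    def verify(self, crs: bytes, svk: Any, x: Any, proof: Any) -> bool:
        """Quantum verifier: accepts or rejects using the secret verification key."""
```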

3 Non-interactive Zero-knowledge Protocol

In this section we describe a non-interactive computational zero-knowledge argument system in the malicious designated-verifier model for an arbitrary QMA gap problem, according to Definition 2.7.

Ingredients and notation:
  • A non-interactive zero-knowledge argument for NP in the common random string model.

  • A pseudorandom function.

  • A leveled fully-homomorphic encryption scheme with circuit privacy.

  • A public-key encryption scheme with pseudorandom public keys.

  • A 3-message quantum sigma protocol for QMA.

We describe the protocol in Figure 1.

Protocol 1

Common Input: An instance x, for security parameter λ.

P's private input: Polynomially many identical copies of a quantum witness for x.

Common Random String: Sample the common random string crs_NIZK of the NP NIZK argument, and an additional random string ek whose length is the size of a public key generated by the PKE key-generation algorithm. Publish crs = (crs_NIZK, ek) as the common random string.

Public and Secret Verification Keys: The verifier samples public and secret verification keys as follows. It samples a PRF key and an FHE secret key, and encrypts the PRF key using the FHE encryption; let r be the randomness used for this encryption. It also encrypts the PRF key using the PKE under ek, and computes a NIZK proof for the NP statement declaring that the tuple is consistent. (Footnote: Formally, there exist