Quantum distinguishing complexity, zero-error algorithms, and statistical zero knowledge

02/10/2019 ∙ by Shalev Ben-David, et al. ∙ University of Waterloo, Microsoft

We define a new query measure we call quantum distinguishing complexity, denoted QD(f) for a Boolean function f. Unlike a quantum query algorithm, which must output a state close to |0> on a 0-input and a state close to |1> on a 1-input, a "quantum distinguishing algorithm" can output any state, as long as the output states for any 0-input and 1-input are distinguishable. Using this measure, we establish a new relationship in query complexity: For all total functions f, Q_0(f) = O(Q(f)^5), where Q_0(f) and Q(f) denote the zero-error and bounded-error quantum query complexity of f respectively, improving on the previously known sixth power relationship. We also define a query measure based on quantum statistical zero-knowledge proofs, QSZK(f), which is at most Q(f). We show that QD(f) in fact lower bounds QSZK(f) and not just Q(f). QD(f) also upper bounds the (positive-weights) adversary bound, which yields the following relationships for all f: Q(f) >= QSZK(f) >= QD(f) = Omega(Adv(f)). This sheds some light on why the adversary bound proves suboptimal bounds for problems like Collision and Set Equality, which have low QSZK complexity. Lastly, we show implications for lifting theorems in communication complexity. We show that a general lifting theorem for either zero-error quantum query complexity or for QSZK would imply a general lifting theorem for bounded-error quantum query complexity.







1 Introduction

In the model of query complexity, we wish to compute some known Boolean function f on an unknown input x that we can access through an oracle that knows x. In the classical setting, the oracle responds with x_i when queried with an index i. For quantum models, we use essentially the same oracle, but slightly modified to make it unitary. The bounded-error quantum query complexity of a function f, denoted Q(f), is the minimum number of queries to the oracle needed to compute f(x) with probability greater than 2/3 on any input x. In other words, the quantum query algorithm outputs a quantum state that is close to |f(x)>.

In this paper we study “quantum distinguishing complexity,” a query measure obtained by relaxing the output requirement of quantum query algorithms. Essentially, a quantum distinguishing algorithm for f doesn’t need to compute f(x), but merely needs to behave differently on input x and input y if f(x) != f(y). We claim that this weaker notion of computation helps shed light on quantum query complexity and various lower bound techniques for it. We use quantum distinguishing complexity to prove a new query complexity relationship for total functions: Q_0(f) = O(Q(f)^5 log Q(f)). We also use it to explain why the non-negative adversary bound fails for some problems, to provide lower bound techniques for the query version of the complexity class QSZK, and to prove some reductions between lifting theorems in communication complexity.

1.1 Quantum distinguishing complexity

The quantum distinguishing complexity of a function f : D -> {0,1} (where D ⊆ {0,1}^n), denoted QD(f), is the minimum number of queries needed to the input x to produce an output state rho_x, such that the output states corresponding to 0-inputs and 1-inputs are nearly orthogonal (or far apart in trace distance). Note that the usual bounded-error quantum query complexity of a function f, denoted Q(f), is defined similarly with the additional requirement that there should exist a 2-outcome measurement that (with high probability) accepts states corresponding to 1-inputs and rejects states corresponding to 0-inputs. Since measurements can only distinguish nearly orthogonal states, every quantum algorithm for computing f satisfies the definition of quantum distinguishing complexity. Hence for all functions f, we have QD(f) <= Q(f). We formally define quantum distinguishing complexity and establish some basic properties in Section 3.

This is a natural relaxation of bounded-error quantum query complexity and has been mentioned in passing in several prior works. Indeed, Barnum, Saks, and Szegedy already considered this measure in an early technical report [BSS01, Remark 1]. This measure often comes up in discussions about the (positive-weights) adversary bound, a lower bound for quantum query complexity introduced by Ambainis [Amb02]. (The positive-weights adversary bound should not be confused with the stronger negative-weights adversary bound, also known as the general adversary bound, which essentially equals quantum query complexity [HLŠ07, LMR11].) The (positive-weights) adversary bound, which we denote by Adv(f), has several variants [Amb02, Amb03, BSS03, LM04, Zha05], which are all essentially the same [ŠS06]. It was noted in several works [BSS03, HLŠ07] that the proof that the adversary bound lower bounds quantum query complexity only uses the fact that the outputs corresponding to 0-inputs and 1-inputs are nearly orthogonal, and hence QD(f) = Omega(Adv(f)) for all functions f. However, it is not the case that QD(f) = O(Adv(f)) for all f, and we exhibit functions separating these measures.

Lastly, we show in Section 3 that this measure is the quantum analogue of a lower bound method for randomized query complexity called randomized sabotage complexity [BK16]. Hence this measure could also be called “quantum sabotage complexity.”

1.2 Fifth power query relation

Our first result establishes a new relation between query measures for total functions. A total function is a function of the form f : {0,1}^n -> {0,1}, as opposed to a partial function, which is a function of the form f : D -> {0,1}, where D ⊆ {0,1}^n. We show a new upper bound on the zero-error quantum query complexity of f, denoted Q_0(f), in terms of its quantum distinguishing complexity, and hence its quantum query complexity. The zero-error quantum query complexity of f is the minimum number of queries needed by a quantum algorithm that either outputs the correct answer f(x) on input x, or outputs ? indicating that it does not know, but does this with probability at most 1/2 on any input x. In Section 4 we prove the following.

Theorem 1.

For all total functions f, we have

Q_0(f) = O(Q(f)^5 log Q(f)).

Additionally, the algorithm also outputs a certificate for x when it outputs f(x).

This is an improvement over the previous best relationship between zero-error and bounded-error quantum query complexity, Q_0(f) = O(Q(f)^6) [BBC01], which follows from D(f) = O(Q(f)^6), where D(f) is deterministic query complexity. In fact, our result is the first upper bound on zero-error quantum query complexity that does not follow from an upper bound on zero-error randomized query complexity. Our proof borrows ideas from the classical result R_0(f) = O(R(f)^2 log R(f)) [Mid05, KT16], which is essentially optimal due to a nearly matching separation by Ambainis et al. [ABB16].

1.3 Quantum statistical zero knowledge

Next we show that, surprisingly, quantum distinguishing complexity lower bounds a more powerful model of computation than quantum query complexity: the query complexity of computing a function using a quantum statistical zero-knowledge (QSZK) proof system. A QSZK proof system is an interactive protocol between a quantum verifier and a computationally unbounded but untrusted prover, in which the verifier learns the value of f(x) but learns essentially no more. QSZK can also be characterized in terms of its complete problem, Quantum State Distinguishability [Wat02, Wat09].

In Section 5, we discuss the history of quantum statistical zero-knowledge proofs and define an associated query measure QSZK(f) based on the complete problem Quantum State Distinguishability. We establish some basic properties of our definition, such as QSZK(f) <= Q(f), which corresponds to the complexity class containment BQP ⊆ QSZK. We then show that quantum distinguishing complexity lower bounds QSZK complexity.

Theorem 2.

For all (partial) Boolean functions f, QD(f) <= QSZK(f).

As a corollary of Theorem 2 and QD(f) = Omega(Adv(f)), we have for all (partial) functions f,

Q(f) >= QSZK(f) >= QD(f) = Omega(Adv(f)).
This sheds some light on why the adversary bound sometimes proves poor lower bounds: it lower bounds a more powerful model of computation! For example, it is well known that the adversary bound cannot prove a super-constant lower bound for the collision problem [AS04]. It is also easy to see that the collision problem has a constant-query QSZK (and even classical SZK) protocol.

On the bright side, this gives us a new way to prove lower bounds on QSZK query complexity and prove oracle separations against the complexity class QSZK. For example, since we know the OR function on n bits has QD(OR_n) = Omega(sqrt(n)) via the adversary bound, this yields an oracle separating QSZK from NP, since the OR function has small certificates. A similar strategy was used recently by Menda and Watrous to show oracle separations against QSZK [MW18].

1.4 Comparison with other lower bounds

We compare quantum distinguishing complexity to the two main lower bound techniques for quantum query complexity: the (positive-weights) adversary bound and the polynomial method. Recall that the negative-weights (or general) adversary bound completely characterizes quantum query complexity, so we do not compare quantum distinguishing complexity with it.

As noted earlier, the adversary bound is weaker than quantum distinguishing complexity since for all (partial) functions f, QD(f) = Omega(Adv(f)). This implies that QD(f) coincides with Q(f) for most functions studied in the literature, since most quantum lower bounds are proved using the adversary method. Moreover, not only is quantum distinguishing complexity always at least as large as the adversary bound, it can be exponentially larger for partial functions and quadratically larger for total functions, as we show in Theorem 3.

Another popular lower bound technique is the polynomial method [BBC01], which uses the fact that the approximate degree of a function lower bounds Q(f). The approximate degree of a Boolean function f, denoted adeg(f), is the minimum degree of a real polynomial p over the input variables such that for all inputs x we have |p(x) - f(x)| <= 1/3.

Figure 1: Relationships between measures. An upward line indicates that a measure is asymptotically upper bounded by the other measure. E.g., for all (partial) functions f, QD(f) = O(QSZK(f)).

We do not know an exponential separation between quantum distinguishing complexity and approximate degree (for a partial function), since it is not even known if quantum query complexity can be exponentially larger than approximate degree for a partial function. We do, however, show in Theorem 3 that quantum distinguishing complexity can be polynomially larger than approximate degree for total functions.

Theorem 3.

There exist total functions f and g with


There also exists an n-bit partial function with


This theorem is proved in Section 6. Figure 1 shows the known relationships between all the measures discussed in this paper. The measures RS(f) and QC(f) are introduced later, and refer to randomized sabotage complexity and quantum certificate complexity, respectively.

1.5 Lifting theorems

Most measures in query complexity have an analogous measure in communication complexity, which we denote with the superscript cc, such as D^cc and Q^cc. A lifting theorem is a result that transfers a lower bound on a query function f to a lower bound in communication complexity for a lifted version of the function, obtained by composing f with a hard communication problem G. For example, a lifting theorem is known for deterministic protocols, which means there exists a communication problem G such that for all functions f, D^cc(f o G) = Theta(D(f) log n) [RM99, GPW15].

Lifting theorems have been shown for some measures, such as nondeterministic query complexity [GLM16] and (zero-error or bounded-error) randomized query complexity [GPW17], and remain open for measures like zero-error and bounded-error quantum query complexity. Our next result, proved in Section 7, shows that if we could prove a lifting theorem for zero-error quantum query complexity or for QSZK query complexity, then we would get a lifting theorem for bounded-error quantum query complexity.

Theorem 4 (informal).

If a general lifting theorem holds using some gadget G for either zero-error quantum query complexity, i.e., Q_0^cc(f o G) = Omega(Q_0(f)), or for quantum statistical zero-knowledge protocols, i.e., QSZK^cc(f o G) = Omega(QSZK(f)), then we obtain a general lifting theorem for bounded-error quantum query complexity (up to logarithmic factors) with the same gadget G.

In fact, the same conclusion follows from a weaker assumption. We can assume that the lifting theorem proves a lower bound on bounded-error quantum communication complexity assuming a lower bound on quantum distinguishing complexity. In other words, we can assume a lifting theorem of the form Q^cc(f o G) = Omega(QD(f)), which is weaker than a QSZK lifting theorem since it assumes a stronger lower bound and proves a weaker one.

2 Preliminaries

We assume the reader is generally familiar with quantum computation [NC00] and query complexity (for more details, see [BdW02]). We do not assume the reader is familiar with statistical zero-knowledge protocols.

For any positive integer n, let [n] = {1, 2, ..., n}. We use f = O(g) to mean there exists a constant c > 0 such that f <= c*g, and similarly f = Omega(g) means f >= c*g for some constant c > 0.

2.1 Distance measures

For any matrix A, we define the spectral norm of A, denoted ||A||, as the largest singular value of A. The trace norm (or 1-norm) of A, denoted ||A||_1, is defined as ||A||_1 = Tr(sqrt(A^dagger A)), which is also equal to the sum of the singular values of A.

We define the trace distance between two quantum states rho and sigma as (1/2)||rho - sigma||_1. The factor of 1/2 makes this distance measure lie between 0 and 1 for density matrices. Trace distance is a useful distance measure since it exactly captures distinguishability of states and is non-increasing under quantum operations [NC00, Th. 9.2]. For pure states |psi> and |phi>, trace distance is related to their inner product as follows [Wat18, eq. 1.186]:

(1/2) || |psi><psi| - |phi><phi| ||_1 = sqrt(1 - |<psi|phi>|^2).
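As a numerical illustration (not from the paper), the following sketch computes the trace distance of two density matrices from the eigenvalues of their difference, and checks the pure-state identity above on |0> and |+>:

```python
# Illustration: trace distance and the pure-state inner-product identity.
import numpy as np

def trace_distance(rho, sigma):
    """(1/2)||rho - sigma||_1: half the sum of |eigenvalues| of the (Hermitian) difference."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

psi = np.array([1.0, 0.0])                # |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>
rho = np.outer(psi, psi.conj())
sigma = np.outer(phi, phi.conj())

lhs = trace_distance(rho, sigma)
rhs = np.sqrt(1 - abs(np.vdot(psi, phi)) ** 2)
print(lhs, rhs)   # both ~0.7071, matching the identity
```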
2.2 Quantum query complexity

In query complexity, we wish to compute a Boolean function f on an input x in {0,1}^n given query access to the bits of x. In this paper, we will mostly deal with functions with Boolean input and output. An n-bit function f : {0,1}^n -> {0,1} is called a total function. An n-bit function f : D -> {0,1}, where D ⊆ {0,1}^n, is called a partial function since it is defined on a subset of {0,1}^n. We will also refer to this subset as the domain of f, or Dom(f). The goal in query complexity is to compute f(x) while making the fewest queries to the oracle for the bits of x.

Classical algorithms have access to an oracle that given an index i in [n] outputs x_i, the ith bit of x. A quantum algorithm is allowed access to a unitary map that implements this oracle, and is usually taken to be the unitary O_x which acts as follows on basis states with i in [n] and b in {0,1}: O_x |i>|b> = |i>|b XOR x_i>. A quantum algorithm that uses the gate O_x in its circuit T times is said to have made T queries to the oracle.
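A minimal sketch (with an assumed basis ordering, not from the paper) that builds O_x as an explicit permutation matrix on basis states |i>|b>:

```python
# Sketch: the bit-flip query oracle O_x|i>|b> = |i>|b XOR x_i> as a permutation matrix.
import numpy as np

def oracle_matrix(x):
    """Unitary for O_x; basis state |i>|b> is encoded as index 2*i + b."""
    n = len(x)
    U = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for b in range(2):
            U[2 * i + (b ^ x[i]), 2 * i + b] = 1.0  # |i,b> -> |i, b xor x_i>
    return U

x = [1, 0, 1]
U = oracle_matrix(x)
assert np.allclose(U @ U.T, np.eye(2 * len(x)))  # permutation matrices are unitary
assert np.allclose(U @ U, np.eye(2 * len(x)))    # querying twice undoes the query
```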

Since we do not count the complexity of any other gates used in the algorithm, we can assume a T-query quantum algorithm always starts with the all-zeros state and applies an oracle-independent unitary followed by the oracle and so on. Thus a T-query quantum algorithm is specified by oracle-independent unitaries U_0, U_1, ..., U_T, which act on the query register together with any workspace and output qubits. The state output by the quantum algorithm is |psi_x> = U_T O_x U_{T-1} O_x ... O_x U_0 |0...0>, where O_x is implicitly tensored with the identity if U_t acts on more qubits than O_x. If the quantum algorithm outputs a mixed state, then we assume it traces out some subset of the qubits, and hence outputs rho_x. If the quantum algorithm outputs a bit, then we assume it measures the first qubit in the standard basis and outputs the result of that measurement.
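As a standard, self-contained instance of this U_T O_x ... U_0 |0...0> structure (Deutsch's algorithm, not specific to this paper), the sketch below computes x_0 XOR x_1 on a 2-bit input with a single query:

```python
# A standard 1-query example of the U_1 O_x U_0 |0...0> structure (Deutsch's algorithm).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

def oracle(x):
    """O_x|i>|b> = |i>|b xor x_i> on two qubits (index qubit, target qubit)."""
    U = np.zeros((4, 4))
    for i in range(2):
        for b in range(2):
            U[2 * i + (b ^ x[i]), 2 * i + b] = 1.0
    return U

def deutsch(x):
    state = np.zeros(4); state[0] = 1.0      # |0>|0>
    state = np.kron(H, H @ X) @ state        # U_0: prepare |+>|->
    state = oracle(x) @ state                # one query (phase kickback)
    state = np.kron(H, I) @ state            # U_1
    p1 = state[2] ** 2 + state[3] ** 2       # prob. the index qubit measures 1
    return int(round(p1))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert deutsch(list(x)) == x[0] ^ x[1]   # output equals x_0 XOR x_1 with certainty
```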

We can now define the various complexity measures associated with quantum query complexity. We say the bounded-error quantum query complexity of computing a Boolean function f, denoted Q(f), is the minimum T such that there exists a T-query quantum algorithm that on every x in Dom(f) outputs f(x) with probability greater than or equal to 2/3. As usual, the constant 2/3 is unimportant as long as it is a constant strictly greater than half, due to standard error reduction.

A zero-error quantum algorithm (or a Las Vegas quantum algorithm) never outputs an incorrect answer on an input x in Dom(f), but is allowed to claim ignorance and answer ? with probability at most 1/2. The zero-error quantum query complexity of f, denoted Q_0(f), is the minimum number of queries needed for a zero-error quantum algorithm to compute f. Note that Q(f) <= Q_0(f), since a zero-error algorithm can be turned into a bounded-error algorithm by simply outputting a random bit when the zero-error algorithm outputs ?.

For zero-error quantum algorithms, there is a subtlety to do with whether or not the algorithm also produces a classical certificate for the input x. A certificate for x is a subset of bits of x, such that the value of f(x) is completely determined by reading these bits alone. A classical zero-error algorithm can always be assumed to output such a certificate without loss of generality. However, this is not known to be true for zero-error quantum algorithms, and zero-error quantum algorithms that also output a certificate when they output a non-? answer are called self-certifying algorithms [BCdWZ99]. All the zero-error quantum algorithms in this paper are self-certifying, which makes our results stronger since we only prove upper bounds on zero-error quantum query complexity.

3 Quantum distinguishing complexity

3.1 Definition

We now define quantum distinguishing complexity more formally. As explained in the introduction, instead of requiring that the quantum algorithm output the value f(x) of the function, as in standard quantum query complexity, we only want the quantum algorithm’s outputs to be distinguishable (or nearly orthogonal) for 0-inputs and 1-inputs.

As an example of how these definitions differ, consider the collision problem. In this problem, we are given an input x in [n]^n and we are promised that if we view x as a function from [n] to [n], the function is either 1-to-1 or 2-to-1. The goal is to distinguish these two cases under the assumption that the input satisfies this promise. In this problem, since every 1-to-1 input and 2-to-1 input differ in at least half the positions i, our quantum algorithm can simply create the state (1/sqrt(n)) sum_i |i>|x_i>, and the states corresponding to 0-inputs and 1-inputs will have inner product at most 1/2, and hence constant trace distance. Thus this problem has quantum distinguishing complexity O(1), but its quantum query complexity is Theta(n^{1/3}) [AS04].
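A small numerical sketch of this (with an assumed n = 4 instance, not from the paper): build the state (1/sqrt(n)) sum_i |i>|x_i> for a 1-to-1 input and a 2-to-1 input, and compute their overlap and trace distance via the pure-state identity.

```python
# Sketch: the one-shot distinguishing state for the collision problem, n = 4.
import numpy as np

def dist_state(x):
    """|psi_x> = (1/sqrt(n)) sum_i |i>|x_i>, value register of dimension n."""
    n = len(x)
    psi = np.zeros(n * n)
    for i, v in enumerate(x):
        psi[i * n + v] = 1.0 / np.sqrt(n)
    return psi

one_to_one = [0, 1, 2, 3]   # a 1-to-1 input
two_to_one = [0, 0, 2, 2]   # a 2-to-1 input agreeing in exactly n/2 positions

ip = abs(np.vdot(dist_state(one_to_one), dist_state(two_to_one)))
td = np.sqrt(1 - ip ** 2)   # trace distance for pure states
print(ip, td)               # overlap 0.5, trace distance sqrt(3)/2 ~ 0.866
```

The overlap equals the fraction of positions where the two inputs agree, which is at most 1/2 under the promise.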

Definition 5 (Quantum distinguishing complexity).

Let f : D -> {0,1}, where D ⊆ {0,1}^n, be an n-bit partial function. QD(f) is defined as the smallest integer T such that there exists a T-query quantum algorithm that on input x in D outputs a quantum state rho_x such that

for all x, y in D with f(x) != f(y):  (1/2)||rho_x - rho_y||_1 >= c, for a fixed constant c in (0,1).
Note that the definition is robust to minor changes. First, we allow outputting mixed states, although this does not offer any additional power over only outputting pure states. The reason is that we can always assume that the quantum algorithm is pure until the final step where some subset of qubits is traced out. But if two states are far apart in trace distance after a partial trace, then they were far apart to begin with since trace distance is non-increasing under partial trace.

The constant c in Definition 5 is also arbitrary, and any constant in (0,1) would not change the measure by more than a multiplicative constant. This is because we can increase the trace distance between the states by outputting multiple copies of the states. The specific choice of constant is purely for aesthetic reasons: it ensures that the result in Theorem 2 has no constant factors.
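The amplification step can be checked numerically. The following illustration (using diagonal, i.e., effectively classical, states for simplicity) shows the trace distance of two copies exceeding that of one copy:

```python
# Numerical check: outputting two copies increases trace distance.
import numpy as np

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

rho = np.diag([0.8, 0.2])    # diagonal (classical) states for simplicity
sigma = np.diag([0.5, 0.5])

one_copy = trace_distance(rho, sigma)                                   # 0.30
two_copies = trace_distance(np.kron(rho, rho), np.kron(sigma, sigma))   # 0.39
assert two_copies > one_copy
```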

3.2 Properties

We can now establish some basic properties of quantum distinguishing complexity. First, let us formally show that quantum distinguishing complexity lower bounds quantum query complexity.

Proposition 6.

For all (partial) Boolean functions f, QD(f) <= Q(f).


Let T = Q(f) and consider the T-query algorithm that witnesses this fact. Let p_x be the probability that this T-query algorithm, when run on input x, outputs 1 upon measuring the first qubit. Since the algorithm computes f with bounded error, we know that for all 0-inputs x, p_x <= 1/3, and for all 1-inputs x, p_x >= 2/3.

Now consider the single-qubit state rho_x, which is obtained by taking the final state of this algorithm, tracing out all the qubits except the first one, and then applying a completely dephasing channel to it. This state is rho_x = (1 - p_x)|0><0| + p_x|1><1|. Thus for all x, y with f(x) != f(y), (1/2)||rho_x - rho_y||_1 = |p_x - p_y| >= 1/3. ∎
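A one-line numerical check of the final step (illustrative, not from the paper): for dephased single-qubit states, the trace distance is exactly the gap between the acceptance probabilities.

```python
# The dephased one-qubit states from the proof: trace distance equals |p_x - p_y|.
import numpy as np

def dephased(p):
    """(1-p)|0><0| + p|1><1|."""
    return np.diag([1 - p, p])

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

p_x, p_y = 2 / 3, 1 / 3   # boundary case of a bounded-error algorithm
assert np.isclose(trace_distance(dephased(p_x), dephased(p_y)), abs(p_x - p_y))  # = 1/3
```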

As noted in the introduction, quantum distinguishing complexity is also lower bounded by the adversary bound, i.e.,

QD(f) = Omega(Adv(f)).
We do not prove this since this follows from the arguments that establish that the adversary bound is a lower bound on quantum query complexity [Amb02, Amb03, BSS03, LM04, Zha05, ŠS06], since all these proofs only use the fact that the states output on -inputs and -inputs are nearly orthogonal.

Quantum distinguishing complexity is also superior to quantum certificate complexity QC(f), as we show in Proposition 8. Quantum certificate complexity is a lower bound on quantum query complexity defined by Aaronson [Aar08]. It was later shown that quantum certificate complexity also lower bounds approximate polynomial degree [KT16].

Before proving Proposition 8, we first define certificate complexity, randomized certificate complexity, and quantum certificate complexity.

Definition 7 (Certificate complexity).

For any (partial) function f and input x in Dom(f), consider the partial function f_x defined on the domain {x} ∪ {y in Dom(f) : f(y) != f(x)} that satisfies f_x(x) = 1 and f_x(y) = 0 for all y in Dom(f) with f(y) != f(x).

We define the certificate complexity of f, denoted C(f), the randomized certificate complexity of f, denoted RC(f), and the quantum certificate complexity of f, denoted QC(f), as follows:

C(f) = max_x D(f_x),   RC(f) = max_x R(f_x),   QC(f) = max_x Q(f_x).
The problem f_x is clearly no harder than computing f itself in any model of computation, and hence these are lower bounds on their respective measures, i.e., C(f) <= D(f), RC(f) <= R(f), and QC(f) <= Q(f). We can now prove that QD(f) is a better lower bound on Q(f) than QC(f).

Proposition 8.

For all (partial) Boolean functions f, QC(f) = O(QD(f)).


Let T = QD(f) and consider the T-query quantum algorithm that witnesses this fact. We can use this algorithm to solve f_x for any x in Dom(f). Consider the output of the algorithm on input y before the partial trace operation and call this |psi_y>. The trace distance between |psi_x> and |psi_y> for y with f(y) != f(x) is at least c, since trace distance is non-increasing under partial trace [NC00, Th. 9.2].

Now we construct an algorithm for f_x from this algorithm to show that QC(f) = O(QD(f)). To do so, we run the supposed algorithm and measure whether the output state is |psi_x> or not, and accept only when the measurement accepts. This yields an algorithm that accepts x with probability 1 and accepts inputs y with f(y) != f(x) with some constant probability strictly less than 1. More precisely, the acceptance probability on such a y is |<psi_x|psi_y>|^2 <= 1 - c^2, due to the relationship between inner product and trace distance for pure states. Repeating this algorithm a constant number of times yields a bounded-error quantum algorithm for f_x. ∎

3.3 Relation with randomized sabotage complexity

We start by reviewing the definition of randomized sabotage complexity, as presented in [BK16]. Fix a (partial) Boolean function f with Dom(f) ⊆ {0,1}^n. For any pair x, y in Dom(f) such that f(x) != f(y), consider the partial assignment consisting of all bits where x and y agree (with the symbol * used for the bits where x and y disagree). We call such a partial assignment a “sabotaged input”, imagining that a saboteur replaced bits of x with * symbols until it was no longer possible to determine f(x).

Let S_* be the set of all sabotaged inputs to f, that is, the set of all partial assignments that are consistent with both a 0-input and a 1-input to f. Let S_† be the same as S_*, except that the symbol † is used instead of the * symbol. Finally, let f_sab be the function that takes a sabotaged input from S_* ∪ S_† and identifies whether it has * symbols or † symbols, promised that it contains only one type of symbol. Intuitively, f_sab is a decision problem that forces an algorithm computing it to find a * or †. We then define RS(f) = R_0(f_sab), the expected running time of a zero-error randomized algorithm computing f_sab.
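As a tiny illustration (my own example, not from the paper), here is the set of sabotaged inputs of OR on 2 bits:

```python
# Illustration: enumerating the sabotaged inputs of OR on 2 bits.
from itertools import product

def OR(x):
    return int(any(x))

def sabotage(x, y):
    """Partial assignment keeping agreed bits, '*' where x and y disagree."""
    return ''.join(str(a) if a == b else '*' for a, b in zip(x, y))

inputs = list(product([0, 1], repeat=2))
S = {sabotage(x, y) for x in inputs for y in inputs if OR(x) != OR(y)}
print(sorted(S))   # ['**', '*0', '0*']
```

Each element is consistent with both the 0-input (0,0) and at least one 1-input, so OR cannot be determined from the revealed bits.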

To show that RS(f) is larger (up to constants) than QD(f) for all f, we will define a classical measure analogous to QD(f). We will then show this measure is equivalent to RS(f).

Definition 9 (Randomized distinguishing complexity).

Let f : D -> {0,1}, where D ⊆ {0,1}^n, be an n-bit partial function. RD(f) is defined as the smallest integer T such that there exists a T-query randomized algorithm that on input x in D outputs a sample from a probability distribution p_x such that

for all x, y in D with f(x) != f(y):  d_TV(p_x, p_y) >= 2/3,

where d_TV stands for the total variation distance between probability distributions.

Since quantum algorithms can simulate classical algorithms, we immediately get that QD(f) = O(RD(f)). Next, we will show that RD(f) = Theta(RS(f)), completing the argument that QD(f) = O(RS(f)).

Theorem 10.

Let f be a partial Boolean function. Then RD(f) = Theta(RS(f)).


First, we show that RS(f) = O(RD(f)). Let R be an optimal randomized algorithm witnessing RD(f), that on input x outputs a sample from the distribution p_x. Let z be a sabotaged input, and consider running R on z. Since z is sabotaged, there are inputs x and y with f(x) != f(y) that are both consistent with the non-*, non-† bits of z. The variation distance between p_x and p_y is at least 2/3.

A randomized algorithm can be viewed as a probability distribution over deterministic algorithms. Split the support of the distribution for R into two parts: a set A_1 consisting of deterministic algorithms that, when run on z, query a * or †, and a set A_2 consisting of deterministic algorithms that don’t query a * or † when run on z. Note that algorithms in A_2 behave the same on x and y. If R samples an algorithm from A_1 with probability epsilon, the total variation distance between the run of R on x and the run of R on y must therefore be at most epsilon. Since this is at least 2/3, we have epsilon >= 2/3. Hence when R is run on z, it queries a * or † with probability at least 2/3.

If we repeat R whenever it does not query a * or †, we get an algorithm that always finds such an entry and uses at most O(RD(f)) queries in expectation. This is a zero-error randomized algorithm for f_sab, so RS(f) = O(RD(f)).

We now handle the other direction, showing RD(f) = O(RS(f)). Let A be an optimal zero-error randomized algorithm for f_sab. It makes RS(f) queries in expectation, and always finds a * or † in any sabotaged input. Consider the algorithm B that, on input x, runs A for at most 4 RS(f) queries and outputs the partial assignment it queried (that is, it outputs all the pairs (i, x_i) that were queried by the algorithm A).

Let x and y be inputs to f with f(x) != f(y). Let z be the sabotaged input defined by x and y, that is, z_i = x_i if x_i = y_i and z_i = * otherwise. By Markov’s inequality, after 4 RS(f) queries, A finds a * with probability at least 3/4 when it is run on z. This means that when A is run on x, it queries an index i for which x_i != y_i with probability at least 3/4. When this happens, the output of B is not in the support of p_y. This means p_x puts weight at least 3/4 on outcomes not in the support of p_y. Conversely, p_y puts weight at least 3/4 on outcomes not in the support of p_x. The total variation distance between the two distributions is therefore at least 3/4, meaning B is a valid randomized distinguishing algorithm. We conclude that RD(f) = O(RS(f)). ∎

Combined with QD(f) = O(RD(f)), this theorem gives us the following corollary.

Corollary 11.

For all (partial) Boolean functions f, QD(f) = O(RS(f)).

4 Fifth power query relation

In this section we prove a new relationship between zero-error quantum query complexity on the one hand, and quantum distinguishing complexity and bounded-error quantum query complexity on the other, restated below.

See Theorem 1.

Our proof uses ideas from an analogous classical result [Mid05, KT16] and the main quantum ingredient used is the hybrid argument of Bennett, Bernstein, Brassard, and Vazirani [BBBV97]. We now describe and prove a version of the hybrid argument that we use.

4.1 Hybrid argument

We start by defining the concept of a sensitive block. For a string x in {0,1}^n and a subset of input bits B ⊆ [n], which we call a block, we use x^B to denote the input x with all bits in B flipped. In other words, x^B agrees with x on all positions outside B and disagrees with x on B. For a function f and an input x in Dom(f), we say a block B is a sensitive block for x if f(x^B) != f(x).
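These notions are easy to enumerate on small examples. The following sketch (my own tiny example, not from the paper) lists the minimal sensitive blocks of OR on 3 bits:

```python
# Illustration: enumerating minimal sensitive blocks of OR on 3 bits.
from itertools import combinations

def OR(x):
    return int(any(x))

def flip(x, B):
    """x^B: flip the bits of x in positions B."""
    return tuple(b ^ (i in B) for i, b in enumerate(x))

def minimal_sensitive_blocks(f, x):
    n = len(x)
    sensitive = [set(B) for k in range(1, n + 1)
                 for B in combinations(range(n), k) if f(flip(x, B)) != f(x)]
    # keep only blocks with no sensitive proper subset
    return [B for B in sensitive if not any(C < B for C in sensitive)]

print(minimal_sensitive_blocks(OR, (0, 0, 0)))  # [{0}, {1}, {2}]: flipping any bit changes OR
print(minimal_sensitive_blocks(OR, (1, 1, 1)))  # [{0, 1, 2}]: all bits must be flipped
```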

Now any algorithm that computes f must also be able to distinguish x from x^B, where B is a sensitive block. Any algorithm that can distinguish x from x^B must “look at” the bits in B in some informal sense. For classical algorithms, this simply means the algorithm has to query a bit from B with high probability. The analogous statement for quantum algorithms is not so clear, since quantum algorithms can query all input bits in superposition. Nevertheless, the hybrid argument still allows us to formalize this intuition in the quantum setting. The hybrid argument asserts that the total weight of queries within the sensitive block (i.e., the total sum of probabilities of querying within the sensitive block over the course of the algorithm) cannot be too small [BBBV97]:

Lemma 12 (Hybrid Argument).

Let x in {0,1}^n be an input, and let B ⊆ [n] be a block. Let A be a T-query quantum algorithm that accepts x and rejects x^B with high probability, or more generally produces output states that are a constant distance apart in trace distance for x and x^B.

Let q_t(i) be the probability that, when A is run on x for t queries and then subsequently measured, it is found to be querying position i of x (i.e., the query register collapses to |i>). Then

sum_{t=1}^{T} sum_{i in B} q_t(i) = Omega(1/T).
Note that for a randomized algorithm, we would have Omega(1) on the right-hand side instead of Omega(1/T), since a randomized algorithm must look within B (with high probability) at some point during its execution. This lemma was implicitly proven in [BBBV97]. We reproduce the proof in Appendix A for the reader’s convenience.
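For intuition, the Omega(1/T) form can be read as a Cauchy–Schwarz consequence of the usual BBBV statement; the following is a sketch under the (standard) assumption that distinguishing x from x^B forces total query amplitude Omega(1) inside B:

```latex
% Sketch of the Cauchy--Schwarz step behind Lemma 12:
% distinguishing forces \sum_{t=1}^{T}\sqrt{\sum_{i\in B} q_t(i)} = \Omega(1), hence
\[
  \Omega(1) \;\le\; \sum_{t=1}^{T} \sqrt{\sum_{i\in B} q_t(i)}
  \;\le\; \sqrt{T}\,\sqrt{\sum_{t=1}^{T}\sum_{i\in B} q_t(i)}
  \quad\Longrightarrow\quad
  \sum_{t=1}^{T}\sum_{i\in B} q_t(i) \;=\; \Omega(1/T).
\]
```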

4.2 New upper bound

To prove our result we also need to upper bound the number of minimal sensitive blocks of a function. It is not too hard to show that any minimal sensitive block has size at most the sensitivity of f, denoted s(f), which is the maximum number of sensitive blocks of size 1 over all inputs x. Since there are at most n^{s(f)} different subsets of positions of size at most s(f), we know that the number of minimal sensitive blocks is at most this quantity. Kulkarni and Tal [KT16] improve this simple upper bound by replacing s(f) with randomized certificate complexity (Definition 7).

Lemma 13.

For any total function f and any input x, the number of minimal sensitive blocks of f with respect to x is at most RC(f)^{O(RC(f))}.

We are now ready to prove Theorem 1.

Proof of Theorem 1.

Let A be the optimal quantum distinguishing algorithm for f, which uses T = QD(f) queries. Consider running the following quantum algorithm B on oracle input x:

  1. Pick t in [T] uniformly at random.

  2. Run A on x for t queries and measure the query register.

  3. Write down (on a classical tape) the position i where A is found to be querying, as well as the query output x_i.

The algorithm B uses at most T quantum queries. Note that the probability that B wrote down the index i is (1/T) sum_{t=1}^{T} q_t(i). For any block B' ⊆ [n], the probability that B wrote down some index in B' is

(1/T) sum_{t=1}^{T} sum_{i in B'} q_t(i).

If B' is a sensitive block for the input x, then the hybrid argument (Lemma 12) implies the probability that our new algorithm outputs an index in B' is Omega(1/T^2).

Next, we repeat the algorithm B several times. We claim that after O(T^2 RC(f) log RC(f)) repetitions, the outputs of B constitute a certificate for x with constant probability.

To see this, note that for any minimal sensitive block B' of the input x, the probability that some run of B (out of the many runs) queries in the block is 1 - RC(f)^{-Omega(RC(f))}. This is because O(T^2) repetitions boost the probability of querying in a minimal sensitive block from Omega(1/T^2) to Omega(1), and then O(RC(f) log RC(f)) repetitions of this boosted algorithm further boost the probability to the claimed bound. Hence, by Lemma 13 and the union bound, there is a constant probability that these runs of B query a bit in every minimal sensitive block of the input x. But a set of bits that intersects every sensitive block of x is a certificate for x. Thus these runs of B output a certificate for the input x with constant probability.

Any algorithm that finds a certificate with constant probability can be turned into a zero-error algorithm by repeating whenever a certificate is not found. We therefore get a zero-error algorithm that works simply by repeating B a sufficient number of times. Note that B uses at most T quantum queries and must be repeated O(T^2 RC(f) log RC(f)) times. Recalling that T = QD(f), we get

Q_0(f) = O(QD(f)^3 RC(f) log RC(f)).
We can simplify this to Q_0(f) = O(Q(f)^5 log Q(f)), since RC(f) = O(QC(f)^2) [Aar08], QC(f) = O(QD(f)) (Proposition 8), and QD(f) <= Q(f). ∎

5 Quantum statistical zero knowledge

5.1 History

The subject of statistical zero-knowledge proof systems has a rich history in the classical setting, and the interested reader is referred to the paper of Sahai and Vadhan [SV03]. Informally, the complexity class SZK contains problems that can be solved by a probabilistic polynomial-time verifier interacting with a computationally unbounded prover (like the class IP) with the additional restriction that the verifier not learn anything from the prover (statistically) other than the answer to the problem. From this it is clear that BPP ⊆ SZK, since the verifier can simply not interact with the prover, and SZK ⊆ IP, since IP is simply SZK without the zero-knowledge constraint.

More surprisingly, it is also known that SZK is closed under complement, and that we can assume without loss of generality that the interaction is only one round and uses public randomness, which means SZK ⊆ AM. Another interesting subtlety is that SZK can be defined assuming an honest verifier, one who does not deviate from the protocol to learn more, or a cheating verifier, who may deviate from the protocol. It turns out that these definitions lead to the same complexity class [GSV98]. The class SZK also has a much simpler characterization in terms of a complete problem called Statistical Difference, as shown by Sahai and Vadhan [SV03], which yields easier proofs of some of these facts. Informally, in the Statistical Difference problem we are given two circuits that sample from probability distributions, and the task is to determine whether the distributions are far or close in total variation distance.

On the quantum side, (honest-verifier) QSZK was first defined by Watrous [Wat02], and like the classical case, it satisfies BQP ⊆ QSZK ⊆ QIP. The same paper strengthened these obvious containments by showing that QSZK is closed under complement (i.e., QSZK = co-QSZK) and that the protocol can be assumed to be one round, which gives QSZK ⊆ QIP(2). Watrous also showed that QSZK has a complete problem, called quantum state distinguishability, which is a quantum generalization of the statistical difference problem of Sahai and Vadhan. In this problem, we are given two quantum circuits outputting mixed states and have to decide if the states are far apart or close in trace distance. Later, Watrous [Wat09] also showed that honest-verifier QSZK and cheating-verifier QSZK are the same, as in the classical case.

5.2 Definition

We now define a query analogue of quantum statistical zero-knowledge. Instead of defining it in terms of an interactive zero-knowledge protocol for f, we use the complete problem characterization by Watrous. This yields a considerably simpler definition of QSZK in the query setting. (The complete problem is often used to define SZK and its variants, like NISZK, in query complexity and communication complexity; see, for example, [BCH17, Sub17]. It is not obvious whether the definition via an interactive proof and the definition via the complete problem coincide exactly, as the problem is complete under polynomial-time reductions, which may add polynomial overhead.)

Definition 14 (QSZK).

Let f : D → {0,1}, where D ⊆ {0,1}^n, be an n-bit partial function. QSZK(f) is defined as the smallest integer T such that there exist two quantum query algorithms making T queries in total that on input x output states ρ_0^x and ρ_1^x of the same size such that

  • if f(x) = 0, then the trace distance satisfies D(ρ_0^x, ρ_1^x) ≤ 1/3,

  • if f(x) = 1, then the trace distance satisfies D(ρ_0^x, ρ_1^x) ≥ 2/3.

This definition is also robust to some changes. In particular, the constants 1/3 and 2/3 can be replaced by any constants α and β as long as α < β^2. Hence an alternate definition with ε instead of 1/3 and 1 − ε instead of 2/3 leads to the same complexity measure up to multiplicative constants. This follows from the analogous property of the complexity class QSZK, which was shown by Watrous [Wat02] (see Theorem 1 in the conference version or Theorem 5 in the full version for more details).
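To make the two conditions concrete, here is a small numerical illustration (the example states are hypothetical, and it assumes the convention above that 1-inputs yield far pairs), using the fact that for density matrices diagonal in the same basis, trace distance equals total variation distance:

```python
# Illustration of Definition 14's thresholds. For commuting (diagonal)
# density matrices rho = diag(p) and sigma = diag(q), the trace distance is
# D(rho, sigma) = (1/2) * sum_i |p_i - q_i|.

def trace_distance_diagonal(p, q):
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def satisfies_definition(rho0_diag, rho1_diag, fx):
    """The promise on input x: output states close on 0-inputs,
    far on 1-inputs (thresholds 1/3 and 2/3)."""
    d = trace_distance_diagonal(rho0_diag, rho1_diag)
    return d <= 1 / 3 if fx == 0 else d >= 2 / 3

# Hypothetical output states for a 0-input and a 1-input.
print(satisfies_definition([0.55, 0.45], [0.45, 0.55], 0))  # distance 0.1
print(satisfies_definition([0.9, 0.1], [0.1, 0.9], 1))      # distance 0.8
```

Note that the algorithms producing the two states are otherwise unconstrained; only the distinguishability of their outputs matters.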

5.3 Properties

As a sanity check, let us prove the query analog of the obvious containment BQP ⊆ QSZK.

Proposition 15.

For all (partial) Boolean functions f, QSZK(f) ≤ Q(f).


Let T = Q(f) and consider the T-query algorithm that witnesses this fact. Let p_x be the probability that this T-query algorithm, when run on input x, outputs 1 upon measuring the first qubit. Since the algorithm computes f with bounded error, we know that p_x ≥ 2/3 for 1-inputs and p_x ≤ 1/3 for 0-inputs.

Now consider the single-qubit state ρ_0^x, which is obtained by taking the final state of this algorithm, tracing out all the qubits except the first one, and then applying a completely dephasing channel to it. This is equivalent to measuring the first qubit in the standard basis and outputting the post-measurement state. This state is diag(1 − p_x, p_x). Let us also define ρ_1^x as the fixed state |0⟩⟨0| for all x; preparing it requires no queries.

Now let us check that the conditions of Definition 14 are satisfied by these states. For all inputs x, we have D(ρ_0^x, ρ_1^x) = p_x. And we know that p_x ≥ 2/3 for 1-inputs and p_x ≤ 1/3 for 0-inputs, which completes the proof. ∎
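The construction can be checked numerically; this is a sketch, with the choice of the fixed reference state an assumption of the illustration:

```python
# Sketch of the Proposition 15 construction: dephasing the output qubit of
# a bounded-error algorithm gives rho0 = diag(1 - p_x, p_x), where p_x is
# the probability of outputting 1 on input x. Comparing against the fixed
# zero-query state rho1 = diag(1, 0) = |0><0| (an assumed convention for
# this sketch) gives trace distance exactly p_x.

def trace_distance_diagonal(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def distinguishing_distance(p_x):
    rho0 = (1 - p_x, p_x)  # dephased output qubit of the algorithm
    rho1 = (1.0, 0.0)      # fixed state prepared with no queries at all
    return trace_distance_diagonal(rho0, rho1)  # equals p_x

# Bounded error: p_x >= 2/3 on 1-inputs, p_x <= 1/3 on 0-inputs, so the
# pair is far on 1-inputs and close on 0-inputs, as Definition 14 requires.
print(distinguishing_distance(0.9))  # a 1-input: distance >= 2/3
print(distinguishing_distance(0.1))  # a 0-input: distance <= 1/3
```

Since the second algorithm makes no queries at all, the total query count is just that of the original bounded-error algorithm.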

The measure QSZK also satisfies another useful property: it is closed under complement, i.e., QSZK(f) and QSZK of the negated function agree up to constant factors. This is the analogue of the result that QSZK = co-QSZK [Wat02]. Since we do not use this property, we only provide a sketch of the proof.

Sketch of proof of closure under complement.

To prove this, we would like to reduce the complete problem to its complement. In other words, we are given two circuits that query an oracle, preparing states that are either far apart in trace distance or close in trace distance. From these circuits, we want to define two new states that are far when the original pair was close, and close when the original pair was far. Before starting the transformation, we first boost the parameters to be extremely close to 0 and 1 respectively. For this sketch we will assume the parameters are exactly 0 and 1, which means that when the states are far, they are perfectly distinguishable, and when they are close, they are equal.

To perform this transformation, consider the pure states output by the circuits before tracing out any qubits. Let |ψ_0⟩ and |ψ_1⟩ be the pure states, each on the output register together with a purifying register, that yield the two mixed states respectively when the purifying register is traced out. More formally, we have


From the pure states |ψ_0⟩ and |ψ_1⟩, we define two new pure states on four registers as follows:


Note that the only difference between these states lies in a single register. If we have circuits preparing the states |ψ_0⟩ and |ψ_1⟩, it is easy to see that we can construct circuits preparing these two new pure states. We now define the two output states from them by tracing out two of the registers:


We claim that these states satisfy the conditions we require. When the original pair is far, the residual states on the traced-out registers corresponding to |ψ_0⟩ and |ψ_1⟩ are completely distinguishable. In this case, before we trace out those registers, we could implement a unitary on them which reads off whether the state there came from |ψ_0⟩ or |ψ_1⟩ and records the answer on one of the traced-out registers. This operation maps the first output state to the second, and since it acts only on the traced-out qubits, it does not affect the qubits that are not traced out; hence the two output states are equal.

When the original pair is close, i.e., the states are equal, we want to show that the two output states are distinguishable. We will show that after applying a specific unitary to these states and tracing out one more register, in the first case we are left with a fixed state, but in the second case we are left with the maximally mixed state, and the two can be distinguished.

Since the original mixed states are equal, there is a unitary acting on the purifying register that maps |ψ_0⟩ to |ψ_1⟩. Controlled on the qubit distinguishing the two branches, let us apply this unitary to the purifying register of the two output states before the other registers are traced out, which is equivalent to applying it after tracing them out. This makes those registers unentangled with the rest of the state. Tracing out one more register then leaves us with a fixed state in the first case and the maximally mixed state in the second case, as claimed.
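One step of the sketch uses the fact that a unitary applied only to registers that are later traced out cannot change the reduced state on the remaining registers. A small self-contained check (the Bell state and Hadamard below are illustrative choices, not the states from the proof):

```python
# Check of the fact used above: a unitary acting only on registers that are
# later traced out does not change the reduced state on the kept registers,
# i.e. Tr_B[(I (x) U) rho (I (x) U)^dagger] = Tr_B[rho].
# Two qubits in pure Python: qubit A is kept, qubit B (least significant
# bit of the basis-state index) is traced out.

def dagger(M):
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def density(psi):
    return [[a * b.conjugate() for b in psi] for a in psi]

def kron_I_U(U):
    """The 4x4 matrix I_A (x) U_B for a 2x2 unitary U acting on qubit B."""
    M = [[0j] * 4 for _ in range(4)]
    for i in range(2):
        for k in range(2):
            for l in range(2):
                M[2 * i + k][2 * i + l] = U[k][l]
    return M

def partial_trace_B(rho):
    return [[sum(rho[2 * i + k][2 * j + k] for k in range(2))
             for j in range(2)] for i in range(2)]

s = 2 ** -0.5
psi = [s, 0.0, 0.0, s]          # the Bell state (|00> + |11>)/sqrt(2)
H = [[s, s], [s, -s]]           # Hadamard on the traced-out qubit B

rho = density(psi)
IU = kron_I_U(H)
rho_after = matmul(matmul(IU, rho), dagger(IU))

print(partial_trace_B(rho))        # maximally mixed state on A
print(partial_trace_B(rho_after))  # the same reduced state
```

The same invariance is what lets the sketch move unitaries across the partial trace without affecting the output states.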

5.4 Relation with adversary bound

We have already shown that QD(f) ≤ Q(f) (Proposition 6) and QSZK(f) ≤ Q(f) (Proposition 15). We now show that QD(f) is in fact a lower bound on QSZK(f) as well.

See Theorem 2.



Let T = QSZK(f) and consider the two quantum query algorithms that witness this fact. We claim that the tensor product of the outputs of these algorithms already satisfies the conditions in Definition 5 and hence proves QD(f) ≤ QSZK(f).

To see this, observe that the combined algorithm outputs the state ρ_0^x ⊗ ρ_1^x on input x, where the states ρ_0^x and ρ_1^x satisfy the conditions of Definition 14. More precisely, this means that for any x and y such that f(x) = 0 and f(y) = 1, we know that D(ρ_0^x, ρ_1^x) ≤ 1/3 and D(ρ_0^y, ρ_1^y) ≥ 2/3. We want to show that

D(ρ_0^x ⊗ ρ_1^x, ρ_0^y ⊗ ρ_1^y) ≥ 1/6.
Since trace distance is non-increasing under partial trace, we have D(ρ_0^x ⊗ ρ_1^x, ρ_0^y ⊗ ρ_1^y) ≥ D(ρ_0^x, ρ_0^y) and D(ρ_0^x ⊗ ρ_1^x, ρ_0^y ⊗ ρ_1^y) ≥ D(ρ_1^x, ρ_1^y), which imply

D(ρ_0^x ⊗ ρ_1^x, ρ_0^y ⊗ ρ_1^y) ≥ max{ D(ρ_0^x, ρ_0^y), D(ρ_1^x, ρ_1^y) }.

Now if we can show the right-hand side is at least 1/6, then we are done. To show this, toward a contradiction assume that both D(ρ_0^x, ρ_0^y) < 1/6 and D(ρ_1^x, ρ_1^y) < 1/6. Then by the triangle inequality we have

D(ρ_0^y, ρ_1^y) ≤ D(ρ_0^y, ρ_0^x) + D(ρ_0^x, ρ_1^x) + D(ρ_1^x, ρ_1^y) < 1/6 + 1/3 + 1/6 = 2/3,

which contradicts D(ρ_0^y, ρ_1^y) ≥ 2/3. ∎
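Both ingredients of this proof, monotonicity under discarding a subsystem and the triangle inequality, can be sanity-checked on commuting states, where trace distance reduces to total variation distance (the states below are hypothetical):

```python
# If the pair on a 0-input x is close (distance <= 1/3) and the pair on a
# 1-input y is far (distance >= 2/3), the triangle inequality forces
#   max(D(rho0x, rho0y), D(rho1x, rho1y)) >= (2/3 - 1/3) / 2 = 1/6,
# and monotonicity passes this bound up to the tensor products.

def tvd(p, q):  # trace distance of diagonal states = total variation
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical diagonal output states on a 0-input x and a 1-input y.
rho0x, rho1x = [0.6, 0.4], [0.5, 0.5]    # close pair: distance 0.1
rho0y, rho1y = [0.95, 0.05], [0.1, 0.9]  # far pair: distance 0.85

assert tvd(rho0x, rho1x) <= 1 / 3 and tvd(rho0y, rho1y) >= 2 / 3

# The two cross terms cannot both be below 1/6, since
# D(rho0y, rho1y) <= D(rho0y, rho0x) + D(rho0x, rho1x) + D(rho1x, rho1y).
lower = max(tvd(rho0x, rho0y), tvd(rho1x, rho1y))
print(lower >= 1 / 6)  # holds for any states satisfying the promise
```

The same computation works for arbitrary (non-commuting) density matrices, only the trace-distance subroutine changes.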

As noted, as a corollary of this theorem and the fact that QD(f) upper bounds the (positive-weights) adversary bound, we have for all (partial) functions f,

Q(f) ≥ QSZK(f) ≥ QD(f) = Ω(Adv(f)).
This can be used to prove lower bounds on QSZK protocols for functions. For example, consider the OR function and let us try to compute it with an interactive protocol without the zero-knowledge requirement. It is easy to see that when the answer is 1, a computationally unbounded prover can simply send over the location of a bit i such that x_i = 1, which can be checked using only 1 query. Of course, this protocol leaks information and in particular lets the verifier know the location of a 1. But is it necessary that an efficient protocol for OR must leak information? Our lower bound says this must be the case, because QSZK(OR) = Ω(√n) and hence any zero-knowledge protocol for the function must make Ω(√n) queries.
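The leaky protocol described above is easy to sketch (a toy model; the message format is an assumption of this illustration):

```python
# Sketch of the non-zero-knowledge protocol for OR described above: the
# prover sends an index i, and the verifier accepts after a single query
# iff x[i] = 1. The protocol is sound and complete, but it leaks the
# location of a 1 to the verifier.

def verifier(x, prover_message):
    """Returns (accept, number_of_queries_made)."""
    if prover_message is None:        # prover claims OR(x) = 0
        return False, 0
    return x[prover_message] == 1, 1  # one query to check the claim

print(verifier([0, 0, 1, 0], 2))     # honest prover on a 1-input
print(verifier([0, 0, 0, 0], None))  # no witness exists on the all-0 input
```

A cheating prover cannot make the verifier accept a 0-input, since every queried bit is 0, which is why soundness comes for free here; only the zero-knowledge property fails.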

6 Comparison with other lower bounds

In this section, we establish the separations claimed in Theorem 3 between quantum distinguishing complexity on the one hand and the adversary bound and the polynomial method on the other.

To prove this, we will compose known functions with the index function and establish the behavior of quantum distinguishing complexity under composition with the index function. This kind of composition was also studied by Chen [Che16], who used it to show an oracle separation between and .

6.1 Index functions

Let IND_k denote the index function: the function that, on input (a, x) with a ∈ {0,1}^k and x ∈ {0,1}^{2^k}, outputs the bit of x indexed by the string a (read as an integer). We wish to study the composition of the index function with an arbitrary Boolean function f, but composed only on the first k bits of the index function. We'll denote this composition by IND_k ∘ f. More precisely, if f is an m-bit function, IND_k ∘ f is a function on km + 2^k bits that evaluates f on the first k m-bit strings to obtain a binary string a of length k, and then uses a to index into the next 2^k bits of the input and outputs the bit indexed by a.

In addition to the index function, which is total, we will also study a function we call the "unambiguous index function." This is a partial function defined similarly to the index function, except that the location of the array cell pointed to by the first part of the input is "marked," and we are promised that no other cells of the array are "marked." More explicitly, the function is defined on k + 2^{k+1} bits, with the first k bits indexing a pair of adjacent bits in the remainder of the input. So if the first part of the input represents the integer m, it points to the cells 2m and 2m+1 in the second part of the input. The output of the unambiguous index function is the first bit of the pair pointed to, i.e., the bit stored at array location 2m. Moreover, we are promised that the second bit of this pair (the bit at array location 2m+1) will always be 1, and also that the second bit in every other pair (i.e., other than the pair 2m, 2m+1) will always be 0.
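Both functions are straightforward to implement; in this sketch, the convention that the mark is a 1 in the pointed-to pair and a 0 in every other pair is an assumption, as are the function names:

```python
# Sketch implementations of the index function and of the promise of the
# unambiguous index function, on bit lists.

def index_function(x, k):
    """IND_k: x has k + 2**k bits; the first k bits (as a binary integer m)
    select a cell of the 2**k-bit array, and the output is that cell."""
    assert len(x) == k + 2 ** k
    m = int(''.join(map(str, x[:k])), 2)
    return x[k + m]

def unambiguous_index(x, k):
    """Partial function on k + 2**(k+1) bits: the first k bits point to the
    pair (2m, 2m+1) of the array; the promise marks exactly that pair."""
    assert len(x) == k + 2 ** (k + 1)
    m = int(''.join(map(str, x[:k])), 2)
    array = x[k:]
    # Promise: the second bit of the pointed-to pair is the unique mark.
    assert all((array[2 * j + 1] == 1) == (j == m) for j in range(2 ** k))
    return array[2 * m]

# k = 2: pointer '10' -> m = 2; array of 4 cells (IND) / 4 pairs (UIND).
print(index_function([1, 0, 0, 1, 1, 0], 2))                    # array[2]
print(unambiguous_index([1, 0, 0, 0, 0, 0, 1, 1, 0, 0], 2))     # pair 2
```

The promise check in the second function makes the two solving strategies visible: decode the pointer, or scan the odd positions for the unique 1.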

Intuitively, there is only one strategy to solve the index function: read the first k bits and find the cell pointed to. But to solve the unambiguous index function, there are two good strategies: either read the first k bits (and determine m), or search the remainder of the input for the unique position where the second bit of a pair is 1, which marks the cell pointed to by m.

6.2 Index function composition

We now examine the behavior of quantum distinguishing complexity under composition with the Index and Unambiguous Index functions. To prove our result, we need the following strong direct product theorem for quantum query complexity due to Lee and Roland [LR13]:

Theorem 16 (Strong direct product).

Let f be a partial Boolean function, and let f^n be the task of solving n independent instances of f simultaneously. Then there is a universal constant c < 1 such that any quantum algorithm that solves f^n with success probability at least c^n uses Ω(n · Q(f)) queries.

We can now prove our composition theorems.

Theorem 17.

There is a constant such that for any partial function , if , then