Forging quantum data: classically defeating an IQP-based quantum test

12/11/2019
by Gregory D. Kahanamoku-Meyer

In 2009, Shepherd and Bremner proposed a "test of quantum capability" (arXiv:0809.0847) that is attractive because the quantum machine's output can be verified efficiently by classical means. While follow-up papers gave evidence that directly simulating the quantum prover is classically hard, the security of the protocol against other (non-simulating) classical attacks has remained an open question. In this paper, I demonstrate that the protocol is not secure against classical provers. I describe a classical algorithm that can not only convince the verifier that the (classical) prover is quantum, but can in fact extract the secret key underlying a given protocol instance. Furthermore, I show that the algorithm is efficient in practice for problem sizes of hundreds of qubits. Finally, I provide an implementation of the algorithm, and give the secret vector underlying the "$25 challenge" posted online by the authors of the original paper.




1 Introduction

Tests of quantum speedup (or "quantum supremacy" tests) have generated excitement recently as several experiments reach the cusp of demonstrating quantum computational power in the laboratory. In the past decade, numerous protocols have been designed for the purpose of demonstrating a quantum speedup in solving some (possibly contrived) mathematical problem. A difficulty of many of them, however, is that the quantum machine's output is difficult to verify. In many cases, the best known algorithm for checking the solution is equivalent to solving the full problem classically. This presents a serious issue for validation of the quantum results, because by definition the supremacy regime is that where a classical solution is hard.[1]

[1] In fact, in e.g. the Google team's recent paper [arute_quantum_2019], checking the solution is harder than simply solving the problem classically.

Figure 1: Mean time to extract the secret vector s from X-programs constructed as described in [shepherd_temporally_2009]. Shaded region is the first to third quartile of the distribution of runtimes. We observe that the time is polynomial and fast in practice even up to problem sizes of hundreds of qubits. See Section 3.2 for a discussion of the scaling. The data points were computed by applying the algorithm to 1000 unique X-programs at each problem size. The secret vector was successfully extracted for every X-program tested. Experiments were completed using one thread on an Intel 8268 "Cascade Lake" processor.

In 2009, Shepherd and Bremner introduced an efficiently-verifiable protocol [shepherd_temporally_2009] that places only meager requirements on the quantum device, making it a good candidate for near-term hardware. It requires only sampling from a quantum circuit in which the gates all commute (a class introduced by those authors as IQP, for "instantaneous quantum polynomial-time"). Furthermore, the authors demonstrate in follow-up papers [bremner_average-case_2016, bremner_classical_2011] that classically sampling from the distribution should be hard, suggesting that a "black-box" approach to cheating classically (by simply simulating the quantum device) is indeed computationally difficult, and only a couple hundred qubits would be required to make a classical solution intractable.

Importantly, however, the classical verifier in [shepherd_temporally_2009] doesn't actually check whether the prover's samples come from the correct distribution (in fact, [bremner_classical_2011] suggests doing such a check efficiently is not possible). Instead, the sampling task is designed such that bitstrings from its distribution will be orthogonal to some secret vector s with high probability, and it is this property that is checked. A question that has remained open is whether a classical machine could generate samples satisfying the orthogonality check without actually simulating the distribution. In this paper I show that the answer is yes. I give an explicit algorithm that can extract the secret vector underlying an instance of the protocol, thus making it trivial to generate orthogonal samples that pass the verifier's test. The main results of this paper are a statement of the algorithm, a proof that a single iteration of it will extract the secret vector with probability 1/2 (Theorem 3.1),[2] and empirical results demonstrating that it is efficient in practice (summarized in Figure 1).

[2] This probability can be made arbitrarily close to 1 by repetition.

The following is a summary of the paper’s structure. In Section 2, I review some points from the original paper [shepherd_temporally_2009] that are especially relevant to the analysis here. In Section 3 I describe the algorithm to extract the secret key, and therefore break the protocol’s security against classical provers. There I also discuss briefly my implementation of the algorithm. In Section 4 I discuss related protocols, and provide the secret key underlying the “$25 challenge” posted to the web by the authors of the original paper.

2 Background

Overview of protocol

At the core of the protocol in [shepherd_temporally_2009] is a sampling problem. The classical verifier generates a Hamiltonian H_P consisting of a sum of products of Pauli X operators, and asks the quantum prover to sample the probability distribution arising from measuring the state exp(iθ H_P)|0…0⟩ in the Z basis. The Hamiltonian is not exactly random, but instead is designed such that the samples x are biased such that ⟨x, s⟩ = 0[3] with high probability for some secret vector s. The classical verifier, with knowledge of s, can quickly check that the samples have such a bias. Since s should be known only to the verifier, it is conjectured in [shepherd_temporally_2009] that the only efficient way to generate such samples is by actually doing the quantum evolution. In Section 3 I show that it is possible to extract s classically from just the description of the Hamiltonian.

[3] This inner product is over F_2; all arithmetic in this paper is modulo 2 unless otherwise noted.
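To make the verifier's check concrete, here is a minimal sketch in Julia (the language of the implementation discussed in Section 3.2). The function names and the acceptance threshold are my own illustration, not part of the protocol specification:

```julia
using LinearAlgebra  # for dot

# Fraction of sample bitstrings x (rows of `samples`) with ⟨x, s⟩ = 0 mod 2.
function orthogonal_fraction(samples::AbstractMatrix{<:Integer},
                             s::AbstractVector{<:Integer})
    n_ortho = count(x -> dot(x, s) % 2 == 0, eachrow(samples))
    return n_ortho / size(samples, 1)
end

# Accept if the fraction is convincingly above what the best known classical
# strategies achieve (made precise in the Facts below); the exact threshold
# here is illustrative, not fixed by the protocol.
accept(samples, s; threshold = 0.8) = orthogonal_fraction(samples, s) > threshold
```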

X-programs

A Hamiltonian of the type used in this protocol can be described by a rectangular matrix P of binary numbers, for which each row corresponds to a Hamiltonian term. Given such a matrix (called an "X-program"), the Hamiltonian is

    H_P = ∑_i ⊗_j X_j^{P_ij}    (2.1)

In words, a 1 in P at row i and column j corresponds to the inclusion of a Pauli X operator on the j-th site in the i-th term of the Hamiltonian. The X-program also has one additional parameter θ, which is the "action": the integrated energy over time for which the Hamiltonian will be applied.

I note here that the original paper discusses matroids rather than matrices. The perspective of this paper is that we have been given an explicit matrix acting as a canonical representative for the relevant matroid; it will be sufficient here to simply discuss P as a matrix, and I will do so for the rest of the paper.
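For intuition, the evolution generated by Equation 2.1 can be simulated by brute force for a few qubits. The sketch below is my own sanity-check code, not the paper's: it uses the identity exp(iθ X_p) = cos(θ)·I + i·sin(θ)·X_p for each term (valid because X_p² = I), where X_p flips the bits indicated by row p, and all terms commute:

```julia
# Brute-force state vector of an X-program (P is an m×n 0/1 matrix, θ the action).
function xprogram_state(P::AbstractMatrix{<:Integer}, θ::Real)
    n = size(P, 2)
    ψ = zeros(ComplexF64, 2^n)
    ψ[1] = 1.0                         # |0…0⟩; bitstring x lives at index x+1
    for row in eachrow(P)
        mask = sum(Int(row[j]) << (j - 1) for j in 1:n)  # qubits this term acts on
        # exp(iθ X_p)ψ = cos(θ)ψ + i·sin(θ)·X_p ψ, with X_p|x⟩ = |x ⊻ mask⟩
        flipped = [ψ[(x ⊻ mask) + 1] for x in 0:(2^n - 1)]
        ψ = cos(θ) .* ψ .+ (im * sin(θ)) .* flipped
    end
    return ψ   # measuring in the Z basis: Pr[x] = abs2(ψ[x+1])
end
```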

Embedding a bias and verifying the output

In order to bias the output distribution along s, a submatrix with special properties is embedded within the matrix P. For a vector q and matrix P, we can define the submatrix P_q as that which is generated by deleting all rows of P that are orthogonal to q. Under this notation, our relevant submatrix is P_s, where s is the secret vector. For the output distribution to be appropriately biased, [shepherd_temporally_2009] suggests that P_s should correspond to the generator matrix of an error-correcting code; in particular, the authors suggest using a quadratic residue code and setting the action θ = π/8. As described below, this choice leads to a gap between the quantum and classical probabilities of generating samples orthogonal to s (for the best known classical strategy in [shepherd_temporally_2009]). The verifier's check is simply to request a large number of samples, and then determine whether the fraction orthogonal to s is too large to have likely been generated by the classical distribution.
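In code, the submatrix notation above is a one-liner; this helper (my own naming) keeps exactly the rows of P that are not orthogonal to q:

```julia
# P_q: the rows p of P with ⟨p, q⟩ = 1 (mod 2).
function submatrix(P::AbstractMatrix{<:Integer}, q::AbstractVector{<:Integer})
    keep = [isodd(sum(P[i, :] .* q)) for i in 1:size(P, 1)]
    return P[keep, :]
end
```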

In the two Facts below, I recall the probabilities from [shepherd_temporally_2009] corresponding to that paper’s quantum and classical strategies. The reasoning behind the classical strategy (Fact 2.2) is crucial to the rest of this paper; it is worth understanding its proof before moving on to the algorithm in Section 3.

Fact 2.1 (Quantum strategy).

Let P be an X-program which has an embedded submatrix P_s for some secret vector s, such that P_s is the generator matrix for a quadratic residue code up to permutation of rows, and let the action be θ = π/8. Let X be a random variable representing the distribution of bitstrings obtained when the n-qubit quantum state exp(iθ H_P)|0…0⟩ is measured in the Z basis, where H_P is defined as in Equation 2.1. Then,

    Pr[⟨X, s⟩ = 0] = cos²(π/8) ≈ 0.854    (2.2)

Proof.

The proof is contained in [shepherd_temporally_2009]. ∎

Fact 2.2 (Classical strategy of [shepherd_temporally_2009]).

Let d, e be two bitstrings of length n (the length of a row of P). Define P_{d,e} as the matrix generated by deleting the rows of P orthogonal to d or to e.[4] Let m be the vector sum of the rows of P_{d,e}. Letting M be the random variable representing the distribution of m when d and e are chosen uniformly at random, then

    Pr[⟨M, s⟩ = 0] = 3/4    (2.3)

[4] In [shepherd_temporally_2009] this submatrix appears under different notation; I write P_{d,e} throughout.
Proof.

With m defined as above, we have

    ⟨m, s⟩ = ∑_{p ∈ P_{d,e}} ⟨p, s⟩    (2.4)

By definition, ⟨p, s⟩ = 1 if p ∈ P_s (and 0 otherwise). Therefore computing ⟨m, s⟩ is equivalent to simply counting (mod 2) the number of rows contained in both P_{d,e} and P_s, or equivalently, counting the rows p of P_s for which ⟨p, d⟩ and ⟨p, e⟩ are both 1. We can express this using the matrix-vector products of P_s with d and e:

    ⟨m, s⟩ = ∑_{p ∈ P_s} ⟨p, d⟩⟨p, e⟩    (2.5)
           = ⟨P_s d, P_s e⟩    (2.6)

Considering that P_s is the generator matrix for an error correcting code, I denote c_d = P_s d as the encoding of d under P_s. In this notation, we have

    ⟨m, s⟩ = ⟨c_d, c_e⟩    (2.7)

Now, we note that the quadratic residue code (for which P_s is a generator matrix) has the property that any two codewords c_d and c_e have ⟨c_d, c_e⟩ = 0 iff either c_d or c_e has even parity.[5] Half of the quadratic residue code's words have even parity, and c_d and c_e are random codewords, so the probability that at least one of them has even parity is 1 − (1/2)² = 3/4. Thus, the probability that ⟨m, s⟩ = 0 is 3/4, proving the fact. ∎

[5] This can be seen from the fact that the extended quadratic residue code, created by adding a single parity bit, is self-dual (and all extended codewords have even parity) [shepherd_temporally_2009].
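Restated as code, the classical strategy of Fact 2.2 looks as follows; the variable names follow the text, and everything else is my own sketch:

```julia
# One sample from the classical strategy: pick random d, e and return the
# mod-2 sum of the rows of P non-orthogonal to both (the rows of P_{d,e}).
function classical_sample(P::AbstractMatrix{<:Integer})
    n = size(P, 2)
    d, e = rand(0:1, n), rand(0:1, n)
    m = zeros(Int, n)
    for p in eachrow(P)
        if isodd(sum(p .* d)) && isodd(sum(p .* e))
            m .= (m .+ p) .% 2
        end
    end
    return m   # satisfies ⟨m, s⟩ = 0 with probability 3/4 over d and e
end
```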

In the next section, we show that the classical strategy just described can be improved.

3 Algorithm

The classical strategy described in [shepherd_temporally_2009] and reproduced in Fact 2.2 above generates vectors that are orthogonal to s with probability 3/4. The key to this paper is that it is possible to correlate the vectors generated by that strategy, such that with probability 1/2 one may generate a large set of vectors that are all orthogonal to s. When that happens, they form a system of linear equations that can be solved to yield s. Finally, with knowledge of s it is trivial to generate samples that pass the verifier's test.

To generate such a correlated set, we follow a modified version of the original classical strategy. Instead of choosing random bitstrings for both d and e, we hold d constant, only choosing new values e_i for each vector m_i. Crucially, if the encoding c_d of d under P_s has even parity, all of the generated vectors will have ⟨m_i, s⟩ = 0 (see Theorem 3.1). This will happen with probability 1/2 over our choice of d (whenever c_d has even parity).

In practice, it is more convenient to do the linear solve if all ⟨m_i, s⟩ = 1 instead of 0 (among other things, this excludes the trivial solution x = 0 of the linear system). This can be accomplished by adding a vector r with ⟨r, s⟩ = 1 to each m_i. It turns out that the sum of all rows of P has this property; see the proof of Theorem 3.1.

The explicit algorithm for extracting the secret vector s is given in Algorithm 1.

  1. Let r = ∑_{p ∈ P} p, the sum of all rows of P.

  2. Pick d ←_R {0,1}^n.

  3. Generate a large number (say, a small multiple of n) of vectors m_i, forming the rows of a matrix M. For each:

    1. Pick e_i ←_R {0,1}^n

    2. Let m_i = r + ∑_{p ∈ P_{d,e_i}} p

  4. Via linear solve, find the set S of vectors x satisfying Mx = 1, where 1 is the vector of all ones.

  5. For each candidate vector x ∈ S:

    1. Extract P_x from P by deleting the rows of P orthogonal to x

    2. If P_x has the properties of a quadratic residue code up to row reordering (i.e. its codewords satisfy the inner-product property used in Fact 2.2), return x and exit.

  6. No candidate vector was found; return failure.

Algorithm 1 ExtractKey
The algorithm to extract the secret vector s from an X-program P. n is the number of columns of the X-program, and ←_R means "select uniformly at random from the set."
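Below is a self-contained sketch of Algorithm 1 in Julia, written to follow the steps above rather than for speed; see github.com/GregDMeyer/IQPwn for the real implementation. The row count `nsamples`, the dense GF(2) solver, and the `looks_like_qr_code` stub are my own choices and assumptions:

```julia
using LinearAlgebra  # for dot

gf2dot(a, b) = dot(a, b) % 2   # inner product over GF(2)

# All solutions of M*x = b over GF(2), via Gauss-Jordan elimination.
# Returns an empty list if inconsistent; otherwise enumerates
# 2^(number of free variables) solutions, which Section 3.1 argues is
# almost always a small set.
function gf2_solutions(M::AbstractMatrix{<:Integer}, b::AbstractVector{<:Integer})
    A = hcat(Matrix{Int}(M), Vector{Int}(b)) .% 2
    nrows, n = size(M)
    pivrows, pivcols = Int[], Int[]
    row = 1
    for col in 1:n
        pos = findfirst(i -> A[i, col] == 1, row:nrows)
        pos === nothing && continue
        piv = row + pos - 1
        A[[row, piv], :] = A[[piv, row], :]      # swap pivot into place
        for i in 1:nrows                          # eliminate this column
            if i != row && A[i, col] == 1
                A[i, :] .= (A[i, :] .+ A[row, :]) .% 2
            end
        end
        push!(pivrows, row); push!(pivcols, col)
        row += 1
        row > nrows && break
    end
    any(A[i, n+1] == 1 for i in row:nrows) && return Vector{Int}[]  # inconsistent
    free = setdiff(1:n, pivcols)
    sols = Vector{Int}[]
    for bits in 0:(2^length(free) - 1)            # enumerate free variables
        x = zeros(Int, n)
        for (k, f) in enumerate(free)
            x[f] = (bits >> (k - 1)) & 1
        end
        for (pr, pc) in zip(pivrows, pivcols)
            x[pc] = (A[pr, n+1] + gf2dot(A[pr, free], x[free])) % 2
        end
        push!(sols, x)
    end
    return sols
end

# Stub for the step-5 check: the real test is whether P_x is a quadratic
# residue code generator matrix up to row reordering. Accepting everything
# keeps the sketch runnable but is of course not a faithful check.
looks_like_qr_code(P, x) = true

# Sketch of Algorithm 1. `nsamples` (how many rows M gets) is my stand-in
# for "a large number"; any small multiple of n behaves similarly.
function extract_key(P::AbstractMatrix{<:Integer}; nsamples = 4 * size(P, 2))
    n = size(P, 2)
    r = vec(sum(P, dims = 1)) .% 2        # step 1: sum of all rows of P
    d = rand(0:1, n)                       # step 2
    M = zeros(Int, nsamples, n)            # step 3
    for i in 1:nsamples
        e = rand(0:1, n)
        m = copy(r)
        for p in eachrow(P)                # rows of P_{d,e}: ⟨p,d⟩ = ⟨p,e⟩ = 1
            if gf2dot(p, d) == 1 && gf2dot(p, e) == 1
                m .= (m .+ p) .% 2
            end
        end
        M[i, :] = m
    end
    for x in gf2_solutions(M, ones(Int, nsamples))   # steps 4 and 5
        looks_like_qr_code(P, x) && return x
    end
    return nothing                         # step 6: retry with fresh randomness
end
```

Since a single call succeeds with probability 1/2 (Theorem 3.1), a caller would simply retry extract_key with fresh randomness until a key is returned.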

3.1 Analysis

Theorem 3.1.

On input an X-program P containing a unique embedded submatrix P_s that is a generator matrix for the quadratic residue code (up to rearrangement of its rows), Algorithm 1 will output the corresponding vector s with probability 1/2.

Proof.

If s is contained in the set S found by the linear solve in step 4 of the algorithm, the correct vector will be output via the check in step 5, because there is a unique submatrix corresponding to the quadratic residue code. s will be contained in S as long as M satisfies the equation Ms = 1, where 1 is the vector of all ones. Thus we desire to show that Ms = 1 with probability 1/2.

Each row of M is

    m_i = r + m'_i,   where m'_i = ∑_{p ∈ P_{d,e_i}} p    (3.1)

for a vector r defined as

    r = ∑_{p ∈ P} p    (3.2)

Here I will show that ⟨r, s⟩ = 1 always, and that ⟨m'_i, s⟩ = 0 for all i with probability 1/2, implying that Ms = 1 with probability 1/2.

First I show that ⟨r, s⟩ = 1. r is the sum of all rows of P, so we have

    ⟨r, s⟩ = ∑_{p ∈ P} ⟨p, s⟩ = (number of rows of P_s) mod 2    (3.3)

We see that the inner product is equal, mod 2, to the number of rows in the submatrix P_s. This submatrix is a generator matrix for the quadratic residue code, which has a number of rows equal to a prime q; the number of rows is odd and thus

    ⟨r, s⟩ = 1    (3.4)

Now I turn to showing that ⟨m'_i, s⟩ = 0 for all i with probability 1/2. In the proof of Fact 2.2, it was shown that for any two vectors d and e, vectors m generated by summing the rows of P for which ⟨p, d⟩ = ⟨p, e⟩ = 1 have

    ⟨m, s⟩ = ⟨c_d, c_e⟩    (3.5)

where c_d and c_e are the encodings under P_s of d and e respectively. If d is held constant for all i, and happened to be chosen such that c_d has even parity, then ⟨m'_i, s⟩ = 0 for all i by Equation 3.5. Because half of the codewords of the quadratic residue code have even parity, for d selected uniformly at random we have ⟨m'_i, s⟩ = 0 for all i with probability 1/2.

I have shown that ⟨r, s⟩ = 1 always, and that ⟨m'_i, s⟩ = 0 for all i with probability 1/2. Therefore, with probability 1/2, we have

    ⟨m_i, s⟩ = ⟨r, s⟩ + ⟨m'_i, s⟩ = 1   for all i

Thus Ms = 1 with probability 1/2. The algorithm will output s whenever Ms = 1, proving the theorem. ∎

 

Having established that the algorithm outputs s with high probability (a probability that can be made arbitrarily close to 1 by repetition), we now turn to analyzing its runtime.

Claim 3.1 (empirical).

Algorithm 1 halts in time O(n³) on average.

All steps of the algorithm except for step 5 have O(n³) scaling by inspection. The obstacle preventing Claim 3.1 from trivially holding is that it is hard to make a rigorous statement about how large the set S of candidate vectors is. Because |S| = 2^(n − rank(M)), we'd like to show that on average, the rank of M is close to or equal to n. It seems reasonable that this would be the case: we are generating the rows of M by summing rows from P, and P must have full rank because it contains the generator matrix of an error-correcting code. But the rows of P summed into each m_i are not selected independently: they are always related via their connection to the vectors d and e_i, and it is not clear how these correlations affect the linear independence of the resulting m_i.
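The rank statistics discussed here and shown in Figure 2 only require the GF(2) rank of M; a minimal (dense, unoptimized) rank routine of my own:

```julia
# Rank of a 0/1 matrix over GF(2) by forward elimination. The number of
# unconstrained degrees of freedom is then n − gf2_rank(M), so the candidate
# set from the linear solve has size |S| = 2^(n − gf2_rank(M)).
function gf2_rank(M::AbstractMatrix{<:Integer})
    A = Matrix{Int}(M) .% 2
    nrows, ncols = size(A)
    rank = 0
    for col in 1:ncols
        pos = findfirst(i -> A[i, col] == 1, (rank + 1):nrows)
        pos === nothing && continue
        piv = rank + pos                      # actual pivot row
        A[[rank + 1, piv], :] = A[[piv, rank + 1], :]
        for i in (rank + 2):nrows
            A[i, col] == 1 && (A[i, :] .= (A[i, :] .+ A[rank + 1, :]) .% 2)
        end
        rank += 1
        rank == nrows && break
    end
    return rank
end
```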

Figure 2: (a) The average number of candidate vectors checked before the secret vector was found, when the algorithm was applied to 1000 unique X-programs at each problem size tested. We observe that the number of vectors to check is constant in n. (b) The number of unconstrained degrees of freedom n − rank(M) for matrices M generated in step 3 of Algorithm 1, for "good" choices of d such that Ms = 1. The rapidly decaying tail implies that it is rare for any more than a few degrees of freedom to remain unconstrained. The blue bars represent the distribution over 1000 unique X-programs at a fixed problem size. The algorithm was then re-run on the X-programs that had unconstrained degrees of freedom remaining, to generate the orange bars.

Despite the lack of a proof, empirical evidence supports Claim 3.1 when the algorithm is applied to X-programs generated in the manner described in [shepherd_temporally_2009]. Figure 2(a) shows the average number of candidate keys checked by the algorithm before s is found, as a function of problem size. The value is constant, demonstrating that the average size of the set S does not scale with n. Furthermore, the value is small: only about 4. This implies that M usually has high rank. In Figure 2(b) I plot explicitly the distribution of the rank of the matrix M over 1000 runs of the algorithm on unique X-programs of fixed size. The blue bars (on the left of each pair) show the distribution over all X-programs tested, and the sharply decaying tail supports the claim that low-rank M almost never occur.

A natural next question is whether there is some feature of the X-programs in that tail that causes M to be low rank. To investigate that question, the algorithm was re-run 100 times on each of the X-programs that had unconstrained degrees of freedom in the blue distribution. The orange bars of Figure 2(b) (on the right of each pair) plot the distribution of n − rank(M) for that second run. The similarity of the blue and orange distributions demonstrates that the rank of M is not correlated between runs; that is, the low rank of M in the first run was not due to any feature of the input X-programs. From a practical perspective, this data suggests that if the rank of M is found to be unacceptably low, the algorithm can simply be re-run with new randomness, and the rank of M is likely to be higher the second time.

3.2 Implementation

An implementation of Algorithm 1 in the programming language Julia is available online at github.com/GregDMeyer/IQPwn. That repository also contains the code for generating the figures in this paper. Figure 1 shows the runtime of this implementation for various problem sizes; experiments were completed using one thread on an Intel 8268 "Cascade Lake" processor.

Note that Figure 1 shows O(n²) scaling, rather than the O(n³) from Claim 3.1. This is due to data-level parallelism in the implementation. Vectors are stored as the bits of 64-bit integers, so operations like vector addition can be performed on 64 elements at once via bitwise operations. Furthermore, with AVX SIMD CPU instructions, those operations can be applied to multiple 64-bit integers in one CPU cycle. Thus, for n of order 100, the ostensibly O(n) vector inner products and vector sums are performed in constant time, removing one factor of n from the runtime. The tests in Figure 1 were performed on a CPU with 512-bit vector units.
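To give the flavor of that bit-packing, here is a minimal version of my own (the repository's optimized kernels differ): a GF(2) vector is stored in UInt64 words, and an inner product becomes AND, popcount, parity:

```julia
# Pack a 0/1 vector into 64-bit words; the bit ordering within a word is an
# arbitrary choice of this sketch.
pack_bits(v) = [reduce(|, UInt64(v[j]) << ((j - 1) % 64) for j in i:min(i + 63, length(v)))
                for i in 1:64:length(v)]

# ⟨a, b⟩ mod 2 for packed vectors: AND the words, popcount, take the parity.
# A vectorizing compiler can process several words per instruction
# (AVX-512 covers eight UInt64s at once), which is the parallelism described above.
function packed_gf2_dot(a::Vector{UInt64}, b::Vector{UInt64})
    acc = 0
    @inbounds for i in eachindex(a, b)
        acc += count_ones(a[i] & b[i])
    end
    return acc & 1
end

# Usage: packed_gf2_dot(pack_bits(u), pack_bits(w))
```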

4 Discussion

Modifications to the protocol

A natural question is whether it is possible to modify the original protocol such that this attack is not successful. Perhaps P can be engineered such that either 1) it is not possible to generate a large number of vectors that all have a known inner product with s, or 2) the rank of the matrix formed by these generated vectors will never be sufficiently high to allow solution of the linear system.

For 1), our ability to generate many vectors orthogonal to s relies on the fact that the extended quadratic residue code is even and self-dual. These characteristics imply that if c_d has even parity, all of our subsequently generated vectors will have inner product 0 with s. Building P_s via a code without the self-dual property would remove this possibility, though the challenge is to do so while still maintaining the bias in the quantum case, and without opening up a new avenue to discovering s. I leave that pursuit open.

For 2), the main obstacle is that the matrix P must have full rank, because embedded in it is the generator matrix of an error-correcting code. The only hope is to somehow engineer the matrix such that linear combinations generated in the specific way described above will not be linearly independent. It's not at all clear how one would do that; furthermore, adding structure to the previously-random extra rows of P runs the risk of providing even more information about the secret vector s. Perhaps one could prove that the rank of M will be large even for worst-case inputs P; this would also be an interesting future direction.

Protocols with provable hardness

The attack described in this paper reiterates the value of building protocols for which passing the check itself, rather than just simulating the quantum device, can be shown to be hard under well-established complexity-theoretic assumptions. For example, the protocol given in [brakerski_cryptographic_2019] is secure under the hardness assumption of Learning With Errors. Unfortunately, such rigorous results come with a downside, which is an increase in the size and complexity of circuits that must be run on the quantum device. Exploring simplified protocols that are provably secure is an interesting area for further research.

Complexity theoretic implications

Conjecture 3.2 of [shepherd_temporally_2009] asks whether the language of matroids with a hidden sub-matroid is NP-complete. In rough terms, it asks if a machine can efficiently decide whether a given matrix contains a hidden submatrix corresponding to a generator matrix for the quadratic residue code (up to permutations of rows). The results in this paper cannot make a strong statement about this conjecture; I have only empirically established that Algorithm 1 halts in polynomial time in the average case. It's possible that there exists some class of worst-case instances of P for which the algorithm is not efficient; this would be an interesting area for further research.

The $25 challenge

At quantumchallenges.wordpress.com, the authors of [shepherd_temporally_2009] posed a challenge. They posted a specific instance of the matrix P, and offered $25 to anyone who could send them samples passing the verifier's check. The secret vector s corresponding to their challenge matrix is (encoded as a base-64 string):

BilbHzjYxrOHYH4OlEJFBoXZbps4a54kH8flrRgo/g==

The code used to extract the secret vector, as well as a set of samples that should pass the check for their challenge matrix, can be found at github.com/GregDMeyer/IQPwn. If you’d like to convert the above key into a binary string, you can use the b64tobin script in the examples/ directory of that repository.
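Alternatively, the key can be decoded with just the Julia standard library. A one-off equivalent of that script (the most-significant-bit-first byte order here is my assumption; the repository's b64tobin is authoritative):

```julia
using Base64

key = "BilbHzjYxrOHYH4OlEJFBoXZbps4a54kH8flrRgo/g=="
bits = join(string(b, base = 2, pad = 8) for b in base64decode(key))
println(bits)   # the secret vector s as a 0/1 string
```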

Summary and outlook

In this paper, I have described a classical algorithm that passes the interactive quantum test described in [shepherd_temporally_2009]. I have proven that a single iteration of the algorithm will return the underlying secret vector with probability 1/2, and empirically established that it is efficient in practice. The immediate implication of this result is that the protocol from [shepherd_temporally_2009] in its original form is no longer effective as a test of quantumness. While it may be possible to reengineer that protocol to thwart this attack, this paper reiterates the value of proving the security of the verification step. Protocols with provable security are valuable on their own, but can also be used as building blocks for new, more complex results (see, for example, classically verifiable quantum computation in [mahadev_classical_2018], building off the protocol of [brakerski_cryptographic_2019]). As quantum hardware begins to surpass the abilities of classical machines, quantum cryptographic tools will play an important role in making quantum computation available as a service. Establishing the security of these protocols is an important first step.

Acknowledgements

The author is supported by the National Defense Science and Engineering Graduate Fellowship (NDSEG).

References