Adversarial quantum circuit learning for pure state approximation

06/01/2018 · by Marcello Benedetti, et al. · UCL

Adversarial learning is one of the most successful approaches to modelling high-dimensional probability distributions from data. The quantum computing community has recently begun to generalize this idea and to look for potential applications. In this work, we derive an adversarial algorithm for the problem of approximating an unknown quantum pure state. Although this could be done on error-corrected quantum computers, the adversarial formulation enables us to execute the algorithm on near-term quantum computers. Two ansatz circuits are optimized in tandem: One tries to approximate the target state, the other tries to distinguish between target and approximated state. Supported by numerical simulations, we show that resilient backpropagation algorithms perform remarkably well in optimizing the two circuits. We use the bipartite entanglement entropy to design an efficient heuristic for the stopping criteria. Our approach may find application in quantum state tomography.


I Introduction

In February 1988 Richard Feynman wrote on his blackboard: ‘What I cannot create, I do not understand’ The Caltech Archives (1988). Since then this powerful dictum has been reused and reinterpreted in the context of many fields throughout science. In the context of machine learning, it is often used to describe generative models, algorithms that can generate realistic synthetic examples of their environment and therefore are likely to ‘understand’ such an environment.

Generative models are algorithms trained to approximate the joint probability distribution of a set of variables, given a dataset of observations. Conceptually, the quantum generalization is straightforward: quantum generative models are algorithms trained to approximate the wave function of a set of qubits, given a dataset of quantum states. This process of approximately reconstructing a quantum state is already known to physicists under the name of quantum state tomography. Indeed, there already exist proposals of generative models for tomography, such as the quantum principal component analysis Lloyd et al. (2014) and the quantum Boltzmann machine Amin et al. (2018); Kieferová and Wiebe (2017). Other machine learning approaches for tomography have been formulated using the different framework of probably approximately correct learning Aaronson (2007); Rocchetto et al. (2017). Hence, machine learning could provide a new set of tools to physicists. Going the other way, quantum mechanics could provide a new set of tools to machine learning practitioners for tackling classical tasks. As an example, Born machines Cheng et al. (2018); Benedetti et al. (2018) use the probabilistic interpretation of the quantum wave function to reproduce the statistics observed in classical data. Identifying classical datasets that can be modelled better via quantum correlations is an interesting open question in itself Perdomo-Ortiz et al. (2018).

One of the most successful approaches to generative models is that of adversarial algorithms in which a discriminator is trained to distinguish between real and generated samples, and a generator is trained to confuse the discriminator Goodfellow et al. (2014). The intuition is that if a generator is able to confuse a perfect discriminator, then it means it can generate realistic synthetic examples. Recently, researchers have begun to generalize this idea to the quantum computing paradigm Dallaire-Demers and Killoran (2018); Lloyd and Weedbrook (2018) where the discriminator is trained to distinguish between two sources of quantum states. The discrimination of quantum states is so important that it was among the first problems ever considered in the field of quantum information theory Helstrom (1969). The novelty of adversarial algorithms is in using the discriminator’s performance to provide a learning signal for the generator.

But how do generative models stand with respect to state-of-the-art algorithms already in use on quantum hardware? The work on the variational quantum eigensolver Peruzzo et al. (2014) shows that parametrized quantum circuits can be used to extract properties of quantum systems, e.g., the electronic energy of molecules. Similarly, the work on quantum approximate optimization Farhi et al. (2014) shows that parametrized quantum circuits can be used to obtain good approximate solutions to hard combinatorial problems, e.g., the max-cut. All these problems consist of finding the ground state of a well-defined, task-specific Hamiltonian. However, in generative models the problem is somewhat inverted. We ask the question: What is the Hamiltonian that could have generated the statistics observed in the dataset? Although some work has been done in this direction Verdon et al. (2017); Benedetti et al. (2018), much effort is required to scale these models to a relevant size. Moreover, it would be preferable for models to make no unnecessary assumption about the data. These are the aspects where we expect adversarial quantum circuit learning to stand out.

Notably, adversarial quantum circuits do not perform quantum state tomography in the strict sense, since the entries of the target density matrix are never read out explicitly. Instead, they perform an implicit state tomography by learning the parameters of the generator circuit, i.e., an implicit description of the resulting state. Hence, this approach does not suffer from the exponential cost incurred by the long sequence of adaptive measurements required in standard state tomography. This is because, as we will see, only one qubit needs to be measured in order to train and adapt the circuit. The subtlety here is that an exponential cost could still be incurred through a non-converging training process. However, we did not observe this in practice. Our results also allow for a range of potential applications, which we detail below.

As a first example of interest to physicists, one can use the approach to find a Tensor Network representation of a complex target state. In this scenario, the structure of the generator circuit is set up as a Tensor Network and the method learns its parameters. The only assumption here is that the target state can be loaded to the quantum computer via a physical interface with the external world. As a second example of interest to computer scientists, one can use the approach to ‘compile’ a known sequence of gates to a different or simpler sequence. In this scenario, the target is the state generated by the known sequence of gates, and the generator is the ‘compiled’ circuit. This could have concrete applications such as the translation of circuits from superconducting to ion trap gate sets.

In this manuscript, we start from information theoretic arguments and derive an adversarial algorithm that learns to generate approximations to a target pure quantum state. We parametrize generator and discriminator circuits similarly to other variational approaches, and analyze their performance with numerical simulations. Our approach is designed to make use of near-term quantum hardware to its fullest extent, including for the estimation of the gradients necessary to learn the circuits. Optimization is performed using an adaptive gradient descent method known as resilient backpropagation (Rprop) Riedmiller and Braun (1993), which performs well when the error surface is characterized by large plateaus with small gradient, and only requires that the sign of the gradient can be ascertained. We provide a heuristic method to assess the learning, which can in turn be used to design a stopping criterion. Although our simulations are carried out in the context of noisy intermediate-scale quantum (NISQ) computers Preskill (2018), we discuss long-term realizations of the adversarial algorithm on universal quantum computers.

II Method

Consider the problem of generating a pure state ρ_g close to an unknown pure target state ρ_t, where closeness is measured with respect to some distance metric to be chosen. Hereby we use subscripts g and t to label ‘generated’ and ‘target’ states, respectively. The unknown target state is provided a finite number of times by a channel. If we were able to learn the state preparation procedure, then we could generate as many ‘copies’ as we want and use these in a subsequent application. We now describe a game between two players whose outcome is an approximate state preparation for the target state.

Borrowing language from the literature of adversarial machine learning, the two players are called the generator and the discriminator. The task of the generator is to prepare a quantum state and fool the other player into thinking that it is the true target state. Thus, the generator is a unitary transformation G applied to some known initial state, say |0⟩, so that ρ_g = G|0⟩⟨0|G†. We will discuss the generator’s strategy later.

The discriminator has the task of distinguishing between the target state and the generated state. It is presented with the mixture ρ = p_t ρ_t + p_g ρ_g, where p_t and p_g are prior probabilities summing to one. Note that in practice the discriminator sees one input at a time rather than the mixture of density matrices, but we can treat the uncertainty in the input state using this picture. The discriminator performs a positive operator-valued measurement (POVM) {E_b} on the input, so that Σ_b E_b = I. According to Born’s rule, measurement outcome b is observed with probability p(b) = tr[E_b ρ]. The outcome is then fed to a decision rule, a function that estimates which of the two states was provided in input.
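To make this measurement model concrete, the following NumPy sketch (ours, for illustration; the states, priors, and computational-basis POVM are arbitrary choices, not the paper’s circuits) computes Born-rule outcome probabilities and the Bayes decision for a single-qubit example.

```python
import numpy as np

# Single-qubit example: target |0><0|, generated |+><+|, equal priors.
ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_t = np.outer(ket0, ket0.conj())
rho_g = np.outer(ket_plus, ket_plus.conj())
p_t, p_g = 0.5, 0.5

# Example POVM: projective measurement in the computational basis.
E = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
assert np.allclose(E[0] + E[1], np.eye(2))  # POVM elements sum to the identity

rho = p_t * rho_t + p_g * rho_g             # mixture seen by the discriminator
for b, E_b in enumerate(E):
    p_b = np.real(np.trace(E_b @ rho))      # Born's rule: p(b) = tr[E_b rho]
    joint_t = p_t * np.real(np.trace(E_b @ rho_t))
    joint_g = p_g * np.real(np.trace(E_b @ rho_g))
    label = 't' if joint_t >= joint_g else 'g'   # Bayes decision rule
    print(f"outcome {b}: p(b) = {p_b:.3f}, label = {label}")
```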

A straightforward application of Bayes’ theorem suggests that the decision rule should select the label for which the posterior probability is maximal, i.e., the label argmax_{c∈{t,g}} p(c|b). This rule is called the Bayes’ decision function and is optimal in the sense that, given an optimal POVM, any other decision function has a larger probability of error Fuchs (1996). Recalling that max_c p(c|b) is the probability of the correct decision using the Bayes decision function, we can formulate the probability of error as

P_err = 1 − Σ_b max_{c∈{t,g}} p_c tr[E_b ρ_c].    (1)

We observe that the choice of POVM plays a key role here; the discriminator should consider finding the best possible one. Therefore, we can write the objective function for the discriminator in variational form as

P*_err = min_{{E_b}} ( 1 − Σ_b max_{c∈{t,g}} p_c tr[E_b ρ_c] ),    (2)

where the minimization is over all possible POVM elements, and the number of POVM elements is unconstrained.

It was Helstrom who carefully designed a POVM achieving the smallest probability of error when a single sample of ρ is provided Helstrom (1969). He showed that the optimal discriminator comprises two elements, E_t and E_g, which are diagonal in a basis that diagonalizes p_t ρ_t − p_g ρ_g. When the outcome associated with E_t is observed, the state is labeled as ‘target’; when the outcome associated with E_g is observed, the state is labeled as ‘generated’. This is the discriminator’s optimal strategy as it minimizes the probability of error in Eq. (2). Unfortunately, designing such a measurement would require knowledge of the target state beforehand, contradicting the purpose of the game at hand. Yet we now know that the optimal POVM comprises only two elements. Using this information, and plugging Eq. (1) in Eq. (2), we obtain Fuchs (1996)

P_err = p_t + tr[E_t (p_g ρ_g − p_t ρ_t)],    (3)

where we used E_g = I − E_t from the definition of POVM. We now return to the generator and outline its strategy. Assuming the discriminator to be optimal, the generator achieves success by maximizing the probability of error with respect to the generated state ρ_g. The result is a zero-sum game similar to that of generative adversarial networks Goodfellow et al. (2014) and described by

min_{E_t} max_{ρ_g} tr[E_t (p_g ρ_g − p_t ρ_t)],    (4)

where we dropped the constant terms. Now suppose that the game is carried out in turns. On the one side, the discriminator is after an unknown Helstrom measurement which changes over time as the generator plays. On the other side, the generator tries to imitate an unknown target state exploiting the signal provided by the discriminator.

Note that when p_t = p_g = 1/2, the probability of error in Eq. (2) is related to the trace distance between quantum states Nielsen and Chuang (2011)

P*_err = 1/2 − 1/2 D(ρ_t, ρ_g),
D(ρ_t, ρ_g) = 1/2 tr|ρ_t − ρ_g| = max_{0 ≤ P ≤ I} tr[P (ρ_t − ρ_g)].    (5)

This is clearer from the variational definition in the second line. Hence, by playing the minimax game above with equal prior probabilities, we are implicitly minimizing the trace distance between target and generated state. We will use the trace distance to analyze the learning progress in our simulations. In practice though, one does not have access to the optimal POVM in Eq. (5), because that would require, once again, the Helstrom measurement. We discuss this ideal scenario in Section II.4 where we require the availability of a universal quantum computer. We shall now consider the case of implementation in NISQ computers where, due to the infeasibility of computing Eq. (5), we need to design a heuristic for the stopping criterion.
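In classical simulations, though, both the trace distance in Eq. (5) and the Helstrom error can be evaluated directly from the eigenvalues of a Hermitian matrix. A minimal NumPy sketch of this computation, which we use only as an analysis tool (it is not executable on the quantum device itself), is given below.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) tr|rho - sigma|, via eigenvalues of rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def helstrom_error(rho_t, rho_g, p_t=0.5, p_g=0.5):
    """Minimum probability of error for discriminating rho_t and rho_g:
    P*_err = (1/2) * (1 - || p_t rho_t - p_g rho_g ||_1)."""
    eigs = np.linalg.eigvalsh(p_t * rho_t - p_g * rho_g)
    return 0.5 * (1.0 - np.sum(np.abs(eigs)))

# Orthogonal states are perfectly distinguishable: D = 1, P*_err = 0.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
rho0 = np.outer(ket0, ket0.conj())
rho1 = np.outer(ket1, ket1.conj())
print(trace_distance(rho0, rho1), helstrom_error(rho0, rho1))
```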

Finally, we note that this game, based on the Bayesian probability of error, assumes the availability of one copy of ρ at each turn. A more general minimax game could be designed based on the quantum Chernoff bound, assuming the availability of multiple copies at each turn Audenaert et al. (2007); Fuchs (1996).

II.1 Near-term implementation on NISQ computers

We now discuss how the game could be played in practice using noisy quantum computers and no error correction. First, we assume the ability to efficiently provide the unknown target state as an input. In realistic scenarios, the target state would come from an external channel and would be loaded in the quantum computer’s register with no significant overhead. For example, the source may be the output of another quantum computer, while the channel may be a quantum internet.

Second, the generator’s unitary transformation shall be implemented by a parametrized quantum circuit applied to a known initial state. Note that target and generated states have the same number of qubits and they are never input together, but rather as a mixture with prior probabilities p_t and p_g, respectively, i.e., randomly selected with a certain prior probability. Hence they can be prepared in the same quantum register.

Third, resorting to Neumark’s dilation theorem Neumark (1940), the discriminator’s POVM shall be realized as a unitary transformation followed by a projective measurement on an extended system. This extended system consists of the quantum register shared by the target and generated states, plus an ancilla register initialized to a known state. Notice that the number of basis states of the ancillary system needs to match the number of POVM elements. Because here we specifically require two POVM elements, the ancillary system consists of just one ancilla qubit. The unitary transformation on this extended system is also implemented by a parametrized quantum circuit. The measurement is described by projectors on the state space of the ancilla, and the two possible outcomes, 0 and 1, are respectively associated with labels ‘target’ and ‘generated’.
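The dilation argument can be checked numerically: any unitary on the extended system, followed by a projective measurement of the ancilla, induces a valid two-element POVM on the main register. The sketch below (our illustration with a random unitary, not the trained discriminator) verifies this.

```python
import numpy as np
from scipy.stats import unitary_group

n = 2                              # main-register qubits (illustrative)
dim = 2 ** n
U = unitary_group.rvs(2 * dim)     # random unitary on main register + ancilla

# With ordering M (x) A, the input |psi>_M (x) |0>_A selects the even columns
# of U, giving the isometry V |psi> = U (|psi> (x) |0>).
V = U[:, ::2]

# Projectors on the ancilla: outcome 0 -> 'target', outcome 1 -> 'generated'.
P0 = np.kron(np.eye(dim), np.diag([1.0, 0.0]))
P1 = np.kron(np.eye(dim), np.diag([0.0, 1.0]))

# Induced POVM elements on the main register: E_b = V^dag P_b V.
E0 = V.conj().T @ P0 @ V
E1 = V.conj().T @ P1 @ V
assert np.allclose(E0 + E1, np.eye(dim))   # a valid two-element POVM
```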

Depending on the characteristics of the circuits, such as type of gates, depth, and connectivity, we will be able to explore regions of the Hilbert space with the generator, and explore regions of the cone of positive operators with the discriminator.

As a concrete example, assume that the unknown n-qubit target state ρ_t is prepared in the main register M. We construct a generator circuit G where each gate is either fixed, e.g., a CNOT, or parametrized. Parametrized gates are often of the form G_k = exp(−i θ_g^k Σ_k / 2), where θ_g^k is a real-valued parameter and Σ_k is a tensor product of Pauli matrices. The generator acts on the initial state |0⟩ and prepares ρ_g = G|0⟩⟨0|G† in the main register M. We then similarly construct a discriminator circuit D acting non-trivially on both main register M and ancilla qubit A. Each gate is either fixed or parametrized as D_k = exp(−i θ_d^k Σ_k / 2), where θ_d^k is real valued and Σ_k is a tensor product of Pauli matrices. We measure the ancilla qubit using projectors P_b = I_M ⊗ |b⟩⟨b|_A with b ∈ {0, 1}. Collecting parameters for generator and discriminator into vectors θ_g and θ_d, respectively, the minimax game in Eq. (4) can be written as min_{θ_g} max_{θ_d} V(θ_g, θ_d) with value function

V(θ_g, θ_d) = p_t tr[P_0 D (ρ_t ⊗ |0⟩⟨0|_A) D†] − p_g tr[P_0 D (ρ_g ⊗ |0⟩⟨0|_A) D†],    (6)

where the dependence of D on θ_d and of ρ_g on θ_g is left implicit.

Each player optimizes the value function in turn. This optimization can in principle be done via different approaches (e.g., gradient-free, first-, second-order methods, etc.) depending on the computational resources available. Here we discuss a simple method of alternated optimization by gradient descent/ascent starting from randomly initialized parameters θ_g and θ_d. That is, we perform iterations of the form θ_d ← θ_d + ε_d ∇_{θ_d} V and θ_g ← θ_g − ε_g ∇_{θ_g} V.
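Schematically, the training loop looks as follows. Here estimate_grad_g and estimate_grad_d are hypothetical placeholders for the parameter-shift estimators of Eqs. (7) and (9) below, and the initialization interval is illustrative.

```python
import numpy as np

def train(estimate_grad_g, estimate_grad_d, n_params_g, n_params_d,
          eps_g=0.01, eps_d=0.01, iterations=500, seed=0):
    """Alternated gradient ascent (discriminator) / descent (generator) on the
    value function V; the gradient estimators are placeholder callables."""
    rng = np.random.default_rng(seed)
    theta_g = rng.uniform(-np.pi, np.pi, n_params_g)  # illustrative init
    theta_d = rng.uniform(-np.pi, np.pi, n_params_d)
    for _ in range(iterations):
        theta_d = theta_d + eps_d * estimate_grad_d(theta_g, theta_d)  # ascent
        theta_g = theta_g - eps_g * estimate_grad_g(theta_g, theta_d)  # descent
    return theta_g, theta_d
```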

To start with, we need to compute the gradient of the value function with respect to the parameters. The favorable properties of the tensor products of Pauli matrices appearing in our gate definitions allow computation of the analytical gradient using the method proposed in Ref. Mitarai et al. (2018). For the generator, the partial derivatives read

∂V/∂θ_g^k = −(p_g/2) ( tr[P_0 D (G_+^k |0⟩⟨0| G_+^{k†} ⊗ |0⟩⟨0|_A) D†] − tr[P_0 D (G_−^k |0⟩⟨0| G_−^{k†} ⊗ |0⟩⟨0|_A) D†] ),    (7)

where

G_±^k = G(θ_g^1, …, θ_g^k ± π/2, …, θ_g^K).    (8)

Note that G_+^k and G_−^k can be interpreted as two new circuits, each one differing from G by an offset of ±π/2 to parameter θ_g^k. Hence, for each parameter θ_g^k, we are required to execute the circuit compositions D G_+^k and D G_−^k on the initial state and measure the ancilla qubit. Because these auxiliary circuits have depth similar to that of the original circuit, estimation of the gradient is efficient. Interestingly, up to a scale factor, the analytical gradient is equal to the central finite difference approximation carried out at ±π/2.
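A minimal sketch of this parameter-shift rule, assuming a callable expectation(theta) that returns the measured expectation of the ancilla projector for a given parameter vector (a hypothetical interface, not a specific library API):

```python
import numpy as np

def parameter_shift_grad(expectation, theta, k):
    """Analytic partial derivative for gates of the form exp(-i*theta*Sigma/2),
    following Mitarai et al. (2018): shift parameter k by +/- pi/2 and take
    half the difference of the two measured expectation values."""
    shift = np.zeros_like(theta)
    shift[k] = np.pi / 2
    return 0.5 * (expectation(theta + shift) - expectation(theta - shift))
```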

Similarly, the analytical partial derivatives for the discriminator read

∂V/∂θ_d^k = (p_t/2) ( tr[P_0 D_+^k (ρ_t ⊗ |0⟩⟨0|_A) D_+^{k†}] − tr[P_0 D_−^k (ρ_t ⊗ |0⟩⟨0|_A) D_−^{k†}] ) − (p_g/2) ( tr[P_0 D_+^k (ρ_g ⊗ |0⟩⟨0|_A) D_+^{k†}] − tr[P_0 D_−^k (ρ_g ⊗ |0⟩⟨0|_A) D_−^{k†}] ),    (9)

where

D_±^k = D(θ_d^1, …, θ_d^k ± π/2, …, θ_d^K).    (10)

In this case, for each parameter θ_d^k we are required to execute four auxiliary circuit compositions: D_+^k and D_−^k on the target state, while D_+^k G and D_−^k G on the initial state.

Finally, all parameters are updated by gradient descent/ascent

θ_g ← θ_g − ε_g ∇_{θ_g} V,   θ_d ← θ_d + ε_d ∇_{θ_d} V,    (11)

where ε_g and ε_d are hyperparameters determining the step sizes. Here we rely on the fine-tuning of these, as opposed to Newton’s method, which makes use of the Hessian matrix to determine step sizes for all parameters. Other researchers Dallaire-Demers and Killoran (2018) designed circuits to estimate the analytical gradient and the Hessian matrix. Such an approach requires the ability to execute complex controlled operations and is expected to require error correction. Our approach and others’ Mitarai et al. (2018); Liu and Wang (2018) require much simpler circuits, which is desirable for implementation on NISQ computers.

As we discuss next, accelerated gradient techniques developed by the deep learning community can further improve our method.

II.2 Optimization by resilient backpropagation

If we could minimize the trace distance in Eq. (5) directly over the set of density matrices, then the problem would be convex Nielsen and Chuang (2011). However, in this paper we deal with a potentially non-convex problem due to the optimization of exponentiated parameters and hence the introduction of sine and cosine functions.

A recent paper McClean et al. (2018) suggested that the error surface of circuit learning problems is challenging for gradient-based methods due to the existence of barren plateaus. In particular, the region where the gradient is close to zero does not correspond to local minima of interest, but rather to an exponentially large plateau of states that have exponentially small deviations in the objective value from that of the totally mixed state. While the derivation of the above statement is for a class of random circuits, in practice we prefer to deal with highly structured circuits Grant et al. (2018); Chen et al. (2018). Moreover, here we argue that the existence of plateaus does not necessarily pose a problem for the learning of quantum circuits, provided that the sign of the gradient can be resolved. To validate this claim we refer to the classical literature and argue that similar problems have traditionally occurred also in classical neural network training and allow for efficient solutions.

Typical gradient-based methods update the parameters with steps of the form

θ_i^{(t+1)} = θ_i^{(t)} − ε (∂E/∂θ_i)^{(t)},    (12)

where θ_i^{(t)} is the i-th parameter at time t, ε is the step size, E is the error function to be minimized, and the superscript indicates evaluation at time t. If the step size is too small, the updates are also scaled to be too small, resulting in a long time to convergence. If the step size is too large, this can lead to oscillatory behavior of the updates or even to divergence. One of the early approaches to counter this behavior was the introduction of a momentum term, which takes into account the previous steps when calculating the current update. The gradient descent with momentum (GDM) reads

θ_i^{(t+1)} = θ_i^{(t)} − ε (∂E/∂θ_i)^{(t)} + γ (θ_i^{(t)} − θ_i^{(t−1)}),    (13)

where γ is a momentum hyperparameter. Momentum methods produce some resilience to plateaus in the error surface, but they lose this resilience when the plateaus are characterized by having very small or zero gradient.

A family of optimizers known as resilient backpropagation algorithms (Rprop) Riedmiller and Braun (1993) is particularly well suited for problems where the error surface is characterized by large plateaus with small gradient. Rprop algorithms adapt the step size for each parameter based on the agreement between the sign of its current and previous partial derivatives. If the signs of the two derivatives agree, then the step size for that parameter is increased multiplicatively. This allows the optimizer to traverse large areas of small gradient with an increasingly high speed. If the signs disagree, it means that the last update for that parameter was large enough to jump over a local minimum. To fix this, the parameter is reverted to its previous value and the step size is decreased multiplicatively. Rprop is therefore resilient to gradients with very small magnitude as long as the sign of the partial derivatives can be determined.

We use a variant known as iRprop Igel and Hüsken (2000) which does not revert a parameter to its previous value when the signs of the partial derivatives disagree. Instead, it sets the current partial derivative to zero so that the parameter is not updated, but its step size is still reduced. The hyperparameters and pseudocode for iRprop are described in Algorithm 1.

1: Input: error function E, initial parameters θ_i^{(0)}, initial step size Δ^{(0)}, minimum allowed step size Δ_min, maximum allowed step size Δ_max, step size decrease factor η⁻ < 1, and step size increase factor η⁺ > 1
2: Δ_i ← Δ^{(0)} and (∂E/∂θ_i)^{(0)} ← 0 for all i
3: repeat
4:     for each parameter θ_i do
5:         if (∂E/∂θ_i)^{(t−1)} · (∂E/∂θ_i)^{(t)} > 0 then
6:             Δ_i ← min(η⁺ Δ_i, Δ_max)
7:         else if (∂E/∂θ_i)^{(t−1)} · (∂E/∂θ_i)^{(t)} < 0 then
8:             Δ_i ← max(η⁻ Δ_i, Δ_min)
9:             (∂E/∂θ_i)^{(t)} ← 0
10:        end if
11:        θ_i^{(t+1)} ← θ_i^{(t)} − sign((∂E/∂θ_i)^{(t)}) Δ_i
12:    end for
13: until convergence
Algorithm 1 iRprop Igel and Hüsken (2000)
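For reference, a compact NumPy implementation of this iRprop variant is sketched below; the hyperparameter values are illustrative defaults, not those used in our experiments.

```python
import numpy as np

def irprop_minus(grad, theta, n_iter=1000, step0=0.05,
                 step_min=1e-6, step_max=1.0, eta_minus=0.5, eta_plus=1.2):
    """Sketch of the iRprop- update of Igel and Hüsken (2000). `grad` returns
    the gradient of the error function at `theta`."""
    step = np.full_like(theta, step0)
    g_prev = np.zeros_like(theta)
    for _ in range(n_iter):
        g = grad(theta).copy()
        same = g * g_prev > 0          # signs agree: grow the step size
        flip = g * g_prev < 0          # signs disagree: shrink the step size
        step[same] = np.minimum(step[same] * eta_plus, step_max)
        step[flip] = np.maximum(step[flip] * eta_minus, step_min)
        g[flip] = 0.0                  # iRprop-: zero the derivative, no revert
        theta = theta - np.sign(g) * step
        g_prev = g
    return theta

# Example: minimize the quadratic E(theta) = sum((theta - 1)^2).
print(irprop_minus(lambda t: 2 * (t - 1.0), np.array([5.0, -3.0])))  # ~[1, 1]
```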

Despite the resilience of Rprop, if the magnitude of the gradient in a given direction is so small that the sign cannot be determined, then the algorithm will not take a step in that direction. Furthermore, the noise coming from the finite number of samples could cause the sign to flip at each iteration. This would quickly make the step size very small and the optimizer could get stuck on a barren plateau.

One possible modification is an explorative version of Rprop that explores areas with zero or very small gradient at the beginning of training, but still converges at the end of training. First, any zero or small gradient at the very beginning of training could be replaced by a positive gradient to ensure an initial direction is always defined. Second, one could use large step size factors and decrease them during training to allow for convergence to a minimum. Finally, an explorative Rprop could remember the sign of the last suitably large gradient and take a step in that direction whenever the current gradient is zero. This way, when the optimizer encounters a plateau, it would traverse the plateau from the same direction it entered. We leave investigation of an explorative Rprop algorithm to future work.

II.3 Heuristic for the stopping criterion

Evaluating the performance of generative models is often intractable and can be done only via application-dependent heuristics Theis et al. (2016); Salimans et al. (2016). This is also the case for our model as the value function in Eq. (6) does not provide information about the generator’s performance, unless the discriminator is optimal. Unfortunately, we do not always have access to an optimal discriminator (more on this in Section II.4). We now describe an efficient method that can be used to assess the learning in the quantum setting. In turn, this can be used to define a stopping criterion for the adversarial game.

We begin by recalling that the discriminator makes use of projective measurements on an ancilla register A to effectively implement a POVM. Should the ancilla register be maximally entangled with the main register M, its reduced density matrix would correspond to that of a maximally mixed state. Performing projective measurements on the maximally mixed state would then result in uniformly random outcomes and decisions.

Ideally, the discriminator would encode all relevant information in the ancilla register and then remove all its correlations with the main register, obtaining a product state ρ_M ⊗ ρ_A. Hereby we use subscripts M and A to indicate the reduced states, on the two registers, of the state output by the discriminator circuit. This scenario is similar in spirit to the uncomputation technique used in many quantum algorithms Bennett et al. (1997).

The bipartite entanglement entropy (BEE) is a measure that can be used to quantify how much entanglement there is between two partitions

S(ρ_M) = −tr[ρ_M ln ρ_M] = −tr[ρ_A ln ρ_A] = S(ρ_A),    (14)

where ρ_M and ρ_A are reduced density matrices obtained by tracing out one of the partitions, i.e., by ignoring one of the registers. The BEE is intractable in general, but here we can exploit its symmetry and compute it on the smallest partition, i.e., the ancilla register A. Because this register consists of a single qubit, BEE reduces to

S(ρ_A) = −[(1 − ‖r‖)/2] ln[(1 − ‖r‖)/2] − [(1 + ‖r‖)/2] ln[(1 + ‖r‖)/2],    (15)

where r = (r_x, r_y, r_z) is the Bloch vector of ρ_A = (I + r·σ)/2, such that r_x = ⟨σ_x⟩, r_y = ⟨σ_y⟩, and r_z = ⟨σ_z⟩. The three components of the Bloch vector can be estimated using tomography techniques for a single qubit, for which we refer to the excellent review in Ref. Schmied (2016).

There exists a wide range of methods that can be used depending on the desired accuracy, the prior knowledge, and the available computational resources. In this work we consider the scaled direct inversion (SDI) Schmied (2016) method, where each entry of the Bloch vector is estimated independently by measuring the corresponding Pauli operator. This is motivated by the fact that r_i = tr[ρ_A σ_i] = r · ê_i, where ê_i is the Cartesian unit vector in the direction i and i ∈ {x, y, z}. These measurements can be done in all existing gate-based quantum computers we are aware of by applying a suitable rotation followed by a measurement in the computational basis.

We can write a temporary Bloch vector r̃ = (⟨σ_x⟩, ⟨σ_y⟩, ⟨σ_z⟩) where all expectations are estimated from samples. Due to finite sampling error, there is a non-zero probability that the vector lies outside the unit sphere, although inside the unit cube. These cases correspond to non-physical states, and SDI corrects them by finding the valid state with minimum distance over all Schatten p-distances. It turns out, this is simply the rescaled vector Schmied (2016)

r = r̃ / ‖r̃‖₂.    (16)
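Putting Eqs. (15) and (16) together, the BEE of the ancilla qubit can be computed from the three estimated Pauli expectations with a few lines of code; the sketch below is our illustration of the procedure.

```python
import numpy as np

def bee_from_pauli_estimates(ex, ey, ez):
    """BEE of a single ancilla qubit from finite-sample estimates of
    <sigma_x>, <sigma_y>, <sigma_z>, combining Eqs. (15) and (16)."""
    r = np.array([ex, ey, ez], dtype=float)
    norm = np.linalg.norm(r)
    if norm > 1.0:          # non-physical estimate: apply the SDI rescaling
        r = r / norm
        norm = 1.0
    lams = np.array([(1 + norm) / 2, (1 - norm) / 2])  # qubit eigenvalues
    lams = lams[lams > 0]   # drop zero eigenvalues to avoid log(0)
    return float(-np.sum(lams * np.log(lams)))

print(bee_from_pauli_estimates(0.0, 0.0, 0.0))  # maximally mixed: ln 2
print(bee_from_pauli_estimates(0.0, 0.0, 1.0))  # pure state: 0.0
```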

The procedure discussed so far allows us to efficiently estimate the BEE in Eq. (15). Equipped with this information, we can now design a heuristic for the stopping criterion.

The reasoning is as follows. Provided that the discriminator circuit has enough connectivity, random initialization of its parameters will likely generate entanglement between main and ancilla registers. In other words, S(ρ_A) is expected to be large at the beginning. As the learning algorithm iterates, the discriminator gets more accurate at distinguishing states. As discussed above, this requires the ancilla qubit to depart from the totally mixed state and S(ρ_A) to decrease. This is when the learning signal for the generator is stronger, allowing the generated state to get closer to the target. As the two become less and less distinguishable with enough iterations, the discriminator needs to increase correlations between the ancilla’s bases and relevant factors in the main register. That is, we expect to observe an increase of entanglement between the two registers, hence an increase in S(ρ_A). The performance of the discriminator would then saturate as S(ρ_A) converges to its upper bound of ln 2. We propose to detect this convergence and use it as a stopping criterion. In Section III we analyze the behavior of BEE via numerical simulations.

II.4 Long-term implementation on universal quantum computers

Let us briefly recall the adversarial circuit learning task. We have two circuits, the generator and the discriminator, and a target state. The target state is prepared with probability p_t, while the generated state is prepared with probability p_g. The discriminator has to successfully distinguish each state or, in other words, it must find the measurement that minimizes the probability of labelling error.

As described earlier, Helstrom Helstrom (1969) observed that the optimal POVM that distinguishes two states has the following particular form: Let E_t and E_g be the POVM elements attaining the minimum in Eq. (2); then both elements are diagonal in a basis that also diagonalizes the Hermitian operator

Γ = p_t ρ_t − p_g ρ_g.    (17)

As pointed out in Ref. Fuchs (1996), in this basis one can construct E_t by specifying its diagonal elements according to the rule

⟨j|E_t|j⟩ = 1 if γ_j > 0,   ⟨j|E_t|j⟩ = 0 otherwise,    (18)

where the γ_j are the diagonal elements of Γ. The operator E_g is then obtained via the relationship E_g = I − E_t. Hence we can construct the optimal measurement operator if we have access to the operator Γ, and provided that we can diagonalize it.

Using the above insight, with ρ_t = |ψ_t⟩⟨ψ_t| and ρ_g = |ψ_g⟩⟨ψ_g|, we can observe that tr[Γ] = p_t − p_g and tr[Γ²] = p_t² + p_g² − 2 p_t p_g |⟨ψ_t|ψ_g⟩|². Under the assumption of equal prior probabilities of 1/2, the above is minimized for a maximum overlap of the two states. Since the prior probabilities are hyperparameters, we can set them to 1/2 and use the swap test Buhrman et al. (2001) to compute the overlap. This procedure effectively implements an optimal discriminator and provides a strong learning signal to the generator.
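For intuition, recall that the swap test accepts (control qubit measured in |0⟩) with probability (1 + |⟨ψ_t|ψ_g⟩|²)/2, so the overlap can be read off from the acceptance statistics. A small NumPy sketch of this relation (state vectors chosen purely for illustration):

```python
import numpy as np

def swap_test_accept_probability(psi, phi):
    """Probability of measuring the control qubit in |0> in a swap test:
    P(0) = (1 + |<psi|phi>|^2) / 2."""
    overlap_sq = np.abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 + overlap_sq)

# Illustrative single-qubit states |0> and |+>.
psi = np.array([1, 0], dtype=complex)
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)
p0 = swap_test_accept_probability(psi, phi)
print(p0)          # 0.75
print(2 * p0 - 1)  # recovered overlap |<psi|phi>|^2 = 0.5
```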

Note, however, that the swap test bears several disadvantages. In order to perform the swap test, we need to access both ρ_t and ρ_g simultaneously. This also requires the use of two n-qubit registers, plus a control qubit, for a total of 2n + 1 qubits, which is significantly more than the n + 1 qubits required in the near-term approach. Finally, the swap test requires the ability to perform non-trivial controlled gates and error correction.

A potential solution is to find an efficient low-depth circuit implementing the swap test. In Ref. Cincio et al. (2018) the authors implemented such a circuit via variational training. As pointed out in their work, this requires (a) a number of training examples that grows exponentially with the number of qubits, and (b) each training example to be given by the actual overlap between two states, requiring a circuit which gives the answer to the problem we are trying to solve. We hence hold the belief that this approach is not suitable for our task. However, other approaches for finding a low-depth circuit for computing the swap test might well be possible.

One could alternatively consider the possibility of implementing a discriminator via distance measurements based on random projections, i.e., Johnson-Lindenstrauss transformations Dasgupta and Gupta (2003). This would require a reduced amount of resources and could be adapted for the adversarial learning task. As an example, we could apply a quantum channel to coherently reduce the dimensionality of the input state and then apply the state discrimination procedure in the lower dimensional space. However, in Ref. Harrow et al. (2011) the authors proved that such an operation cannot be performed by a quantum channel. One way to think about this is that the Johnson-Lindenstrauss transformation is a projection onto a small random subspace and therefore a projective measurement. As the subspace is exponentially smaller than the initial Hilbert space, the probability that this projection preserves the distances is very small.

III Results

We show that adversarial quantum circuit learning can be used to approximate entangled target states. In realistic scenarios, the target state would come from an external channel and would be loaded in the quantum computer’s register with no significant overhead. For the simulations we mock this scenario using circuits to prepare the target states. That is, we have |ψ_t⟩ = T|0⟩ where T is an unknown circuit. We set up a generator circuit G and a discriminator circuit D, and the composition of these circuits is shown in Fig. 1, left panel. We shall stress that neither the generator nor the discriminator is allowed to ‘see’ the inner workings of T at any time.

We are interested in studying the performance of the algorithm as we change the complexity of the circuits. The complexity of our circuits is determined by the number of layers of gates; we denote the number of layers of the target, generator, and discriminator circuits by L_t, L_g, and L_d, respectively. Figure 1, right panel, shows the layer that we used for our circuits. It consists of general two-qubit gates arranged across the n qubits of the circuit. Note that a general two-qubit gate can be efficiently implemented with three CNOT gates and parametrized single-qubit rotations as shown in Ref. Shende et al. (2004).


Figure 1: Left panel: Representation of the adversarial quantum circuits. In our simulations the target state is prepared by a random circuit T. The generator circuit G learns to approximate the target. The discriminator circuit D takes as input unknown n-qubit states and learns to label them as ‘target’ or ‘generated’. This is done via the binary outcome of a projective measurement on a single ancilla qubit. Neither the generator nor the discriminator is allowed to ‘see’ the inner workings of T at any time. Hence, the learning signal for the generator comes solely from the probability of error of the discriminator. Right panel: Layout used as a building block for all the circuits. For an n-qubit circuit the layer consists of general two-qubit unitaries, each of which can be efficiently implemented with three CNOT gates and parametrized single-qubit rotations as in Ref. Shende et al. (2004).

All parameters were initialized uniformly at random. We chose p_t = p_g = 1/2 so that the discriminator is given target and generated states with equal probability. All expected values required to compute gradients were estimated from a finite number of measurements on the ancilla qubit. Unless stated otherwise, optimization was performed using iRprop with fixed initial, minimum, and maximum allowed step sizes.

Figure 2 shows learning curves for simulations on four qubits. The green downward triangles represent mean and one standard deviation of the trace distance between target and generated state, computed over repeated runs. In the left panel, the discriminator has too few layers: its complexity is not sufficient to provide a learning signal for the generator, and the final approximation is indeed not satisfactory. In the central panel, the generator is less complex than the target state, but it manages to produce a meaningful approximation on average. In the right panel, all circuits have the same number of layers: the complexity of all circuits is sufficient, and the generator learns an indistinguishable approximation of the target state.

The trace distance reported here could have been approximately computed using the swap test. However, since we assumed a near-term implementation, we cannot reliably execute the swap test. In Section II.3 we designed an efficient heuristic to keep track of learning and suggested using it as a stopping criterion. To test the idea, we performed additional measurements on the ancilla qubit for each of the observables σ_x, σ_y, and σ_z. The outcomes were used to estimate the BEE using the SDI method. In Fig. 2 the blue upward triangles represent mean and one standard deviation of the BEE, computed over the same repetitions. The left panel shows that when the discriminator circuit is too shallow, BEE oscillates with no clear pattern. The central and right panels show that, when using a favorable setting, the initial BEE drops significantly towards zero. This is when the generator begins to learn the target state. Note that, as the algorithm iterates, the ancilla qubit tends towards the maximally mixed state where S(ρ_A) = ln 2 (gray horizontal line). In this regime, the discriminator predicts the labels with probability equal to the prior 1/2.

Detecting convergence of BEE can be used as a stopping criterion for training. For example, the central and right panels in Fig. 2 show that BEE converged well before the end of the run; stopping the simulation at that point, we obtained excellent results on average. We now show tomographic reconstructions for two cases. First, we examine the case where the generator is under-parametrized. Figure 3, right panel, shows the absolute value of the entries of the density matrix for a four-qubit target state. The randomly initialized generator produced the state shown in the left panel, which is at large trace distance from the target. By stopping the adversarial algorithm at the iteration where BEE converged, we generated the state shown in the central panel, at much smaller trace distance. The generator managed to capture the main mode of the density matrix, that is, the sharp peak visible on the right. Second, we examine the case where the generator is sufficiently parametrized. Figure 4, right panel, shows the absolute value of the entries of the density matrix for the target state. The generator initially produced the state shown in the left panel, again at large trace distance from the target. By stopping the adversarial algorithm at the iteration where BEE converged, we generated the state shown in the central panel, at trace distance close to zero. Visually, the target and final states are indistinguishable.

But how do the complexities of generator and discriminator affect the outcome? To verify this, we ran the adversarial learning on six-qubit target states and varied the number of layers of generator and discriminator. After a fixed number of training iterations, we computed the mean trace distance across five repetitions. As illustrated in Fig. 5, increasing the complexity always resulted in a better approximation to the target state.

In our final test, we compared optimization algorithms on six-qubit target states. We ran GDM and iRprop for the same number of iterations. Figure 6 shows mean and one standard deviation across five repetitions. iRprop (blue downward triangles) outperformed GDM with both of the step sizes tested (green circles and red upward triangles). This is because, despite the small magnitude of the gradients when considering targets of six qubits, we were still able to estimate their sign and take relevant steps in the correct direction. This is a significant advantage of resilient backpropagation algorithms.

Figure 2: Learning curves and stopping criterion for simulations on four-qubit target states. The performance is shown in terms of the trace distance between the target and generated states (green downward triangles), with zero indicating optimal approximation. All lines represent mean and one standard deviation computed over repeated runs. Titles indicate the complexities of target, generator, and discriminator circuits (see main text for details). In the left panel, the discriminator is too simple to provide a learning signal for the generator. In the central panel, the generator is simple, but it can still produce a meaningful approximation of the target state. In the right panel, all circuits are complex enough to learn an indistinguishable approximation of the target state. The trace distance cannot be computed in near-term implementations. The bipartite entanglement entropy (BEE) of the ancilla qubit (blue upward triangles) can be used as an efficient proxy to assess the learning progress. After the initial drop in BEE, the learning signal for the generator is strong and the trace distance decreases sharply. As learning progresses, the ancilla qubit gets closer to the maximally mixed state where S(ρ_A) = ln 2 (gray horizontal line). Detecting the convergence of BEE can be used as a stopping criterion for training.
Figure 3: Absolute value of tomographic reconstructions for a four-qubit target state. The target state is prepared by a random circuit (see main text for details), and the absolute value of its density matrix is shown in the right panel. The two players of the adversarial game are a generator and a discriminator, with the generator shallower than the target circuit. The generator is too simple to learn the target exactly, but can still find a reasonable approximation. The initial generated state, shown in the left panel, is at large trace distance from the target. Using our heuristic we stopped the adversarial learning at the iteration where BEE converged. The final state, shown in the central panel, is much closer in trace distance to the target. The generator managed to capture the main mode of the density matrix, that is, the sharp peak visible on the right.
Figure 4: Absolute values of tomographic reconstructions for a four-qubit target state. The setting is similar to that of Fig. 3, but this time the generator has the same number of layers as the random circuit that prepared the target. The randomly initialized generator produces the state shown in the left panel, which is at large trace distance from the target. Using our heuristic we stopped the adversarial learning at the iteration where BEE converged. The final state, shown in the central panel, is at near-zero trace distance from the target. Visually, the target and final states are indistinguishable.
Figure 5: Quality of the approximation against complexity of circuits for simulations on six-qubit target states. The heat-map shows the mean trace distance over five repetitions of adversarial learning, computed at the final iteration. All standard deviations were small (not shown). The targets were produced by random circuits (see main text for details). Increasing the complexity of discriminator and generator resulted in better approximations to the target state in all cases.
Figure 6: Learning curves for different optimizers in simulations on six-qubit target states. The lines represent mean and one standard deviation of the trace distance computed over five repetitions. All circuits had the same number of layers. iRprop resulted in better performance than gradient descent with momentum (GDM) when using two different step sizes. Increasing the step size further in GDM resulted in unstable performance (not shown).

We now briefly discuss the advantages of our method compared to other quantum machine learning approaches for state approximation. These approaches require quantum resources that go far beyond those currently available. For example, the quantum principal component analysis Lloyd et al. (2014) requires universal fault-tolerant hardware in order to implement the necessary SWAP operations. As another example, the quantum Boltzmann machine Amin et al. (2018); Kieferová and Wiebe (2017) requires the preparation of highly non-trivial thermal states. Moreover, those approaches provide limited control over the level of approximation. In contrast, the adversarial method proposed here is a heuristic scheme with fine control over the level of approximation; this is done by fixing the depth of the circuit, thereby limiting the complexity of the optimization problem. In this way, our method is expected to scale to large input dimensions, although this may require introducing an approximation error. As shown in Figs. 2 and 5, the error is an increasing function of the target’s complexity, and a decreasing function of the generator’s complexity. This feature allows the adversarial approach to be implemented with any available circuit depth on any NISQ device. A circuit-based demonstration of adversarial learning was given in Ref. Hu et al. (2019) after our work. Clearly, a thorough numerical benchmark is needed to compare the scalability of different methods, which we leave for future work.

IV Discussion and Conclusions

In this work we proposed an adversarial algorithm and applied it to learn quantum circuits that can approximately generate and discriminate pure quantum states. We used information theoretic arguments to formalize the problem as a minimax game. The discriminator circuit maximizes the value function in order to better distinguish between the target and generated states. This can be thought of as learning to perform the Helstrom measurement Helstrom (1969). In turn, the generator circuit minimizes the value function in order to deceive the discriminator. This can be thought of as minimizing the trace distance of the generated state to the target state. The desired outcome of this game is to obtain the best approximation to the target state for a given generator circuit layout.

We demonstrated how to perform such a minimax game in near-term quantum devices, i.e., NISQ computers Preskill (2018), and we discussed long-term implementations on universal quantum computers. The near-term implementation has the advantage that it requires fewer qubits and avoids the swap test. The long-term implementation has the advantage that it can make use of the actual Helstrom measurement, with the potential of speeding up the learning process.

Previous work on quantum circuit learning raised the concern of barren plateaus in the error surface McClean et al. (2018). We showed numerically that a class of optimizers called resilient backpropagation Riedmiller and Braun (1993) achieves high performance for the problem at hand, while gradient descent with momentum performs relatively poorly. These resilient optimizers require only the temporal behaviour of the sign of the gradient, and not the magnitude, to perform an update step. In our simulations of up to seven qubits we were able to correctly ascertain the sign of the gradient frequently enough for the optimizer to converge to a good solution. For regions of the error surface where the sign of the gradient cannot be reliably determined, we suggested an alternative optimization method that could traverse such regions. We will explore this idea in future work.

In general it is not clear how to assess the model quality in generative adversarial learning, nor how to find a stopping criterion for the optimization algorithm. For example, in the classical setting of computer vision, it is often the case that generated samples are visually evaluated by humans, i.e., the Turing test, or by a proxy artificial neural network, e.g., the Inception Score Salimans et al. (2016). The quantum setting does not allow for these approaches in a straightforward manner. We therefore designed an efficient heuristic based on an estimate of the entanglement entropy of a single qubit, and numerically showed that convergence of this quantity indicates saturation of the adversarial algorithm. We propose this approach as a stopping criterion for the optimization process. We conjecture that similar ideas could be used for regularization in quantum circuit learning for classification and regression.

We tested the quality of the approximations as a function of the complexity of the generator and discriminator circuits for simulations of up to seven qubits. Our results indicate that investing more resources in the generator and discriminator circuits leads to noticeable improvements. Indeed, an interesting avenue for future work is the study of circuit layouts, i.e., type of gates, and parameter initializations. If prior information about the target state is available, or can be efficiently extracted, we can encode it by using a suitable layout for the generator circuit. For example, in Ref. Liu and Wang (2018) the authors use the Chow-Liu tree to place CNOT gates such that they capture most of the mutual information among variables. Similarly, structured layouts could be used for the discriminator circuit, such as hierarchical Grant et al. (2018) and universal topologies Chen et al. (2018). These choices could reduce the number of parameters to learn and simplify the error surface.

An adversarial learning framework capable of handling mixed states has been recently put forward Lloyd and Weedbrook (2018); Dallaire-Demers and Killoran (2018), but no implementation compatible with near-term computers was provided. In comparison, our framework works well for approximating pure target states and can find application in quantum state tomography on NISQ computers.

In this work we relied on the variational definition of Bayesian probability of error, which assumes the availability of a single copy of the quantum state to discriminate. By assuming the availability of multiple copies, which is in practice the case, one can derive more general adversarial games based on complex information theoretical quantities. These could be variational definitions of the quantum Chernoff bound Audenaert et al. (2007), the Umegaki relative information, and other measures of distinguishability Fuchs (1996).

V Acknowledgements

The authors want to thank Ashley Montanaro for helpful discussions on random projections and for pointing out reference Harrow et al. (2011). M.B. is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) and by Cambridge Quantum Computing Limited (CQCL). E.G. is supported by EPSRC [EP/P510270/1]. L.W. is supported by the Royal Society. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. S.S. is supported by the Royal Society, EPSRC, the National Natural Science Foundation of China, and the grant ARO-MURI W911NF17-1-0304 (US DOD, UK MOD and UK EPSRC under the Multidisciplinary University Research Initiative).

References

  • The Caltech Archives (1988) The Caltech Archives, “Richard Feynman’s blackboard at time of his death,” http://archives-dc.library.caltech.edu/islandora/object/ct1%3A483 (1988), Accessed on 2018-05-01.
  • Lloyd et al. (2014) Seth Lloyd, Masoud Mohseni,  and Patrick Rebentrost, “Quantum principal component analysis,” Nature Physics 10, 631 (2014).
  • Amin et al. (2018) Mohammad H Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy,  and Roger Melko, “Quantum Boltzmann machine,” Physical Review X 8, 021050 (2018).
  • Kieferová and Wiebe (2017) Mária Kieferová and Nathan Wiebe, “Tomography and generative training with quantum Boltzmann machines,” Physical Review A 96, 062327 (2017).
  • Aaronson (2007) Scott Aaronson, “The learnability of quantum states,” in Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, Vol. 463 (The Royal Society, 2007) pp. 3089–3114.
  • Rocchetto et al. (2017) Andrea Rocchetto, Scott Aaronson, Simone Severini, Gonzalo Carvacho, Davide Poderini, Iris Agresti, Marco Bentivegna,  and Fabio Sciarrino, “Experimental learning of quantum states,” arXiv preprint arXiv:1712.00127  (2017).
  • Cheng et al. (2018) Song Cheng, Jing Chen,  and Lei Wang, “Information perspective to probabilistic modeling: Boltzmann machines versus born machines,” Entropy 20, 583 (2018).
  • Benedetti et al. (2018) Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam,  and Alejandro Perdomo-Ortiz, “A generative modeling approach for benchmarking and training shallow quantum circuits,” arXiv preprint arXiv:1801.07686  (2018).
  • Perdomo-Ortiz et al. (2018) Alejandro Perdomo-Ortiz, Marcello Benedetti, John Realpe-Gómez,  and Rupak Biswas, “Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers,” Quantum Science and Technology 3, 030502 (2018).
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville,  and Yoshua Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27 (2014) pp. 2672–2680.
  • Dallaire-Demers and Killoran (2018) Pierre-Luc Dallaire-Demers and Nathan Killoran, “Quantum generative adversarial networks,” Physical Review A 98, 012324 (2018).
  • Lloyd and Weedbrook (2018) Seth Lloyd and Christian Weedbrook, “Quantum generative adversarial learning,” Physical Review Letters 121, 040502 (2018).
  • Helstrom (1969) Carl W Helstrom, “Quantum detection and estimation theory,” Journal of Statistical Physics 1, 231–252 (1969).
  • Peruzzo et al. (2014) Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik,  and Jeremy L O’Brien, “A variational eigenvalue solver on a photonic quantum processor,” Nature Communications 5, 4213 (2014).
  • Farhi et al. (2014) Edward Farhi, Jeffrey Goldstone,  and Sam Gutmann, “A quantum approximate optimization algorithm,” arXiv preprint arXiv:1411.4028  (2014).
  • Verdon et al. (2017) Guillaume Verdon, Michael Broughton,  and Jacob Biamonte, “A quantum algorithm to train neural networks using low-depth circuits,” arXiv preprint arXiv:1712.05304  (2017).
  • Riedmiller and Braun (1993) Martin Riedmiller and Heinrich Braun, “A direct adaptive method for faster backpropagation learning: The rprop algorithm,” in Neural Networks, 1993., IEEE International Conference on (IEEE, 1993) pp. 586–591.
  • Preskill (2018) John Preskill, “Quantum Computing in the NISQ era and beyond,” Quantum 2, 79 (2018).
  • Fuchs (1996) Christopher A Fuchs, “Distinguishability and accessible information in quantum theory,” arXiv preprint quant-ph/9601020  (1996).
  • Nielsen and Chuang (2011) Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition, 10th ed. (Cambridge University Press, New York, NY, USA, 2011).
  • Audenaert et al. (2007) Koenraad MR Audenaert, John Calsamiglia, Ramón Muñoz-Tapia, Emilio Bagan, Ll Masanes, Antonio Acin,  and Frank Verstraete, “Discriminating states: The quantum Chernoff bound,” Physical review letters 98, 160501 (2007).
  • Neumark (1940) M Neumark, “Spectral functions of a symmetric operator,” Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya 4, 277–318 (1940).
  • Mitarai et al. (2018) Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa,  and Keisuke Fujii, “Quantum circuit learning,” Physical Review A 98, 032309 (2018).
  • Liu and Wang (2018) Jin-Guo Liu and Lei Wang, “Differentiable learning of quantum circuit born machines,” Physical Review A 98, 062324 (2018).
  • McClean et al. (2018) Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush,  and Hartmut Neven, “Barren plateaus in quantum neural network training landscapes,” Nature communications 9, 4812 (2018).
  • Grant et al. (2018) Edward Grant, Marcello Benedetti, Shuxiang Cao, Andrew Hallam, Joshua Lockhart, Vid Stojevic, Andrew G Green,  and Simone Severini, “Hierarchical quantum classifiers,” npj Quantum Information 4, 65 (2018).
  • Chen et al. (2018) Hongxiang Chen, Leonard Wossnig, Simone Severini, Hartmut Neven,  and Masoud Mohseni, “Universal discriminative quantum neural networks,” arXiv preprint arXiv:1805.08654  (2018).
  • Igel and Hüsken (2000) Christian Igel and Michael Hüsken, “Improving the Rprop learning algorithm,” in Proceedings of the second international ICSC symposium on neural computation (NC 2000), Vol. 2000 (Citeseer, 2000) pp. 115–121.
  • Theis et al. (2016) L Theis, A van den Oord,  and M Bethge, “A note on the evaluation of generative models,” in International Conference on Learning Representations (ICLR 2016) (2016) pp. 1–10.
  • Salimans et al. (2016) Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen,  and Xi Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems 29 (2016) pp. 2234–2242.
  • Bennett et al. (1997) Charles H Bennett, Ethan Bernstein, Gilles Brassard,  and Umesh Vazirani, “Strengths and weaknesses of quantum computing,” SIAM journal on Computing 26, 1510–1523 (1997).
  • Schmied (2016) Roman Schmied, “Quantum state tomography of a single qubit: comparison of methods,” Journal of Modern Optics 63, 1744–1758 (2016).
  • Buhrman et al. (2001) Harry Buhrman, Richard Cleve, John Watrous,  and Ronald De Wolf, “Quantum fingerprinting,” Physical Review Letters 87, 167902 (2001).
  • Cincio et al. (2018) Lukasz Cincio, Yiğit Subaşı, Andrew T Sornborger,  and Patrick J Coles, “Learning the quantum algorithm for state overlap,” New Journal of Physics 20, 113022 (2018).
  • Dasgupta and Gupta (2003) Sanjoy Dasgupta and Anupam Gupta, “An elementary proof of a theorem of Johnson and Lindenstrauss,” Random Structures & Algorithms 22, 60–65 (2003).
  • Harrow et al. (2011) Aram W Harrow, Ashley Montanaro,  and Anthony J Short, “Limitations on quantum dimensionality reduction,” in International Colloquium on Automata, Languages, and Programming (Springer, 2011) pp. 86–97.
  • Shende et al. (2004) Vivek V Shende, Igor L Markov,  and Stephen S Bullock, “Minimal universal two-qubit controlled-not-based circuits,” Physical Review A 69, 062321 (2004).
  • Hu et al. (2019) Ling Hu, Shu-Hao Wu, Weizhou Cai, Yuwei Ma, Xianghao Mu, Yuan Xu, Haiyan Wang, Yipu Song, Dong-Ling Deng, Chang-Ling Zou, et al., “Quantum generative adversarial learning in a superconducting quantum circuit,” Science Advances 5, eaav2761 (2019).