Quantum-assisted associative adversarial network: Applying quantum annealing in deep learning

04/23/2019, by Max Wilson, et al.

We present an algorithm for learning a latent variable generative model via generative adversarial learning where the canonical uniform noise input is replaced by samples from a graphical model. This graphical model is learned by a Boltzmann machine which learns a low-dimensional feature representation of the data extracted by the discriminator. A quantum annealer, the D-Wave 2000Q, is used to sample from this model. This algorithm joins a growing family of algorithms that use a quantum annealing subroutine in deep learning, and provides a framework to test the advantages of quantum-assisted learning in GANs. Fully connected, symmetric bipartite and Chimera graph topologies are compared on a reduced stochastically binarized MNIST dataset, for both classical and quantum annealing sampling methods. The quantum-assisted associative adversarial network successfully learns a generative model of the MNIST dataset for all topologies, and is also applied to the LSUN dataset bedrooms class for the Chimera topology. Evaluated using the Fréchet inception distance and inception score, the quantum and classical versions of the algorithm are found to have equivalent performance for learning an implicit generative model of the MNIST dataset.


I Introduction

The ability to efficiently and accurately model a dataset, even without full knowledge of why a model is the way it is, is a valuable tool for understanding complex systems. Machine learning (ML), the field of data analysis algorithms that create models of data, is experiencing a renaissance due to the availability of data, increased computational resources and algorithm innovations, notably in deep neural networks royalsoc2017ml ; silver2016mastering . Of particular interest are unsupervised algorithms that train generative models. These models are useful because they can be used to generate new examples representative of a dataset. A generative adversarial network (GAN) is an algorithm which trains a latent variable generative model, with a range of applications including image and signal synthesis, classification and image super-resolution. The algorithm has been demonstrated in a range of architectures, now well over 300 types and applications in the GAN zoo isola2017image ; radford2015unsupervised ; ledig2016photo . Two problems in GAN learning are non-convergence, where the parameters of the model oscillate and are unstable, and mode collapse, where the generator provides only a small variety of possible samples. These problems have been addressed in existing work, including energy-based GANs zhao2016energy and the Wasserstein GAN arjovsky2017wasserstein ; gulrajani2017improved . Another proposed solution replaces the canonical uniform noise prior of a GAN with a prior distribution modelling a low-dimensional feature representation of the dataset. Using this informed prior may ease the learning task of the generative network, decrease mode collapse and encourage convergence arici2016associative .

This feature distribution is a rich, low-dimensional representation of the dataset extracted by the discriminator in a GAN. A generative probabilistic graphical model can learn this feature distribution. However, given the intractability of calculating the exact distribution of the model, classical techniques often use approximate methods, such as contrastive divergence, to train and sample from models with restricted topologies. Quantum annealing, a quantum optimisation algorithm, has been shown to sample from a Boltzmann-like distribution on near-term hardware

benedetti2017quantum ; amin2016quantum , which can be used in the training of these types of models. In the future, quantum annealing may decrease the cost of this training by decreasing the computation time biamonte2017quantum or energy usage ciliberto2018quantum , or improve performance, as quantum models kappen2018learning may better represent some datasets. Here, we demonstrate the quantum-assisted associative adversarial network (QAAAN) algorithm, Figure 1, a hybrid quantum-assisted GAN in which a Boltzmann machine (BM), trained using samples from a quantum annealer, learns a model of a low-dimensional feature distribution of the dataset to serve as the prior to the generator. The model learned by the algorithm is a latent variable implicit generative model $p(x|z)$ with an informed prior $p(z)$, where $z$ are latent variables and $x$ are data space variables. The prior contains useful information about the features of the data distribution, information that then does not need to be learned by the generator. Put another way, the prior is a model of the feature distribution containing the latent variable modes of the dataset.

Contributions

The core contribution of this work is the development of a scalable quantum-assisted GAN which trains an implicit latent variable generative model. This algorithm fulfills the criteria for including near-term quantum annealing hardware in deep learning frameworks that learn continuous variable datasets: it is resistant to noise, uses a small number of variables, and operates in a hybrid architecture. Additionally, in this work we explore different topologies for the latent space model. The purpose of the work is to

  • compare different topologies to appropriately choose a graphical model, restricted by the connectivity of the quantum hardware, to integrate with the deep learning framework,

  • design a framework for using sampling from a quantum annealer in generative adversarial networks, which may lead to architectures that encourage convergence and decrease mode collapse.

Outline

First, there is a short section on the background of GANs, quantum annealing and Boltzmann machines. In Section III an algorithm is developed to learn a latent variable generative model using samples from a quantum annealer to replace the canonical uniform noise input. We explore different models, specifically complete, symmetric bipartite and Chimera topologies, tested on a reduced stochastically binarized version of MNIST, for use in the latent space. In Section IV the results are detailed, including application of the QAAAN and a classical version of the algorithm to the MNIST dataset. The architectures are evaluated using the inception score and the Fréchet inception distance. The algorithm is also implemented on the LSUN bedrooms dataset using classical sampling methods, demonstrating its scalability.

Figure 1: The inputs to the generator network are samples from a Boltzmann distribution. A Boltzmann machine (BM) trains a model of the feature space for the latent space of the generator network, indicated by 'Learning' in the figure. Samples from the quantum annealer, the D-Wave 2000Q, are used in the training process for the BM, and replace the canonical uniform noise input to the generator network. These discrete variables are reparametrised to continuous variables before being processed by transposed convolutional layers. Generated and real data are passed into the convolutional layers of the discriminator, which extracts a low-dimensional representation of the data. The BM learns a model of this representation. An example flow of information through the network is highlighted in green. In the classical version of this algorithm, MCMC sampling is used to sample from the discrete latent space; otherwise the architectures are identical.

II Background

Generative Adversarial Networks

Implicit generative models are those which specify a stochastic procedure with which to generate data. In the case of a GAN, the generative network maps latent variables z to images which are likely under the real data distribution; for example $\tilde{x} = G(z)$, where $G$ is the function represented by a neural network, $\tilde{x}$ is the resulting image with $z \sim p(z)$, and $p(z)$ is typically the uniform distribution between 0 and 1, $U(0,1)$. Training a GAN can be formulated as a minimax game in which the discriminator attempts to maximise the cross-entropy of a classifier that the generator is trying to minimise. The cost function of this minimax game is

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right] \quad (1)$$

where $\mathbb{E}_{x \sim p_{\mathrm{data}}}$ is the expectation over the distribution of the dataset, $\mathbb{E}_{z \sim p(z)}$ is the expectation over the latent variable distribution, $D$ and $G$ are functions instantiated by a discriminative and a generative neural network, respectively, and we are trying to find $\min_G \max_D V(D, G)$. The model learned is a latent variable generative model $p(x|z)$. The first term in Equation 1 is the log-probability of the discriminator predicting that the real data is genuine and the second the log-probability of it predicting that the generated data is fake. In practice, ML engineers instead use a heuristic: maximising the likelihood that the generator network produces data that trick the discriminator, rather than minimising the probability that the discriminator labels them as fake. This has the effect of stronger gradients earlier in training

goodfellow2014generative . GANs are lauded for many reasons: the algorithm is unsupervised; the adversarial training does not require direct replication of the real dataset, resulting in samples that are sharp wang2017generative

; and it is possible to perform the weight updates through efficient backpropagation and stochastic gradient descent. There are also several known disadvantages. Primarily, the learned distribution is implicit. It is not straightforward to compute the distribution of the training set

mohamed2016learning , unlike explicit, or prescribed, generative models, which provide a parametric specification of the distribution and so specify a log-likelihood that some observed variable x is from that distribution. This means that simple GAN implementations are limited to generation. Further, as outlined in the introduction, the training is prone to non-convergence barnett2018convergence and mode collapse thanh2018catastrophic . The stability of GAN training is an issue and there are many hacks to encourage convergence, discourage mode collapse and increase sample diversity, including using a spherical input space white2016sampling , adding noise to the real and generated samples arjovsky2017wasserstein , and minibatch discrimination salimans2016improved . We hypothesise that using an informed prior will decrease mode collapse and encourage convergence.
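As an illustration of the objectives discussed above, the sketch below computes the discriminator loss of Equation 1 and the non-saturating generator loss for a batch of discriminator outputs. It is a minimal NumPy example; the function names and the use of raw probabilities rather than logits are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """Cross-entropy the discriminator maximises (Equation 1):
    log D(x) for real data plus log(1 - D(G(z))) for generated data."""
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    """Non-saturating heuristic: maximise log D(G(z)) instead of
    minimising log(1 - D(G(z))), giving stronger early gradients."""
    return -np.mean(np.log(d_fake + eps))

# Toy usage: discriminator outputs are probabilities in (0, 1).
d_real = np.array([0.9, 0.8, 0.95])   # D(x) on real samples
d_fake = np.array([0.1, 0.3, 0.2])    # D(G(z)) on generated samples
print(discriminator_loss(d_real, d_fake), generator_loss(d_fake))
```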

Figure 2: Bedrooms from the LSUN dataset generated with an associative adversarial network, with a fully connected latent space sampled via MCMC sampling.

Boltzmann Machines & Quantum Annealing

Figure 3: (a) Complete (b) Chimera (c) symmetric bipartite graphical models. These graphical models are embedded into the hardware and the nodes in these graphs are not necessarily representative of the embeddings.

A BM is an energy-based graphical model composed of stochastic nodes, with weighted connections between the nodes and biases applied to them. The energy of the network corresponds to the energy function applied to the state of the system. BMs can represent multimodal and intractable distributions le2008representational , and the internal representation of the BM, the weights and biases, can learn a generative model of a distribution ackley1987learning . A graph $G = (V, E)$ with cardinality $|V| = N$, describing a Boltzmann machine with model parameters $\{w, b\}$ over logical variables $z_i$, $i \in V$, connected by edges $(i, j) \in E$, has energy

$$E(z) = -\sum_{(i,j) \in E} w_{ij} z_i z_j - \sum_{i \in V} b_i z_i \quad (2)$$

where weight $w_{ij}$ is assigned to the edge connecting variables $z_i$ and $z_j$, bias $b_i$ is assigned to variable $z_i$, and the possible states of the variables are $z_i \in \{0, 1\}$, corresponding to 'off' and 'on', respectively. We refer to this graph as the logical graph. The distribution of the states z is

$$p(z) = \frac{e^{-\beta E(z)}}{\sum_{z'} e^{-\beta E(z')}} \quad (3)$$

with $\beta$ a parameter recognised by physicists as the inverse temperature in the function defining the Boltzmann distribution. BM training requires sampling from the distribution represented by the energy function. For fully connected variants it is intractable to calculate the probability of a state occurring exactly koller2007graphical and computationally expensive to approximate. Exact inference on complete graph BMs is generally intractable, and approximate methods such as Gibbs sampling are slow. In practice, applications often use deep stacked restricted Boltzmann machine (RBM) architectures, which can be trained efficiently with approximate methods. An RBM is a symmetric bipartite BM. It is possible to efficiently learn the distribution of some input data spaces through approximate methods, notably contrastive divergence carreira2005contrastive . Stacked RBMs form a deep belief network (DBN) and can be greedily trained to learn generative models of datasets with higher-level features, with applications in a wide range of fields from image recognition to finance deng2014deep . Training these types of models requires sampling from the Boltzmann distribution. Quantum annealing (QA) has been proposed as a method for sampling from complex Boltzmann-like distributions. It is an optimisation algorithm exploiting quantum phenomena to find the ground state of a cost function. QA has been demonstrated for a range of optimisation problems biswas2017nasa ; however, defining and detecting speedup, especially in small and noisy hardware implementations, is challenging ronnow2014defining ; katzgraber2014glassy . QA has been proposed, and in some cases demonstrated, as a sampling subroutine in ML algorithms: a quantum Boltzmann machine amin2016quantum ; training a quantum variational autoencoder (QVAE) khoshaman2018quantum ; a quantum-assisted Helmholtz machine benedetti2018quantumqahm ; deep belief nets of stacked RBMs adachi2015application . In order to achieve this, the framework outlined in Equation 2 can be mapped to an Ising model for a quantum system represented by the Hamiltonian

$$H_P = -\sum_{(i,j) \in E} J_{ij} \sigma_i^z \sigma_j^z - \sum_{i \in V} h_i \sigma_i^z \quad (4)$$

where now the variables have been replaced by the Pauli-$z$ operators $\sigma_i^z$, which return eigenvalues in the set $\{+1, -1\}$ when applied to the state of variable $i$, physically corresponding to spin-up and spin-down, respectively. The parameters $w_{ij}$ and $b_i$ are replaced with the Ising model parameters $J_{ij}$ and $h_i$, which are conceptually equivalent. In the hardware, these parameters are referred to as the coupling strength and the flux bias, respectively. The full Hamiltonian describing the dynamics of the D-Wave 2000Q, equivalent to the time-dependent transverse field Ising model, is

$$H(t) = A(t) H_T + B(t) H_P \quad (5)$$

The transverse field term is

$$H_T = -\sum_{i} \sigma_i^x \quad (6)$$

where $\sigma_i^x$ are the Pauli-$x$ operators acting on the Hilbert space of qubit $i$. $A(t)$ and $B(t)$ are monotonic functions defined over the total annealing time $t_a$ biswas2017nasa . Generally, at the start of an anneal, $A(0) \gg B(0)$ and $B(0) \approx 0$; $A(t)$ decreases and $B(t)$ increases monotonically with $t$ until, at the end of the anneal, $A(t_a) \approx 0$ and $B(t_a) \gg A(t_a)$. When $A(t) \neq 0$, the Hamiltonian contains terms that are not possible in the classical Ising model, that is, states that are normalised linear combinations of classical states. This Hamiltonian was embedded in the D-Wave 2000Q, a system with 2048 qubits, each with degree 6. Embedding is the process of mapping the logical graph, represented by Equation 4, to hardware. If the logical graph has a degree or structure that is not native to the hardware, it can still be embedded via a one-to-many mapping, in which one logical variable is represented by more than one qubit. These qubits are arranged in a 'chain' (this term is used even when the set of qubits forms a small tree). A chain is formed by setting the coupling strength between these qubits to a value strong enough to encourage them to take a single value by the end of the anneal, but not so strong that it overwhelms the $h_i$ and $J_{ij}$ in the original problem Hamiltonian or has a detrimental effect on the dynamics; there is a sweet spot for this value. In our case, we used the maximum value available on the D-Wave 2000Q, namely $-1$. At the end of the anneal, to determine the value of a logical variable expressed as a qubit chain in the hardware, a majority vote is performed: the logical variable takes the value corresponding to the state of the majority of qubits, and if there is no majority a coin is flipped to determine the value. Each state found after an anneal comes from a distribution, though it is not clear what distribution the quantum annealer is sampling from. For example, in problem instances with a well defined freeze-out region, the distribution is hypothesised to follow a quantum Boltzmann distribution up to the freeze-out region, where the dynamics of the system slow down and diverge amin2015searching . If the freeze-out region is narrow, then the distribution can be modelled as the classical distribution of the problem Hamiltonian $H_P$, at $t = t_a$, at a higher, unknown, effective temperature,

$$\rho = \frac{e^{-\beta_{\mathrm{eff}} H_P}}{\mathrm{Tr}\left[e^{-\beta_{\mathrm{eff}} H_P}\right]} \quad (7)$$

where $\beta_{\mathrm{eff}}$ is the effective inverse temperature and the matrix exponential is used. In the case where the Hamiltonian contains no off-diagonal terms, Equation 7 is equivalent to the classical Boltzmann distribution, Equation 3, at some temperature. $\beta_{\mathrm{eff}}$ is a dimensionless parameter which depends on the temperature of the system, the energy scale of the superconducting flux qubits and open system quantum dynamics. However, it is an open question as to when the freeze-out hypothesis holds.
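To make the classical limit of Equation 7 concrete, the sketch below enumerates the states of a small classical Ising problem and computes their Boltzmann probabilities at an assumed effective inverse temperature, together with the majority-vote decoding of a qubit chain described above. The names and the tiny problem instance are illustrative assumptions, not the authors' code.

```python
import itertools
import numpy as np

def ising_energy(s, h, J):
    """Classical Ising energy E(s) = -sum J_ij s_i s_j - sum h_i s_i, with s_i in {-1, +1}."""
    e = -sum(h[i] * s[i] for i in h)
    e -= sum(J[(i, j)] * s[i] * s[j] for (i, j) in J)
    return e

def boltzmann_distribution(h, J, beta_eff=1.0):
    """Exact probabilities p(s) proportional to exp(-beta_eff * E(s)) for a small problem
    (Equations 3 and 7 in the classical, diagonal limit)."""
    variables = sorted(h)
    states = list(itertools.product([-1, 1], repeat=len(variables)))
    energies = np.array([ising_energy(dict(zip(variables, s)), h, J) for s in states])
    weights = np.exp(-beta_eff * energies)
    return states, weights / weights.sum()

def majority_vote(chain_values):
    """Decode a logical variable from a chain of qubit readouts; break ties with a coin flip."""
    total = sum(chain_values)
    return int(np.sign(total)) if total != 0 else int(np.random.choice([-1, 1]))

# Toy 3-variable problem.
h = {0: 0.1, 1: -0.2, 2: 0.0}
J = {(0, 1): -1.0, (1, 2): 0.5}
states, probs = boltzmann_distribution(h, J, beta_eff=2.0)
print(states[np.argmax(probs)], probs.max())
print(majority_vote([+1, +1, -1]))
```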

1: for the required number of training epochs do
2:     Sample m Boltzmann distribution samples {s} from ρ using the quantum annealer
3:     Sample n examples z_d from {s} and map to the logical space
4:     Sample n examples z_ρ from {s}
5:     Sample n examples z_g from {s} and map to the logical space
6:     Sample n training data examples x from the dataset
7:     Generate x̃_d = G(z_d)
8:     Update the weights of the discriminator D via SGD with x and x̃_d
9:     Generate feature samples f = F(x) from the interim layer of the discriminator
10:    Update the weights of the BM via SGD with f and z_ρ
11:    Generate x̃_g = G(z_g)
12:    Update the weights of the generator G via SGD with D(x̃_g)
13: return Network
Algorithm I Quantum-assisted associative adversarial network training.
Figure 4: QAAAN training algorithm. ρ represents the distribution given by the quantum annealer from sampling; {s} ~ ρ therefore represents sampling a set of vectors {s} from distribution ρ. Steps 3-5 are indicative of the real-world implementation of these devices: in order to reduce sampling time we sampled from the device once and used this set for different tasks, namely z_d for generating samples to train the discriminator, z_ρ for training the BM, and z_g for generating samples to train the generator. Further details on mapping samples from the quantum annealer to the logical space can be found in Section III. x is the MNIST dataset. Steps 8 and 12 are typical of a GAN implementation and are the weight updates of the discriminator and generator networks, respectively.
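A high-level Python skeleton of this training loop is sketched below. The sampler, network and update functions are placeholder interfaces we introduce for illustration (for example sample_annealer and discriminator.update); a real implementation would depend on the D-Wave API and the deep learning framework used.

```python
import numpy as np

def to_logical(samples):
    """Placeholder: map embedded chain readouts back to logical variables (e.g. majority vote)."""
    return samples

def train_qaaan(sample_annealer, generator, discriminator, boltzmann_machine,
                dataset, epochs, m, n):
    """Structural sketch of Algorithm I. All callables are assumed interfaces:
    sample_annealer(m) returns an (m, N) array of spin vectors drawn from the latent model rho."""
    for epoch in range(epochs):
        s = sample_annealer(m)                        # step 2: one batch of annealer samples
        z_d = to_logical(s[np.random.choice(m, n)])   # step 3: subset for discriminator training
        z_rho = s[np.random.choice(m, n)]             # step 4: subset for BM training
        z_g = to_logical(s[np.random.choice(m, n)])   # step 5: subset for generator training
        x = dataset.sample(n)                         # step 6: real training data

        x_fake = generator.forward(z_d)               # step 7
        discriminator.update(x, x_fake)               # step 8: SGD on Equation 1

        f = discriminator.features(x)                 # step 9: interim-layer features
        boltzmann_machine.update(f, z_rho)            # step 10: SGD on Equations 10 and 11

        x_fake_g = generator.forward(z_g)             # step 11
        generator.update(discriminator.forward(x_fake_g))  # step 12
    return generator, discriminator, boltzmann_machine
```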

Other implementations of training graphical models have accounted for this instance-dependent effective temperature benedetti2016estimation . In this work, to get around the problem of the unknown effective temperature when training a probabilistic graphical model, we use the gray-box model approach proposed in benedetti2017quantum . In this approach, full knowledge of the effective parameters, which depend on $\beta_{\mathrm{eff}}$, is not needed to perform the weight updates, as long as the projection of the estimated gradient onto the true gradient is positive. The gray-box approach ties the model generated to the specific device used to train it, though it is robust to noise and does not require estimating $\beta_{\mathrm{eff}}$ raymond2016global for the purposes of Equations 10 and 11. We find that under this approach performance remains good enough for deep learning applications. Though we do not have full knowledge of the distribution the quantum annealer samples from, we model it as a classical Boltzmann distribution at an unknown temperature. This allows us to train models without having to estimate the temperature of the system, providing a simple approach to integrating probabilistic graphical models into deep learning frameworks.

III Quantum-assisted associative adversarial network

In this section, the QAAAN algorithm is outlined, including a novel way to learn the feature distribution generated by the discriminator network via a BM trained using samples from a quantum annealer. The QAAAN architecture is similar to the classical associative adversarial network proposed in Ref. arici2016associative ; as such, the minimax game played by the QAAAN is

$$\min_{G} \max_{D, \rho} V(D, G, \rho) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim \rho(z)}\left[\log\left(1 - D(G(z))\right)\right] + \mathbb{E}_{f \sim p_F(f)}\left[\log \rho(f)\right] \quad (8)$$

where the aim is now to find $\min_G \max_{D,\rho} V(D, G, \rho)$, with terms equivalent to those of Equation 1 plus an additional term describing the optimisation of the model $\rho$, Equation 7. This term conceptually represents the probability that samples generated by the model $\rho$ are from the feature distribution $p_F$. $p_F$ is the feature distribution extracted from an interim layer of the discriminator. This distribution is assumed to be Boltzmann, a common technique for modelling a complex distribution. The algorithm used for training $\rho$, a probabilistic graphical model, is a BM. Sampling from the quantum annealer, the D-Wave 2000Q, replaces a classical sampling subroutine in the BM. $\rho$ is used in the latent space of the generator, Figure 1, and samples from this model, also generated by the quantum annealer, replace the canonical uniform noise input to the generator network. Samples from $\rho$ are restricted to discrete values, as the measured values of the qubits are $\pm 1$. These discrete variables are reparametrised to continuous variables before being processed by the layers of the generator network, producing 'generated' data. Generated and real data are then passed into the layers of the discriminator, which extracts the low-dimensional feature distribution $p_F$. This is akin to a variational autoencoder, where an approximate posterior maps the evidence distribution to latent variables which capture features of the distribution doersch2016tutorial . The algorithm for training the complete network is detailed in Algorithm I. Below, we outline the details of the BM training in the latent space, the reparametrisation of discrete variables, and the networks used in this investigation. Additionally, we detail an experiment to distinguish the performance of three different topologies of probabilistic graphical models to be used in the latent space.

Latent space

As in Figure 1, samples from an intermediate layer of the discriminator network are used to train a model for the latent space of the generator network. Here, a BM trains this model. The cost function of this BM is the quantum relative entropy

$$S(\rho_F \| \rho) = \mathrm{Tr}\left[\rho_F \log \rho_F\right] - \mathrm{Tr}\left[\rho_F \log \rho\right] \quad (9)$$

which is equivalent to the classical Kullback-Leibler divergence when all off-diagonal elements of $\rho_F$ and $\rho$ are zero. This metric measures the divergence of distribution $\rho$ from $\rho_F$, where $\rho_F$ is the target distribution of features extracted by the discriminator network and $\rho$ is the model trained by the BM, from Equation 8. Though the distributions used here are modelled classically, this framework can be extended to quantum models using the quantum relative entropy. Given this, it can be shown that the updates to the weights and biases of the model are

$$\Delta w_{ij} = \eta \beta_{\mathrm{eff}} \left( \langle z_i z_j \rangle_{\rho_F} - \langle z_i z_j \rangle_{\rho} \right) \quad (10)$$
$$\Delta b_i = \eta \beta_{\mathrm{eff}} \left( \langle z_i \rangle_{\rho_F} - \langle z_i \rangle_{\rho} \right) \quad (11)$$

where $\eta$ is the learning rate, $\beta_{\mathrm{eff}}$ is an unknown parameter, and $\langle \cdot \rangle_{\rho}$ is the expectation value in distribution $\rho$. The $z_i$ are the logical variables of the graphical model, and the expectation values are estimated by averaging 1000 samples from the quantum annealer. The quantum relative entropy is minimised by stochastic gradient descent.
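The weight and bias updates of Equations 10 and 11 can be estimated directly from two sets of spin samples, one drawn from the discriminator features and one from the annealer. The sketch below is a minimal NumPy version under the gray-box assumption that the unknown $\beta_{\mathrm{eff}}$ is absorbed into the learning rate; the function names are ours.

```python
import numpy as np

def bm_gradient_step(w, b, data_samples, model_samples, lr=0.05):
    """One SGD step on Equations 10 and 11.

    data_samples:  (n, N) array of {-1, +1} feature vectors from the discriminator.
    model_samples: (m, N) array of {-1, +1} samples from the annealer (or MCMC).
    The unknown beta_eff is folded into lr, as in the gray-box approach."""
    data_corr = data_samples.T @ data_samples / len(data_samples)      # <z_i z_j> under rho_F
    model_corr = model_samples.T @ model_samples / len(model_samples)  # <z_i z_j> under rho
    w += lr * (data_corr - model_corr)
    np.fill_diagonal(w, 0.0)                       # no self-couplings in the logical graph
    b += lr * (data_samples.mean(axis=0) - model_samples.mean(axis=0))
    return w, b

# Toy usage with random spin samples for 5 logical variables.
rng = np.random.default_rng(0)
w, b = np.zeros((5, 5)), np.zeros(5)
data = rng.choice([-1, 1], size=(1000, 5))
model = rng.choice([-1, 1], size=(1000, 5))
w, b = bm_gradient_step(w, b, data, model)
```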

Topologies

We explored three different topologies of probabilistic graphical models, complete, symmetric bipartite and Chimera, for the latent space. Their performance on learning a model of a reduced, stochastically binarized version of MNIST was compared for both classical sampling, Figure 9, and sampling via quantum annealing, Figure 8. The complete topology is self-explanatory, Figure 3(a); restricted refers to a symmetric bipartite graph, Figure 3(c); and the sparse topology is the graph native to the D-Wave 2000Q, the Chimera graph, where the connectivity of the model is determined by the available connections on the hardware, Figure 3(b). The models were trained by minimising the quantum relative entropy, Equation 9, and evaluated with the Kullback-Leibler divergence,

$$D_{\mathrm{KL}}(\rho_F \| \rho) = \sum_{z} \rho_F(z) \log \frac{\rho_F(z)}{\rho(z)} \quad (12)$$

The algorithm did not include temperature estimation or methods to adjust intra-chain coupling strengths for the embedding, as in benedetti2016estimation and benedetti2017quantum , respectively. The method used here compares the different topologies, though for best performance one would want to account for the embedding and adjust algorithm parameters, such as the learning rate, for each topology. In addition to these requirements, there are several non-functioning, 'dead', qubits and couplers in the hardware. These qubits and couplers were avoided in all embeddings, which had a negligible effect on the final performance. The complete topology embedding was found using a heuristic embedder. A better choice would be a deterministic embedder, resulting in shorter chain lengths, though when adjusting for the dead qubits the symmetries are broken and the embedded chain lengths increase to be comparable to those returned by the heuristic embedder. The restricted topology was implemented using the method detailed by Adachi and Henderson adachi2015application . The Chimera topology was implemented on a 2x2 grid of unit cells, avoiding dead qubits. Learning was run over 5 different embeddings for each topology and the results averaged. For topologies requiring chains of qubits, the couplers in the chains were set to -1.
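As a sketch of how the three logical topologies can be constructed and heuristically embedded onto Chimera hardware, the snippet below uses the open-source networkx, dwave_networkx and minorminer packages. It only illustrates graph construction and embedding on an idealised (dead-qubit-free) hardware graph; chain lengths vary run to run, and the specific sizes are our own illustrative choices.

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

n_visible, n_hidden = 50, 50

# Three candidate logical topologies for a 100-variable latent space model.
complete = nx.complete_graph(n_visible + n_hidden)            # fully connected BM
bipartite = nx.complete_bipartite_graph(n_visible, n_hidden)  # restricted (symmetric bipartite) BM
chimera_native = dnx.chimera_graph(2, 2, 4)                   # sparse, hardware-native graph

# Target hardware graph: an ideal 16x16 Chimera (the D-Wave 2000Q layout, ignoring dead qubits).
target = dnx.chimera_graph(16, 16, 4)

for name, logical in [("complete", complete), ("bipartite", bipartite)]:
    # Heuristic minor embedding: each logical variable maps to a chain of physical qubits.
    embedding = minorminer.find_embedding(logical.edges, target.edges)
    max_chain = max(len(chain) for chain in embedding.values())
    print(f"{name}: {logical.number_of_nodes()} logical variables, "
          f"longest chain {max_chain} qubits")

# The Chimera-native graph needs no chains: it maps one-to-one onto the hardware.
print("chimera-native:", chimera_native.number_of_nodes(), "qubits, chain length 1")
```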

Figure 5: Left to right: 28x28 continuous, 6x6 continuous, and 6x6 stochastically binarized examples from the MNIST dataset.

Reparametrisation

Figure 6: The probability density function of Equation 13 for different values of the scale parameter $c$. In this investigation a value of $c$ was used that distinguishes the distribution strongly from the uniform noise case.

Samples from the latent space come from a discrete space. These variables are reparametrised to a continuous space using standard techniques. There are many potential choices of reparametrisation function; a simple example is outlined below. We chose a probability density function which rises exponentially and can be scaled by a parameter $c$:

$$p(x) = \frac{c\, e^{c x}}{e^{c} - 1}, \qquad x \in [0, 1] \quad (13)$$

The cumulative distribution function of this probability density function is $F(x) = (e^{c x} - 1)/(e^{c} - 1)$, and its inverse is

$$x = \frac{1}{c} \ln\left(1 + u\,(e^{c} - 1)\right) \quad (14)$$

Discrete samples can be reparametrised by sampling $u$ from $U(0,1)$ and inputting it into Equation 14. The value of $c$ was fixed for all experiments.
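A minimal sketch of this inverse-CDF reparametrisation is shown below, under the assumption that the density of Equation 13 is defined on [0, 1] and that spins of -1 are pushed through the mirrored density, so the two discrete values concentrate mass at opposite ends of the interval. The exact mapping and the value of c used in the paper may differ; this only illustrates the technique.

```python
import numpy as np

def reparametrise(spins, c=4.0, rng=None):
    """Map discrete spins in {-1, +1} to continuous values in [0, 1].

    u ~ U(0,1) is pushed through the inverse CDF of p(x) proportional to exp(c*x)
    (Equation 14); spins of -1 use the mirrored density so their mass sits near 0.
    The value c=4.0 is an illustrative choice, not the paper's setting."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=np.shape(spins))
    x = np.log1p(u * (np.exp(c) - 1.0)) / c   # inverse CDF, mass rises towards 1
    return np.where(np.asarray(spins) > 0, x, 1.0 - x)

# Example: a batch of 100-dimensional latent samples from the annealer.
spins = np.random.default_rng(1).choice([-1, 1], size=(8, 100))
z_continuous = reparametrise(spins)
```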

Networks

The generator network consists of dense and transposed convolutional layers (stride 2, kernel size 4) with batch normalisation and ReLU activations. The output layer has a tanh activation. These components are standard deep learning techniques found in textbooks, for example

Goodfellow-et-al-2016 . The discriminator network consists of dense and convolutional layers (stride 2, kernel size 4) with LeakyReLU activations. The dense layer corresponding to the feature distribution was chosen to have tanh activations so that its outputs could be mapped to the BM. The hidden layer representing $p_F$ was the fourth layer of the discriminator network, with 100 nodes. When sampling the training data for the BM from the discriminator, the variables were given values from the set $\{+1, -1\}$, as in the Ising model, dependent on the activation of the node being greater or less than the threshold, set at zero, respectively. The networks were trained with an Adam optimiser with learning rate 0.0002, and the labels were smoothed with noise. For the sparse graph latent space used in learning the MNIST dataset in Section IV, the BM was embedded in the D-Wave hardware using a heuristic embedder. As there is a 1-1 mapping for the sparse graph, it was expressed in hardware using 100 qubits. An annealing schedule of 1 and a learning rate of 0.0002 were used. The classical architecture compared with the QAAAN was identical other than replacing sampling via quantum annealing with MCMC sampling techniques.
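For concreteness, a possible PyTorch rendering of networks matching this description (stride-2, kernel-4 layers, a 100-node tanh feature layer in the discriminator) is sketched below. The layer sizes and exact layout are assumptions consistent with the text, not the authors' code.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Tanh(),
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 128, 7, 7))

class Discriminator(nn.Module):
    def __init__(self, feature_dim=100):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 28 -> 14
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 14 -> 7
            nn.Flatten(),
        )
        self.feature_layer = nn.Sequential(nn.Linear(128 * 7 * 7, feature_dim), nn.Tanh())
        self.classifier = nn.Sequential(nn.Linear(feature_dim, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.feature_layer(self.conv(x))   # 100-node tanh layer feeding the BM
        return self.classifier(f), f

# Example: threshold features at zero to obtain Ising-valued training data for the BM.
g, d = Generator(), Discriminator()
x = g(torch.rand(16, 100))            # reparametrised latent samples in place of U(0,1)
score, features = d(x)
spins = torch.where(features > 0, torch.ones_like(features), -torch.ones_like(features))
```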

IV Results & Discussion

For this work we performed several experiments. First, we compared three topologies of graphical models, trained using both classical and quantum annealing sampling methods. They were evaluated by measuring the Kullback-Leibler divergence over the course of learning a reduced, stochastically binarized version of the MNIST dataset, Figure 5. Second, the QAAAN and the classical associative adversarial network described in Section III were both used to generate new examples of the MNIST dataset. Their performance was evaluated using the inception score and the Fréchet inception distance (FID). Finally, the classical associative adversarial network was used to generate new examples of the LSUN bedrooms dataset. In the experiment comparing topologies, as expected, the BM trains a better model faster with higher connectivity, Figure 9. When trained via sampling with the quantum annealer the picture is less intuitive, Figure 8: all topologies learned a model to the same accuracy, at similar rates. This indicates that there is a noise floor preventing the learning of a better model in the more complex graphical topologies. For the purposes of this investigation, the performance of the sparse graph was demonstrated to be sufficient to learn an informed prior for use in the QAAAN algorithm. Second, for the classical associative adversarial network, all topologies were implemented, and the quantum-assisted algorithm was implemented with a sparse topology latent space. The generated images for sparse topology latent spaces are shown for both classical and quantum versions in Figures 7(a) and 7(b). We evaluated the classical and quantum-assisted versions of the associative adversarial network with sparse latent spaces via two metrics, the inception score and the FID. Both metrics require an inception network, a network trained to classify images from the MNIST dataset. The inception score, Equation 15, attempts to quantify the realism of images generated by a model. For a given image, the class distribution $p(y|x)$ should be dominated by one value of $y$, indicating a high probability that the image is representative of a class. Secondly, over the whole set there should be a uniform distribution of classes, indicating diversity of the distribution. This is expressed as

$$\mathrm{IS}(G) = \exp\left( \mathbb{E}_{x \sim p_g} \left[ D_{\mathrm{KL}}\left( p(y|x) \,\|\, p(y) \right) \right] \right) \quad (15)$$

The first criterion is satisfied by requiring that the image-wise class distributions $p(y|x)$ have low entropy. The second criterion implies that the entropy of the overall class distribution $p(y)$ should be high. The method is to calculate the KL divergence between these two distributions: a high value indicates both that $p(y|x)$ is concentrated on one class and that $p(y)$ is distributed over many classes. When averaged over all samples, this score gives a good indication of the performance of the network. The classical and quantum-assisted versions achieved similar inception scores. The FID measures the similarity between features extracted by an inception network from the dataset and from the generated data. The distributions of the features are modelled as multivariate Gaussians. Lower FID values mean the features extracted from the generated images are closer to those for the real images. In Equation 16, $\mu_r$ and $\mu_g$ are the means of the activations of an interim layer of the inception network for the real and generated data, and $\Sigma_r$ and $\Sigma_g$ are the covariance matrices of these activations. The classical and quantum-assisted algorithms achieved similar FID scores.

$$\mathrm{FID} = \left\| \mu_r - \mu_g \right\|^2 + \mathrm{Tr}\left( \Sigma_r + \Sigma_g - 2\left( \Sigma_r \Sigma_g \right)^{1/2} \right) \quad (16)$$
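The two evaluation metrics can be computed from classifier outputs and feature activations as in the NumPy/SciPy sketch below; it assumes softmax class probabilities and interim-layer features are already available, and it is not tied to any particular inception network.

```python
import numpy as np
from scipy.linalg import sqrtm

def inception_score(p_yx, eps=1e-12):
    """Equation 15: exp of the mean KL divergence between p(y|x) and the marginal p(y).
    p_yx is an (n_samples, n_classes) array of softmax outputs for generated images."""
    p_y = p_yx.mean(axis=0, keepdims=True)
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

def frechet_inception_distance(feats_real, feats_gen):
    """Equation 16: Fréchet distance between Gaussian fits to real and generated features."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):          # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2 * covmean))

# Toy usage with random stand-ins for classifier outputs and features.
rng = np.random.default_rng(0)
p_yx = rng.dirichlet(np.ones(10), size=500)
print(inception_score(p_yx))
print(frechet_inception_distance(rng.normal(size=(500, 64)), rng.normal(size=(500, 64))))
```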

The classical implementation was also used to generate images mimicking the LSUN bedrooms dataset, Figure 2. This final experiment was only performed as a demonstration of scalability, and no metrics were used to evaluate performance.

Figure 7: Example MNIST characters generated by (a) the classical and (b) the quantum-assisted associative adversarial network architectures, with sparse topology latent spaces.

Discussion

Figure 8: Comparison of the convergence of different graphical topologies trained using samples from a quantum annealer on a reduced, stochastically binarized MNIST dataset. The learning rate used was 0.03; this learning rate produced the fastest learning with no loss in performance of the final model. The learning was run 5 times over different embeddings and the results averaged. The error bars describe the variance over these curves.

Figure 9: Comparison of different graphical topologies trained using MCMC sampling on a reduced, stochastically binarized MNIST dataset. The learning rate used was 0.001; this learning rate was chosen such that the training was stable for each topology, as we found that the error diverged for certain topologies at other learning rates. The learning was run 5 times and the results averaged. The error bars describe the variance over these curves.

Though it is trivial to demonstrate a correlation between the connectivity of a graphical model and the quality of the learned model, Figure 9, it is not immediately clear that the benefits of increasing the complexity of the latent space can be detected easily in deep learning frameworks, such as the quantum-assisted Helmholtz machine benedetti2018quantumqahm and those looking to exploit quantum models khoshaman2018quantum . The effect of the complexity of the latent space model on the quality of the final latent variable generative model was not apparent in our investigations. Deep learning frameworks looking to exploit quantum-hardware-supported training in the latent space need to truly benefit from this application, and not have any potential gains ironed out by backpropagation. For example, if exploiting a quantum model gives improved performance on some small test problem, it is an open question as to whether this improvement will be detected when integrated into a deep learning framework such as the architecture presented here. Here, given the nature of the demonstration and a desire to avoid chaining, we use a sparse connectivity model. Avoiding chaining allows larger models to be embedded into near-term quantum hardware. Given the scaling of physical qubits to logical variables for a complete logical graph choi2011minor , future applications of sampling via quantum annealing will likely exploit restricted graphical models. Though the size of near-term quantum annealers has followed a Moore's-law-like trajectory, doubling in size every two years, it is not clear what size of probabilistic graphical model will find mainstream usage in machine learning applications, and exploring the uses of different models will be an important theme of research as these devices grow in size. There are two takeaways from the results presented here. First, though the scores reported above are not comparable to state-of-the-art GAN architectures and are on a simple MNIST implementation, they serve to highlight that the inclusion of a near-term quantum device is not detrimental to the performance of this algorithm. Second, we have demonstrated the framework on the larger, more complex LSUN bedrooms dataset, Figure 2. This indicates that the algorithm can be scaled.

V Conclusions

Summary

In this work, we have presented a novel and scalable quantum-assisted algorithm, based on a GAN framework, which can learn an implicit latent variable generative model of complex datasets. This work is a step in the development of algorithms that may use quantum phenomena to improve the learning of generative models of datasets. The algorithm fulfills the requirements of the three areas outlined by Perdomo-Ortiz et al. perdomo2017opportunities : generative problems, data where quantum correlations may be beneficial, and hybrid algorithms. This implementation also allows for the use of sparse topologies, removing the need for chaining, requires a relatively small number of variables (allowing near-term quantum hardware to be applied), and is resistant to noise. Though the key motivation of this work is to demonstrate a functional deep learning framework integrating near-term quantum hardware in the learning process, it builds on classical work by Tarik Arici and Asli Celikyilmaz arici2016associative exploring the effect of learning the feature space and using this distribution as the input to the generator. No claims are made here about improvements that can be made classically, though it is possible that further research into the associative adversarial architecture will yield improvements to GAN design. In summary, we have successfully demonstrated a quantum-assisted GAN capable of learning a model of a complex dataset such as LSUN, and compared the performance of different topologies.

Further Work

There are many avenues for using quantum annealing for sampling in machine learning, in the study of topologies and in GAN research. Here, we have outlined a framework that works on simple (MNIST) and more complex (LSUN) datasets. We highlight several areas of interest that build on this work. The first is an investigation into how the inclusion of quantum hardware in models such as this can be detected. There are two potential sources of improvement to the model: quantum terms may improve the model of the data distribution; or graphical models that are classically intractable to learn, for example fully connected models, integrated into the latent space, may improve the latent variable generative model learned. Before investing extensive time and research into integrating quantum models into latent spaces, it will be important to confirm that these improvements are reflected in the overall model of the dataset, that is, that backpropagation does not erase any latent space performance gains. There are still outstanding questions as to the distribution the quantum annealer samples from. The pause and reverse anneal features on the D-Wave 2000Q give greater control over the distribution output by the quantum annealer, and can be used to explore the relationship between the quantum nature of that distribution and the quality of the model trained by a quantum Boltzmann machine marshall2018power . It is also not clear what distribution is 'best' for learning a model of a dataset. It could be that efforts to decrease the operating temperature of a quantum annealer to boost performance in optimisation problems will lead to decreased performance in ML applications, as the diversity of states in a distribution decreases and probability accumulates at a few low energy states. There are interesting open questions as to the optimal effective temperature of a quantum annealer for ML applications. This question fits within a broad area of research in ML asking which distributions are most useful for ML and why. For this simple implementation, the quantum sampling sparse graph performance is comparable to the complete and restricted topologies. Though in optimised implementations we expect divergent performance, the sparse graph serves the purpose of demonstrating the QAAAN architecture. Additionally, we have highlighted sparse classical graphical models for use in the architecture demonstrated on LSUN bedrooms. Though they have reduced expressive power, there are many more applications for current quantum hardware; for example, a fully connected graphical model would require in excess of 2048 qubits (the number available on the D-Wave 2000Q) to learn a model of a standard MNIST dataset, not to mention the detrimental effect of the extensive chains. A sparse D-Wave 2000Q native graph (Chimera), conversely, would only use 784 qubits. This is a stark example of how sparse models might be used in lieu of models with higher connectivity. Investigations finding the optimal balance between the complexity of a model, the resulting overhead required by embedding, and the effect of both on performance are needed to understand how future quantum annealers might be used for applications in ML.

Acknowledgements

We would like to thank Marcello Benedetti for conversations full of his expertise and good humour. We would also like to thank Thomas Vandal, Rama Nemani, Andrew Michaelis, Subodh Kalia and Salvatore Mandra for useful discussions and comments. We are grateful for support from NASA Ames Research Center, and from the NASA Earth Science Technology Office (ESTO), the NASA Advanced Exploration systems (AES) program, and the NASA Transformative Aeronautic Concepts Program (TACP). We also appreciate support from the AFRL Information Directorate under grant F4HBKC4162G001 and the Office of the Director of National Intelligence (ODNI) and the Intelligence Advanced Research Projects Activity (IARPA), via IAA 145483. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, AFRL, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purpose notwithstanding any copyright annotation thereon.

References