Quantum Deep Learning: Sampling Neural Nets with a Quantum Annealer

07/19/2021 ∙ by Catherine F. Higham, et al. ∙ University of Glasgow

We demonstrate the feasibility of framing a classically learned deep neural network as an energy based model that can be processed on a one-step quantum annealer in order to exploit fast sampling times. We propose approaches to overcome two hurdles for high resolution image classification on a quantum processing unit (QPU): the required number and binary nature of the model states. With this novel method we successfully transfer a convolutional neural network to the QPU and show the potential for classification speedup of at least one order of magnitude.


1 Motivation

Deep learning approaches are being applied and refined for classification tasks across many data types (images, video and audio) with high levels of success [14, 11, 2]. There are many applications where classically trained convolutional neural networks (CNNs) perform very well, sometimes better than human experts [7, 3, 16]. However, in some circumstances, including security, defence and automated transport, safety-critical benefits would arise if classifications could be computed more quickly. This suggests an important role for quantum computing, and we give a proof of principle demonstration that quantum annealing has the potential to address this issue.

The manuscript is organised as follows. In Section 2 we discuss the quantum annealing technology, the underlying quantum physics and the machine learning tool that we implement. We also discuss related work and summarise our main contributions. In Section 3 we give more detail about the implementation and justify our algorithmic choices. Computational results are given in Section 4. We finish in Section 5 with a brief discussion.

2 Background and Related Work

2.1 Quantum Annealing Technology

The current D-Wave Advantage System 1.1 contains around 5,000 qubits and 35,000 couplers [5]. These qubits can be physically coupled to form networks with real-valued coefficients denoting coupling strength and individual on/off biases. Connectivity per qubit is limited to 15 couplers but can be extended by forming chains of qubits. These coefficients define and constrain relationships between qubits and form a quadratic binary model capable of expressing a range of behaviours. The parameter coefficients of a given model are embedded on the D-Wave network and, by posing the problem as energy minimisation, quantum annealing is used to find the low energy states (on/off) of the qubits and hence the most likely configurations. The ability to rapidly sample from many states, and hence characterise the shape of the energy landscape, is a key benefit of this technology.

2.2 Underlying Quantum Physics: The Hamiltonian

A classical Hamiltonian gives a mathematical description of some physical system in terms of its energies. For most non-convex Hamiltonians, finding the minimum energy state is an NP-hard problem that classical computers are not expected to solve efficiently. In quantum annealing, the system begins in the lowest-energy state of an initial Hamiltonian, $H_I$, and as it anneals introduces the problem Hamiltonian, $H_P$. To do this, a time-like parameter $s \in [0,1]$ and annealing functions $A(s)$ and $B(s)$ are introduced such that $A(0) \gg B(0)$ and $A(1) \ll B(1)$. Hence, as the system is annealed, $A(s)$ decreases, $B(s)$ increases and we approach the desired solution states. This approach has the potential for significant benefits in terms of both speed and accuracy, compared with classical computing technology.

For the D-Wave system [5], the Hamiltonian is expressed as follows

$$\mathcal{H}(s) = -\frac{A(s)}{2}\sum_i \sigma_x^{(i)} + \frac{B(s)}{2}\left(\sum_i h_i\,\sigma_z^{(i)} + \sum_{i>j} J_{i,j}\,\sigma_z^{(i)}\sigma_z^{(j)}\right), \qquad (1)$$

where $\sigma_x^{(i)}$ and $\sigma_z^{(i)}$ are Pauli matrices operating on qubit $i$, and $h_i$ and $J_{i,j}$ are the qubit local field values (biases) and coupling strengths (weights), respectively. In the final state, the qubits take values of either 0 or 1. Hence this provides a classical solution for the problem Hamiltonian defined by the biases, $h_i$, and the coupling weights, $J_{i,j}$.
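To make the link between the Ising form of the final Hamiltonian in equation (1) and the 0/1 (QUBO) variables used in the rest of the paper concrete, the following minimal numpy sketch (our own illustration, with made-up values for $h_i$ and $J_{i,j}$) applies the standard change of variables $s_i = 2x_i - 1$ and checks by enumeration that the two energies agree up to a constant offset.

```python
import itertools
import numpy as np

# Hypothetical Ising problem: local fields h_i and couplings J_ij (spins s_i = +/-1).
h = {0: 0.5, 1: -0.3, 2: 0.2}
J = {(0, 1): -1.0, (1, 2): 0.7, (0, 2): 0.4}

# The substitution s_i = 2*x_i - 1 maps the Ising energy to a QUBO over x_i in {0, 1}.
Q = {(i, i): 2.0 * h[i] for i in h}              # diagonal (bias) terms
offset = -sum(h.values()) + sum(J.values())      # constant shift between the two forms
for (i, j), Jij in J.items():
    Q[(i, j)] = 4.0 * Jij
    Q[(i, i)] -= 2.0 * Jij
    Q[(j, j)] -= 2.0 * Jij

def ising_energy(s):
    return sum(h[i] * s[i] for i in h) + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def qubo_energy(x):
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

# Enumerate all 2^3 states and confirm the energies match up to the offset.
for bits in itertools.product([0, 1], repeat=3):
    x = dict(enumerate(bits))
    s = {i: 2 * xi - 1 for i, xi in x.items()}
    assert np.isclose(ising_energy(s), qubo_energy(x) + offset)
print("Ising and QUBO energies agree up to a constant offset.")
```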

2.3 Machine Learning Approaches: Boltzmann Machines

Boltzmann machines [9] are probabilistic models with an energy-based distribution that defines a probability for each of the $2^N$ discrete states of a binary vector $\mathbf{x} \in \{0,1\}^N$, given by

$$p(\mathbf{x}) = \frac{1}{Z}\exp\big(-E(\mathbf{x};\theta)\big). \qquad (2)$$

Here $E(\mathbf{x};\theta)$ is an energy function parameterised by $\theta$, and $Z = \sum_{\mathbf{x}}\exp\big(-E(\mathbf{x};\theta)\big)$ is the normalizing coefficient, also known as the partition function, which ensures that $p(\mathbf{x})$ sums to 1 over all the possible states of $\mathbf{x}$. The energy function can be represented via a quadratic form $\mathbf{x}^{\top} Q\,\mathbf{x}$, in which the upper-triangular matrix $Q$ encapsulates the parameters of the quadratic energy function defined by

$$E(\mathbf{x};\theta) = \mathbf{x}^{\top} Q\,\mathbf{x} = \sum_i b_i x_i + \sum_{i<j} w_{ij}\, x_i x_j, \qquad (3)$$

where $b_i$ and $w_{ij}$ are the biases and correlation weights respectively. This expression makes clear the connection between Boltzmann machines, quadratic binary models, and the final Hamiltonian in the second part of equation (1). This suggests that Boltzmann machines are good candidates for training and evaluation on the D-Wave quantum annealer, as the problem can be framed in such a way that the solution to the final Hamiltonian is the low energy state of the problem. In this work we are interested in transferring classically learned weights to a quantum system for speed-up and hence we focus on the evaluation task. A restricted Boltzmann machine (RBM) is a special type of Boltzmann machine with a symmetrical bipartite structure [10]. The set of binary variables is divided into visible (input), $\mathbf{v}$, and hidden, $\mathbf{h}$, variables. These are analogous to the retina and brain, respectively. The hidden variables allow for more complex dependencies among visible variables and are used to learn a stochastic generative model over a set of inputs. All visible variables connect to all hidden variables, but no variables in the same layer are linked. In the classical setting this limited connectivity makes inference, and therefore learning, easier because analytical expressions can be found for the conditional probabilities $p(\mathbf{h}\,|\,\mathbf{v})$ and $p(\mathbf{v}\,|\,\mathbf{h})$. Gibbs sampling may be used to sample from $p(\mathbf{h}\,|\,\mathbf{v})$ and then from $p(\mathbf{v}\,|\,\mathbf{h})$, leading to an estimated expected value of the model required to update the weights, a process known as contrastive divergence (CD) [10].
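As a concrete illustration of equations (2) and (3), the following numpy sketch (with arbitrary, made-up entries for $Q$) enumerates all states of a three-variable quadratic binary model and computes the Boltzmann probabilities $p(\mathbf{x}) = \exp(-E(\mathbf{x}))/Z$; the low-energy states receive the highest probability.

```python
import itertools
import numpy as np

# Upper-triangular Q: diagonal entries are biases b_i, off-diagonal entries are weights w_ij.
Q = np.array([[-1.0,  0.8,  0.0],
              [ 0.0, -0.5, -1.2],
              [ 0.0,  0.0,  0.3]])

states = np.array(list(itertools.product([0, 1], repeat=3)))   # all 2^3 binary states
energies = np.array([x @ Q @ x for x in states])               # E(x) = x^T Q x, eq. (3)
probs = np.exp(-energies)
probs /= probs.sum()                                           # divide by the partition function Z, eq. (2)

for x, E, p in sorted(zip(states.tolist(), energies, probs), key=lambda t: t[1]):
    print(x, f"E={E:+.2f}", f"p={p:.3f}")
```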

2.4 Related Work

Finding the optimal parameter values for an artificial neural network requires large amounts of data and many cycles of updates, which costs time and energy. Training on a QPU has the potential to produce better models, and to produce them more quickly, than a digital processor. Adachi and Henderson [1] investigated estimating the model expectations of RBMs using samples on a 512-qubit D-Wave machine and successfully trained a model with up to 32 visible nodes and 32 hidden nodes per RBM layer. In their tests, they found that this approach achieves comparable or better accuracy with significantly fewer iterations of generative training than conventional CD-based training on a coarse-grained version of the MNIST data set.

Sleeman, Dorband and Halem [15] also investigated the feasibility of using the D-Wave as a sampler for machine learning. Their work described a hybrid system that combined a classical deep neural network autoencoder with a quantum annealing RBM. Their method overcame two key limitations of the 2000-qubit D-Wave processor, namely the limited number of qubits available to accommodate typical problem sizes for fully connected quantum objective functions, and samples that are binary pixel representations. Their hybrid autoencoder approach indicated an advantage for quantum annealing relative to a classical computer implementation for image-based machine learning and hinted at even more promising results for the next generation D-Wave quantum system.

In [6], the model expectation of gradient learning for an RBM was calculated using a quantum annealer (D-Wave 2000Q), giving much faster results than the Markov chain Monte Carlo used in contrastive divergence. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. Liu, Yao and Spedalieri [13] used an open-system quantum annealer to sample from complex distributions and implemented Boltzmann machines with looping connectivity.

In this work our starting point is a classically trained convolutional neural network (CNN) that maps real-valued features to a classification label. Unlike [1], [15] and [6], we do not train on the QPU but investigate transferring the CNN problem to the QPU in order to obtain solutions. However, we encounter similar issues: qubit number, the binary nature of the variables, and connectivity. This work looks at how these issues can be addressed on the recent D-Wave Advantage in the context of deep learning transfer.

2.5 Contributions

The main contribution of this work is to show that a trained artificial neural network can be transferred to a quantum computing setting. To do this we use the framework of energy minimisation involving an appropriate quadratic form, where quantum annealing provides samples from the low energy, most likely model states. Results, obtainable in microseconds rather than milliseconds, can be used to estimate a predictive class score. We show how to design a quadratic binary model, the engine of quantum annealing, to behave like a neural network, combining deep learned parameters and layer structures from a classically trained neural network with quadratic binary model parameters. Constraint parameters are adapted so that designated class units act together as a softmax classification unit. We then test this approach on digit image data by finding an appropriate embedding on the D-Wave QPU and transferring the coupling and bias parameters from the classical model to the qubits. Key barriers to scaling up to high resolution image/video processing are the binary nature of the variables, the number of qubits that can be assigned to visible units, and the limited number of couplers per qubit. We address these issues by showing how real-valued features can be introduced into the system and by introducing novel pooling qubits that reduce the number of connections required. These pooling qubits also add the nonlinear behaviour required to model data. Our approach is outlined in Section 3 and the results of experiments on the D-Wave in Section 4.

3 Methods: Framing Classification as a Quadratic Binary Model with Constraints

3.1 Neural Network Layers

A typical neural network classification model takes features, $\mathbf{x}$, as input and passes these features through a series of mappings and activations leading to a classification score [8]. These mappings typically include a pooling function, such as max pooling, a linear transform function, $W\mathbf{x} + \mathbf{b}$, where $W$ is a matrix and $\mathbf{b}$ is a vector, and a softmax activation to determine a class score. The values of the parameters $W$ and $\mathbf{b}$ in each layer are determined through training with a cost function to minimise the class score error. In our work we set $\mathbf{b} = \mathbf{0}$ to reduce the number of parameters.
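A minimal numpy sketch of the classical layer sequence just described (max pooling, a linear map with $\mathbf{b}=\mathbf{0}$, and softmax), using a made-up weight matrix W and random features; the sizes are chosen to mirror the small model of Section 4.1 but are otherwise arbitrary.

```python
import numpy as np

def max_pool(x, pool=4):
    """Max over non-overlapping groups of `pool` neighbouring features."""
    return x.reshape(-1, pool).max(axis=1)

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.random(20)                   # 20 real-valued features in [0, 1)
W = rng.standard_normal((5, 5))      # hypothetical classification weights: 5 classes x 5 pooled features

pooled = max_pool(x)                 # 20 features -> 5 pooled features
scores = softmax(W @ pooled)         # class scores (b set to 0)
print("predicted class:", scores.argmax(), "scores:", np.round(scores, 3))
```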

3.2 Quadratic Binary Model with Constraints

A quadratic binary model defines an energy-based network of binary random variables with real-valued parameters for biases and correlation weights. Constraints can be added to the biases. Energy minimisation involves finding the binary values of the model states that result in the lowest energy levels. This minimisation problem is also referred to as quadratic unconstrained binary optimisation (QUBO).

Let $\mathbf{v}$, $\mathbf{p}$ and $\mathbf{c}$ be binary random variables, where $\mathbf{v}$ denotes visible units, and $\mathbf{p}$ and $\mathbf{c}$ denote hidden pooling and classification units, respectively. $N_v$, $N_{pool}$, $N_p$ and $N_c$ are the number of visible units, the number of visible units in a pool, the number of hidden pooling units and the number of class units, respectively. For pooling, subsets of $\mathbf{v}$, each containing four neighbouring elements, are connected to a single pooling unit $p_j$, reducing the possible connections to the classification layer from $N_v \times N_c$ to $N_p \times N_c$. All $p_j$ are connected to all $c_k$, forming a fully connected classification layer. Further, the $c_k$ are interconnected so that a decision can be made about the most likely state, equivalent to a softmax activation.

The energy of this system involving $\mathbf{v}$, $\mathbf{p}$ and $\mathbf{c}$ and the connections described above can be introduced, as in equation (3), as follows

$$E(\mathbf{v},\mathbf{p},\mathbf{c}) = \sum_i a_i v_i + \sum_j b_j p_j + \sum_k d_k c_k + \sum_{i,j} w_{ij}\, v_i p_j + \sum_{j,k} u_{jk}\, p_j c_k + \sum_{k<k'} m_{kk'}\, c_k c_{k'}, \qquad (4)$$

where $a_i$, $b_j$ and $d_k$ are the biases of $v_i$, $p_j$ and $c_k$ respectively, and $w_{ij}$, $u_{jk}$ and $m_{kk'}$ are the coupling strengths between $v_i$ and $p_j$, between $p_j$ and $c_k$, and between $c_k$ and $c_{k'}$, respectively. Note that $w_{ij}$ is nonzero only when $v_i$ belongs to the pool assigned to $p_j$.
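As a structural illustration of equation (4) (all biases and couplings below are random placeholders, not trained values; the variable names follow our reconstruction of the equation), the following sketch evaluates the energy of a small configuration of visible, pooling and class units.

```python
import numpy as np

rng = np.random.default_rng(3)
n_v, pool, n_p, n_c = 8, 4, 2, 3             # small example: 8 visible, 2 pooling, 3 class units

a = rng.random(n_v)                          # visible biases
b = rng.standard_normal(n_p)                 # pooling biases
d = rng.standard_normal(n_c)                 # class biases
w = rng.standard_normal((n_v, n_p))          # visible-pooling couplings (used only within a pool)
u = rng.standard_normal((n_p, n_c))          # pooling-class couplings
m = rng.standard_normal((n_c, n_c))          # class-class couplings (upper triangle used)

def energy(v, p, c):
    """Energy of eq. (4) for binary vectors v, p, c."""
    e = a @ v + b @ p + d @ c + p @ u @ c
    e += sum(w[i, j] * v[i] * p[j]           # v_i couples to p_j only if i lies in pool j
             for j in range(n_p) for i in range(j * pool, (j + 1) * pool))
    e += sum(m[k, k2] * c[k] * c[k2] for k in range(n_c) for k2 in range(k + 1, n_c))
    return e

v = np.array([1, 1, 0, 1, 0, 0, 0, 0])
p = np.array([1, 0])
c = np.array([0, 1, 0])
print("E(v, p, c) =", round(float(energy(v, p, c)), 3))
```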

In order that the quadratic binary model behaves like the neural network classification model, the parameters in $Q$ are set as follows to achieve pooling and classification.

3.3 Downsampling with Pooling Units

The objective of pooling for the hidden units is to establish whether features are present. If features are present, then $p_j$ is on and contributions are made to the classification units $c_k$. If features are not present then $p_j$ is off and no contributions are made to the classification units. Consider the contribution a pooling subset, $S_j$, makes to the energy equation,

$$E_j = \sum_{i \in S_j} a_i v_i + b_j p_j + \sum_{i \in S_j} w_{ij}\, v_i p_j. \qquad (5)$$

If the $v_i$ are the normalised feature values of $S_j$, real values between 0 and 1, then neutralising $b_j$ and weighting the interaction between $v_i$ and $p_j$ by setting $w_{ij}$ to a negative value lowers the energy of the system and increases the likelihood that $p_j$ is on if a feature is present.
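A small numeric check of this pooling behaviour (the bias and coupling values below are placeholders acting as a simple threshold, not the paper's settings): when the features in a pool are large, switching the pooling unit on lowers the contribution in equation (5); when they are essentially absent, it does not.

```python
def pool_energy(v, p, bias=0.5, w=-1.0):
    """Contribution of one pooling subset to the energy, eq. (5): bias*p + sum_i w*v_i*p.
    `bias` and `w` are placeholder values chosen so that the bias acts as a threshold."""
    return bias * p + sum(w * vi * p for vi in v)

strong = [0.9, 0.8, 0.7, 0.9]   # feature clearly present in the pool
weak   = [0.0, 0.1, 0.0, 0.0]   # feature essentially absent

for name, v in [("strong", strong), ("weak", weak)]:
    e_off, e_on = pool_energy(v, 0), pool_energy(v, 1)
    print(name, "p=0 ->", round(e_off, 2), " p=1 ->", round(e_on, 2),
          " pooling unit prefers", "on" if e_on < e_off else "off")
```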

3.4 Establishing a Classification Unit

We assume that the weights of a CNN classification layer, $W$, have been optimised so that the class position of the maximum value of the predicted class vector $W\mathbf{x}$, where $\mathbf{x}$ is a feature vector, agrees with the class of $\mathbf{x}$. The contribution to the classification units $c_k$ comes from an interaction with $p_j$ (the terms $u_{jk} p_j c_k$) and interactions with the other $c_{k'}$ (the terms $m_{kk'} c_k c_{k'}$), see equation (4). Concerning the interaction between $p_j$ and $c_k$, we set $u_{jk} = -W_{kj}$, so that a large classification weight lowers the energy when $p_j$ and $c_k$ are both on.

For the interaction of the $c_k$ values with each other we constrain the model so that exactly one classification unit is on, $\sum_k c_k = 1$. This is achieved by finding the values of $d_k$ and $m_{kk'}$ that minimise $\left(\sum_k c_k - 1\right)^2$ using the following expansion

$$\left(\sum_k c_k - 1\right)^2 = \sum_k c_k + 2\sum_{k<k'} c_k c_{k'} - 2\sum_k c_k + 1 = -\sum_k c_k + 2\sum_{k<k'} c_k c_{k'} + 1, \qquad (6)$$

using $c_k^2 = c_k$ for binary variables, with solutions $d_k = -1$ and $m_{kk'} = 2$ (up to an overall constraint strength).
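The expansion in equation (6) can be checked by brute force: with a bias of $-1$ on every class unit and a coupling of $+2$ between every pair, the penalty $\left(\sum_k c_k - 1\right)^2$ is zero exactly for the one-hot states and positive otherwise. A short sketch:

```python
import itertools

n_class = 5
one_hot_states = []
for c in itertools.product([0, 1], repeat=n_class):
    # QUBO form of the constraint: -sum_k c_k + 2 * sum_{k<k'} c_k c_k' (+ constant 1)
    energy = -sum(c) + 2 * sum(c[k] * c[k2] for k in range(n_class) for k2 in range(k + 1, n_class))
    penalty = (sum(c) - 1) ** 2
    assert energy + 1 == penalty          # the two expressions agree, as in eq. (6)
    if penalty == 0:
        one_hot_states.append(c)

print("states with zero penalty:", one_hot_states)  # exactly the five one-hot states
```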

3.5 Transfer to QPU

The weights and biases are physically transferred to the QPU in a sparse matrix format, $Q$, with the bias for each node along the diagonal, in position $(i,i)$, and the weight connecting each pair of nodes in position $(i,j)$. An illustrative example of $Q$ is shown in Figure 3. Before the quantum annealing process, a mapping has to be found that physically embeds the $Q$ matrix on the D-Wave graph. This embedding is found using the available heuristic tool, minorminer [4], at the beginning of the first run and is then reused in subsequent runs.

The D-Wave sampler is called and the returned samples are ordered from lowest to highest energy and summed across classes to provide a consensus class score and hence a classification.
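A sketch of this transfer-and-sample step using D-Wave's open-source Ocean tools (dimod plus, for local testing, the dwave-neal simulated-annealing sampler; on hardware one would swap in EmbeddingComposite(DWaveSampler()) from dwave-system, which handles the embedding). The Q dictionary, node names and the `classify` helper are our own illustration, not the authors' released code.

```python
import dimod
from neal import SimulatedAnnealingSampler          # local stand-in for the QPU
# from dwave.system import DWaveSampler, EmbeddingComposite   # used on real hardware

def classify(Q, class_nodes, num_reads=1000, keep=100):
    """Sample the QUBO and average the class units over the `keep` lowest-energy samples."""
    bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
    sampler = SimulatedAnnealingSampler()
    # sampler = EmbeddingComposite(DWaveSampler())   # embeds Q on the QPU graph
    sampleset = sampler.sample(bqm, num_reads=num_reads)

    scores = {c: 0.0 for c in class_nodes}
    for i, datum in enumerate(sampleset.data(fields=["sample"], sorted_by="energy")):
        if i >= keep:
            break
        for c in class_nodes:
            scores[c] += datum.sample[c] / keep      # consensus class score
    return scores

# Toy example: two class nodes constrained to one-hot, biased towards c1.
Q = {("c0", "c0"): -1.0, ("c1", "c1"): -1.5, ("c0", "c1"): 2.0}
print(classify(Q, ["c0", "c1"]))
```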

Figure 3: Illustrative Q matrix for feature pooling and classification. (a) Structure of the model; (b) the matrix Q. Five sets of four features are each coupled, with weights $w_{ij}$, to a pooling node (see the columns labelled by the pooling nodes). These pooling nodes are further fully connected to five class nodes with coupling strengths determined by the learned CNN weights, $W$. Further connections are added between the class nodes in order to constrain the class states to just one being on. The biases associated with each node, including the input features, are placed along the diagonal.

3.6 Real Valued Features and Scaling Up

In work where RBMs are trained on the D-Wave, e.g. [1, 6], the visible units are clamped by adding a clamp strength as a constraint on the biases, ensuring that each visible unit will retain its value during the annealing process. Here, we use the fact that $\mathbf{v}$ is known, so the second term in equation (3) becomes dependent on the remaining units only and can be seen as additional biases. In summary, real-valued information about the input features enters the system through the biases placed on the visible units $\mathbf{v}$.
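A numpy check of this observation (all numbers are arbitrary and our own illustration): when the visible values $\mathbf{x}$ are known, the quadratic cross-term $w_{ij} x_i h_j$ in equation (3) collapses to an extra bias on each connected unit, so the reduced model over the remaining units has identical energies.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(4)                      # known real-valued visible features
b_h = rng.standard_normal(3)           # biases on the remaining (hidden/pooling) units
w = rng.standard_normal((4, 3))        # visible-to-hidden couplings

def full_energy(h):
    # terms of eq. (3) that involve h (the visible-only terms are a constant)
    return b_h @ h + x @ w @ h

b_eff = b_h + w.T @ x                  # fold the known features into effective biases
def reduced_energy(h):
    return b_eff @ h

for bits in itertools.product([0, 1], repeat=3):
    h = np.array(bits)
    assert np.isclose(full_energy(h), reduced_energy(h))
print("effective biases:", np.round(b_eff, 3))
```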

Fully connected layers involving large numbers of units are not suitable for the QPU. Instead, we investigate the feasibility of using convolutional connections, where the connections between layers are reduced to a subset of units in each layer but weights are shared for local and global connectivity. How these weights form part of the energy system is illustrated in Figure 4.

Figure 4: Illustrative Q matrix for a convolutional layer comprising two convolutional filters, each containing shared weights, passing over an image input with stride 2. This results in two downsized feature maps, where each interaction between input nodes and output nodes can be represented in matrix form as illustrated above. The interactions include the biases (local field values) along the diagonal and the shared weights of the two filters down the corresponding columns. Each column denotes one of the eight output features.
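The sketch below (our own illustration, with a made-up 4 x 4 input and two made-up 2 x 2 filters) shows how shared convolutional weights can be written into the coupling entries of a Q dictionary in the spirit of Figure 4: each output (feature) node is coupled to the input nodes under its window, and every window position reuses the same filter weights.

```python
import numpy as np

H = W = 4                 # input height/width (flattened to 16 input nodes)
fh = fw = 2               # filter size
stride = 2
filters = [np.array([[ 0.5, -0.2],       # two hypothetical shared-weight filters,
                     [ 0.3,  0.1]]),     # standing in for learned CNN weights
           np.array([[-0.4,  0.6],
                     [ 0.2, -0.1]])]

Q = {}                    # {(input_node, output_node): coupling}; biases would sit on the diagonal
for f, filt in enumerate(filters):
    out = 0
    for r in range(0, H - fh + 1, stride):
        for c in range(0, W - fw + 1, stride):
            out_node = f"f{f}_o{out}"                       # one output node per window position
            for dr in range(fh):
                for dc in range(fw):
                    in_node = f"in{(r + dr) * W + (c + dc)}"
                    Q[(in_node, out_node)] = -filt[dr, dc]  # same weight reused for every window
            out += 1

print(len(Q), "couplings,", 2 * out, "output feature nodes")   # 32 couplings, 8 output nodes
```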

4 Results

In this Section we present results from sampling with a quantum annealer for two classification neural networks with five and ten classes, respectively. We finish the Section with a discussion on scaling up and timings.

4.1 Binary Digits 0:4 Classification

A small classification neural network was classically trained to distinguish between binary images of the digits 0 to 4, and optimised model parameter values, $W$, were obtained. The network comprised 30 nodes: 20 visible feature nodes, 5 hidden nodes and 5 classification nodes (see the Appendix for more details). The Q values for the QUBO were set as described in Sections 3.3 and 3.4. Input to this model were binary images of the digits 0 to 4, see Figure 5, and the features included in the quadratic binary model were extracted from the convolutional layer after sigmoid activation. The biases and weights were then transferred to the QPU and 1000 samples taken. The 100 samples with the lowest energy were averaged to estimate the expected value of each classification node. The results for the five nodes assigned to the five classes clearly show dominance in the correct class, see Table 1. This simple example shows proof of principle that the weights from a classically trained CNN can be used to connect qubits for classification tasks.
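For orientation, here is a sketch of how a 30-node QUBO with this structure (20 visible, 5 pooling, 5 class nodes) could be assembled. The feature values, weight matrix and the non-constraint coupling values are random placeholders; in the experiment the couplings come from the trained CNN and the settings of Section 3. The resulting Q dictionary can be passed to the sampling sketch of Section 3.5.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pool, pool_size, n_class = 5, 4, 5
x = rng.random(n_pool * pool_size)            # stand-in for the 20 sigmoid-activated features
W = rng.standard_normal((n_class, n_pool))    # stand-in for the trained classification weights

Q = {}                                        # {(node_i, node_j): value}; diagonal entries are biases

for i, xi in enumerate(x):                    # visible-unit biases carry the real-valued features
    Q[(f"v{i}", f"v{i}")] = -xi

for j in range(n_pool):                       # each pool of four features feeds one pooling unit
    Q[(f"p{j}", f"p{j}")] = 0.5               # placeholder pooling bias / threshold
    for i in range(j * pool_size, (j + 1) * pool_size):
        Q[(f"v{i}", f"p{j}")] = -1.0          # placeholder visible-to-pooling coupling

for j in range(n_pool):                       # pooling-to-class couplings from the learned weights
    for k in range(n_class):
        Q[(f"p{j}", f"c{k}")] = -W[k, j]

for k in range(n_class):                      # one-hot constraint on the class units (Section 3.4)
    Q[(f"c{k}", f"c{k}")] = -1.0
    for k2 in range(k + 1, n_class):
        Q[(f"c{k}", f"c{k2}")] = 2.0

nodes = {i for pair in Q for i in pair}
print(len(nodes), "nodes,", len(Q), "QUBO entries")   # 30 nodes
```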

Figure 5: Binary images of the digits 0 to 4.

4.2 MNIST classification

We investigate using the high-level abstracted features from the penultimate layers of a neural network model trained using the MNIST database [12]. The quadratic binary model comprised 50 nodes: 40 feature nodes and 10 classification nodes. The Q values were set as in Section 3, with the feature-to-class couplings obtained from the classification layer of the MNIST CNN. The biases and weights were then transferred to the QPU and 1000 samples taken. The 100 samples with the lowest energy were averaged to estimate the expected value of each classification node. The results for the ten nodes assigned to the ten classes show dominance in the correct class for digits 0, 3, 5 and 6. There was confusion between digits 1, 7 and 9. Digits 2 and 4 were never predicted, whereas digit 8 was sometimes predicted but with a high error rate, see Table 2. This experiment was performed on only one randomly chosen digit from each class and with one embedding for Q. With physical experiments, variations are to be expected in the set-up, for example within the individual qubits and couplers. Future work will run this experiment many more times to reduce this variation, and we would expect to see more reliable predictions. This example shows that including high-level abstracted features, arising from one source or a combination of sources, as biases in the quadratic binary model opens up the classification task to a broader field.

Class predictions
Input class     0     1     2     3     4
0             100     0     4     0    17
1               0   100     0    68     0
2               0     0   100     2     0
3               0     0     0   100    86
4              47     0    27    22   100
Table 1: Binary digit classification. Class predictions (columns) based on the 100 lowest-energy samples out of 1000 are shown for each input class (rows). All digits show dominance in the correct class.
Class predictions
Input class     0     1     2     3     4     5     6     7     8     9
0              98     1    29     0     0     0    17    60     0     0
1              10   100     0     0     0     0     0   100     0     0
2              36   100     0     0     0     0     0   100     0     0
3               0    14     1    86     0    79     2    29     0     0
4              55     1     2    12     0    14    20     1    31    46
5               0     0     0    69     0   100    40     0     0     0
6              88     0     0     0     0   100   100     0     0     0
7               0   100     0     0     0     0     0   100     0     0
8              96    38     0     0     0     0     0    50    42     0
9               0   100     0     0     0     0     0   100     0     0
Table 2: MNIST classification. Class predictions (columns) based on the 100 lowest-energy samples out of 1000 are shown for each input class (rows). Digits 0, 1, 3, 5, 6 and 7 show dominance in the correct class; however, there is confusion between 1, 7 and 9, and digits 2, 4 and 8 are wrongly classified.

4.3 Scaling Up and Timing

Using our approach we were able to find an embedding and obtain samples from a four-layer model comprising 358 nodes: 196 visible nodes, 128 convolutional feature nodes, 24 fully connected hidden nodes and 10 classification nodes. The convolutional layer comprised eight filters passing over the input with a stride of two, producing the 128 convolutional feature nodes. We repeated the experiment on a classical computer and estimated that classification and printing out would take about 8000 microseconds, whereas samples can be obtained from the QPU in 168 microseconds (anneal 20 µs, readout 127 µs, delay 21 µs), suggesting a possible speed-up of at least one order of magnitude.

5 Discussion and Conclusions

We have shown that it is possible to frame a classification task as a quadratic binary model using the weights from a classically trained classification neural network. This problem can then be sent to D-Wave’s QPU for one-step quantum annealing. Given a trained network, D-Wave can be used to evaluate it.

Regarding timings, we have focused on the annealing time (around 20 µs). Currently, there are time overheads (access, programming, sampling and post-processing) involved with this process, but these could be addressed by engineering and pipelines that improve streaming to and from the QPU for specific tasks such as classification.

We addressed two barriers to scaling up: binary input states, which restrict the types of data that can be analysed, together with the number of couplings between nodes, which is limited by the physical D-Wave graph; and the required number of model variables. We showed that real-valued features can be treated as biases that are either added directly to the biases of connecting nodes or subjected to pooling and connecting-node constraints. This opens up the possibility of different types of input features, including features derived from other systems or multi-faceted features from complex systems. We introduced pooling and convolutional couplings. Pooling serves two main purposes. First, it down-samples the input, which reduces the size of the network and hence the number of parameters needed (especially important for quantum computers), reduces the computational cost and can improve generalisation. Second, it adds the non-linearity, required for better model expressiveness, to the linear maps. Convolutional filters also reduce the connectivity load and, by extracting features from spatial settings, improve model expressiveness. The largest network we were able to run had four layers and 358 nodes.

In summary, providing neural networks with a quantum engine has the potential, assuming the pipeline for streaming data to the quantum computer can be made more efficient, to obtain classification results from high dimensional sources at speeds at least an order of magnitude faster.

References

  • [1] S. H. Adachi and M. P. Henderson (2015) Application of quantum annealing to training of deep neural networks. Note: arXiv External Links: 1510.06356 Cited by: §2.4, §2.4, §3.6.
  • [2] M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba (2016) End to end learning for self-driving cars. Note: arXiv External Links: 1604.07316 Cited by: §1.
  • [3] A. Buetti-Dinh, V. Galli, S. Bellenberg, O. Ilie, M. Herold, S. Christel, M. Boretska, I. V. Pivkin, P. Wilmes, W. Sand, M. Vera, and M. Dopson (2019) Deep neural networks outperform human expert’s capacity in characterizing bioleaching bacterial biofilm composition. Biotechnology Reports 22, pp. e00321. External Links: ISSN 2215-017X, Document, Link Cited by: §1.
  • [4] J. Cai, W. G. Macready, and A. Roy (2014) A practical heuristic for finding graph minors. Note: arXiv External Links: 1406.2741 Cited by: §3.5.
  • [5] D-Wave Systems (2021). https://www.dwavesys.com/. Cited by: §2.1, §2.2.
  • [6] V. Dixit, R. Selvarajan, M. A. Alam, T. S. Humble, and S. Kais (2020) Training and classification using a restricted Boltzmann machine on the d-wave 2000q. External Links: 2005.03247 Cited by: §2.4, §2.4, §3.6.
  • [7] A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, pp. 115 – 118. Cited by: §1.
  • [8] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. The MIT Press. External Links: ISBN 0262035618 Cited by: §3.1.
  • [9] G. E. Hinton (2007) Boltzmann machine. Scholarpedia 2 (5), pp. 1668. Note: revision #91076 External Links: Document Cited by: §2.3.
  • [10] G. Hinton (2010) A practical guide to training restricted Boltzmann machines. Technical report Technical Report UTML 2010–003, University of Toronto. Cited by: §2.3.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2017-05) ImageNet classification with deep convolutional neural networks. Communications of the ACM 60 (6). External Links: ISSN 0001-0782, Link, Document Cited by: §1.
  • [12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998-11) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §4.2.
  • [13] J. Liu, K. Yao, and F. Spedalieri (2020) Dynamic topology reconfiguration of Boltzmann machines on quantum annealers. Entropy 22 (11). External Links: Link, ISSN 1099-4300, Document Cited by: §2.4.
  • [14] O. M. Parkhi, A. Vedaldi, and A. Zisserman (2015-09) Deep face recognition. In Proceedings of the British Machine Vision Conference (BMVC), M. W. J. Xianghua Xie and G. K. L. Tam (Eds.), pp. 41.1–41.12. Cited by: §1.
  • [15] J. Sleeman, J. Dorband, and M. Halem (2020) A hybrid quantum enabled RBM advantage: convolutional autoencoders for quantum image compression and generative learning. External Links: 2001.11946 Cited by: §2.4, §2.4.
  • [16] W. Zhou, Y. Yang, C. Yu, J. Liu, X. Duan, Z. Weng, D. Chen, Q. Liang, Q. Fang, J. Zhou, H. Ju, Z. Luo, W. Guo, X. Ma, X. Xie, R. Wang, and L. Zhou (2021) Ensembled deep learning model outperforms human experts in diagnosing biliary atresia from sonographic gallbladder images. Nat Commun 12 (1259). Cited by: §1.

6 Appendix

6.1 Classically Trained Neural Network Specifications

6.1.1 Binary 5 x 5 Digits 0:4 Classification

The classically trained convolutional neural network comprised input, convolutional (stride 2), sigmoid activation, max pooling (stride 2), fully connected and softmax activation layers. For the 5 x 5 binary digit input, the features after the convolutional layer form the 20 visible nodes and the features after the max pooling layer form the 5 hidden nodes used in Section 4.1.