Generative Tensor Network Classification Model for Supervised Machine Learning

03/26/2019, by Zheng-Zhi Sun et al., Capital Normal University

Tensor network (TN) has recently triggered extensive interest in developing machine-learning models in quantum many-body Hilbert space. Here we propose a generative TN classification (GTNC) approach for supervised learning. The strategy is to train a generative TN for each class of samples to construct the classifiers. The classification is implemented by comparing the distances in the many-body Hilbert space. The numerical experiments with GTNC show impressive performance on the MNIST and Fashion-MNIST datasets. The testing accuracy is competitive with the state-of-the-art convolutional neural network, and higher than the naive Bayes classifier (a generative classifier) and the support vector machine. Moreover, GTNC is more efficient than the existing TN models, which are in general discriminative. By investigating the distances in the many-body Hilbert space, we find that (a) the samples naturally cluster in such a space, and (b) bounding the bond dimensions of the TN's to finite values corresponds to removing redundant information in the image recognition. These two characteristics make GTNC an adaptive and universal model of excellent performance.


I Introduction

Machine learning incorporating the principles of quantum mechanics forms a novel interdisciplinary field known as quantum machine learning Biamonte et al. (2017). Among its many sub-directions, machine learning in quantum space is currently a topic of intense discussion. The quantum space, also called Hilbert space or quantum-enhanced feature space, is where quantum states and operators live. Quantum classifiers defined in such a space have been proposed, which are expected to work on quantum hardware such as superconducting processors Havlicek et al. (2019); Schuld and Killoran (2019).

In recent years, rapid progress has been made by combining quantum physics and machine learning through tensor networks (TN) Cichocki et al. (2016, 2017); Glasser et al. (2018). TN is a powerful tool that originates from quantum many-body physics and quantum information sciences; it can be applied to efficiently deal with the states and operators defined in the many-body Hilbert space, whose dimension increases exponentially with the number of sites (or physical particles) Verstraete et al. (2008); Orús (2014); Ran et al. (2017); Evenbly and Vidal (2011); Bridgeman and Chubb (2017); Schollwöck (2011); Cirac and Verstraete (2009a). As a novel extension, TN has been considered as a universal model for supervised and unsupervised learning Stoudenmire and Schwab (2016); Liu et al. (2017, 2018); Han et al. (2018); Cheng et al. (2018); Pestun and Vlassopoulos (2017); Cheng et al. (2019); Chen et al. (2018); Guo et al. (2018). Its applications to, e.g., image recognition already exhibit performance competitive with conventional models such as neural networks.

With the underlying connections between TN and quantum circuits Kak (1995); Schuld and Killoran (2019); Benedetti et al. (2018); Mitarai et al. (2018); Rebentrost et al. (2014); Lloyd et al. (2013); Liu and Wang (2018); Zeng et al. (2018); McClean et al. (2016); Farhi et al. (2014); Farhi and Neven (2018), TN sheds new light on the quantum computation of machine-learning tasks Schuld et al. (2015); Arunachalam and de Wolf (2017); Wiebe et al. (2012). For instance, a training algorithm Liu et al. (2017) inspired by the multiscale entanglement renormalization ansatz Cincio et al. (2008) allows using unitary gates or isometries to construct the TN for machine learning. Most recently, Huggins et al. proposed that the quantum circuit corresponding to a TN can be easily designed with one- and two-qubit unitary gates Huggins et al. (2019). The appealing perspective of TN in quantum machine learning urges us to understand more deeply the underlying characteristics of TN machine learning, and to develop novel TN approaches of higher performance. However, some fundamental questions remain untouched. Among others, it remains unclear what advantages TN machine learning in the quantum many-body space may offer, compared with models (e.g., neural networks) that learn the data in the original multiple-scalar space.

In this work, we propose a generative TN classification (GTNC) model for supervised learning (Fig. 1) in the many-body Hilbert space (denoted as $\mathcal{H}$). GTNC is formed by several generative TN's; each generative TN is a quantum state defined in $\mathcal{H}$ and is trained as the generative model for the corresponding class of images Han et al. (2018). For a given sample, the classification is done by finding the generative TN with the smallest Euclidean distance (equivalently, the largest fidelity) to this sample in $\mathcal{H}$. In other words, the classification is given by the boundary on which the Euclidean distances to the generative TN's are equal. With the MNIST Deng (2012) and Fashion-MNIST Xiao et al. (2017) datasets, GTNC shows remarkable efficiency and accuracy in comparison with several existing methods, including the discriminative TN machine-learning method Stoudenmire and Schwab (2016), support vector machines (SVM's) Cortes and Vapnik (1995), and naive Bayes classifiers Rish et al. (2001).

Figure 1: (a) The images to be classified (digits '4' and '9', for example). (b) A sketch of the distribution of the samples after mapping to the many-body Hilbert space by the given feature map. Since the exponentially large space cannot be shown in a figure, we use a three-dimensional space just for illustration. The blue and red crosses stand for digits '4' and '9', respectively. (c) The distribution of images after mapping to the space spanned by the logarithmic fidelities $\ln f_p$, with $f_p$ the fidelity between the sample and the generative TN of the $p$-th class. The dashed line gives the boundary for classification, on which we have $f_4 = f_9$.

Two key advantageous characteristics of GTNC are discussed. Firstly, by computing the Euclidean distances (i.e., fidelities) among the samples and the generative TN's, we show that the samples mapped to the many-body Hilbert space naturally cluster. It implies that the classification can be efficiently and accurately done in such a space. The clustering should be an advantage brought by the space $\mathcal{H}$ itself. Though the idea of mapping to such a space of much higher dimension is analogous to SVM, better accuracy is achieved with GTNC. Secondly, by comparing with a lazy-learning baseline model, we show that bounding the bond dimensions of the TN's to finite values corresponds to removing redundant information in the image recognition. The relation to quantum entanglement is also addressed.

II Generative tensor network classification algorithm for supervised learning

The training of GTNC is to obtain the generative TN $\Psi_p$ ($p = 1, \ldots, P$, with $P$ the total number of classes in the classification task) for each of the classes. We use the algorithm proposed by Han et al. Han et al. (2018). To begin with, one builds a one-to-one map called the feature map Stoudenmire and Schwab (2016); Novikov et al. (2016), which maps the images to a vector space known as the many-body Hilbert space (denoted as $\mathcal{H}$) in quantum physics. For example, the feature map that transforms the $n$-th pixel $x_n$ (normalized so that $0 \le x_n \le 1$) to a two-component vector can be written as

$\phi(x_n) = \left[ \cos\left(\frac{\pi}{2} x_n\right),\ \sin\left(\frac{\pi}{2} x_n\right) \right]. \quad (1)$

In this way, one image that consists of $N$ pixels is mapped to the direct product of $N$ vectors as $\Phi = \bigotimes_{n=1}^{N} \phi(x_n)$. Physically, $\Phi$ can be regarded as a product state of $N$ qubits. Each qubit has two components, equivalent to a spin-$1/2$. Note that it is possible to generalize the feature map to be $d$-component with $d > 2$. Then one image is mapped to a vector defined in the $d^N$-dimensional vector space.
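As a minimal illustration of the feature map in Eq. (1), the following Python sketch maps an image to its local two-component vectors (the function name and array layout are our own choices, not code from the paper):

import numpy as np

def feature_map(image):
    """Map N pixels (normalized to [0, 1]) to N two-component vectors, Eq. (1).

    Each pixel becomes a qubit-like vector [cos(pi*x/2), sin(pi*x/2)]; the whole
    image corresponds to the tensor product of the N vectors, i.e. a product
    state of N spin-1/2's in the 2^N-dimensional space H.
    """
    x = np.asarray(image, dtype=float).ravel()
    theta = 0.5 * np.pi * x
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)  # shape (N, 2)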

TN is utilized as the generative model of each class Verstraete et al. (2008); Cirac and Verstraete (2009b); Schollwöck (2011); Orús (2014). In fact, the generative models are quantum states of $N$ bodies defined in $\mathcal{H}$, which capture the joint probability distributions of the corresponding sample sets. In quantum many-body physics, TN has been shown to be an efficient and powerful tool for dealing with quantum many-body states, reducing the computational complexity from exponential to polynomial. In Ref. Han et al. (2018), the generative TN is trained with a gradient algorithm that minimizes a cost function based on the Kullback–Leibler divergence Kullback and Leibler (1951).

After training the generative TN's $\Psi_p$, a given sample can be classified by comparing the Euclidean distances in $\mathcal{H}$ between this sample and the $\Psi_p$'s. We choose the fidelity to measure the distance, which is defined as

$f_p = |\langle \Phi | \Psi_p \rangle|, \quad (2)$

with $\Phi$ the sample after the feature map. Note that in quantum information, fidelity is a measure of the distance between two quantum states. The classification is indicated by finding the largest fidelity, i.e., the predicted class is $\arg\max_p f_p$. One can find the pseudocode of GTNC in Sec. V.2.
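A minimal sketch of this classification rule, assuming each trained generative TN is stored as a matrix product state, i.e. a list of tensors of shape (chi_left, d, chi_right); the helper names below are ours, not from the paper:

import numpy as np

def mps_overlap(mps, local_vectors):
    """Overlap <Phi|Psi> between a product state Phi (given by its (N, d) local
    vectors) and an MPS Psi (a list of N tensors of shape (chi_l, d, chi_r))."""
    env = np.ones((1,))                              # left boundary
    for A, phi in zip(mps, local_vectors):
        env = np.einsum('l,ldr,d->r', env, A, phi)   # contract one site
    return env.item()                                # right boundary has size 1

def classify(sample_vectors, generative_mps_list):
    """GTNC rule: pick the class whose generative TN has the largest fidelity."""
    fidelities = [abs(mps_overlap(mps, sample_vectors))
                  for mps in generative_mps_list]
    return int(np.argmax(fidelities))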

III Experiments

III.1 GTNC: an adaptive generative classification model

Figure 2: The testing accuracy on the MNIST dataset using GTNC, the baseline model I, naive Bayes classifiers, and SVM's. $\chi$ is the parameter that determines the complexity of GTNC. The hyper-parameters of the SVM's are taken as follows: (1) loss=hinge, C=100, multi-class=crammer-singer, penalty=l2; (2) loss=hinge, C=1, multi-class=crammer-singer, penalty=l2; (3) C=100, kernel=sigmoid; (4) C=100, kernel=poly.

On the MNIST dataset, GTNC is compared with other well-established methods (Fig. 2), i.e., classical generative classifiers (naive Bayes classifiers), high-dimensional classifiers (SVM's), and the baseline model I, which is a lazy-learning model using the feature map without TN [see Eq. (3)]. For GTNC, different bond dimensions $\chi$ are taken; $\chi$ controls the number of variational parameters (see Sec. V.1 for details). We also test on Fashion-MNIST. The 10-class testing accuracy of GTNC reaches around , while for the SVM's it varies from to depending on the parameters Xiao et al. (2017). For the naive Bayes classifiers, the testing accuracy is no higher than for the Fashion-MNIST dataset.

The experiments shown in Fig. 2 reveal several advantages of GTNC. One is that the accuracy of GTNC significantly surpasses the naive Bayes classifiers. Note that it is usual to use discriminative models for classification, such as convolutional neural networks (CNN's). The state-of-the-art accuracies are reported by Ciresan et al. (2012) for MNIST and by Bhatnagar et al. (2017) for Fashion-MNIST, respectively. It is true that the accuracy of GTNC is competitive with but still lower than the best discriminative models. Nevertheless, prior knowledge is necessary for these discriminative models (such as the architecture and other hyper-parameters that can largely affect the results) to reach the best accuracy. For GTNC, in contrast, we use the same TN architecture (a 1D TN, the same as in Ref. Han et al. (2018)) and the same hyper-parameters (such as the feature map) for different datasets. We are optimistic that the accuracy of GTNC can be further improved by optimizing the architecture and hyper-parameters, which is however beyond the scope of the current work.

Secondly, GTNC possesses striking resemblance to, as well as essential differences from, the SVM's. The main idea of SVM is to map the samples to a much higher-dimensional space, where it becomes relatively easy to find the boundary for classification. For GTNC, the feature map maps the samples to the many-body Hilbert space, which is exponentially large. By training the generative TN's in such a space, the boundary for classification is found by computing the fidelity.

It is the underlying differences that make GTNC superior to SVM. The first difference concerns the kernel function and the space. In most cases of SVM, the mapping is implicit and determined by a positive-definite matrix that gives the inner products (and thus the distances) in the higher-dimensional space. This kernel matrix is calculated from the original data by a certain kernel function, such as the radial basis function kernel. Mercer's condition guarantees that the kernel function corresponds to a higher-dimensional space Mercer and Forsyth (1909). Since the mapping is implicit, it becomes extremely challenging to analyse how to improve the performance of SVM. The results strongly depend on the space to which the data are mapped and on the hyper-parameters such as the soft margin Cortes and Vapnik (1995). There is no general theory for finding the best parameters of SVM Leslie et al. (2001), which hinders the application of SVM to new challenging problems.

In comparison, GTNC is more universal and less parameter-dependent. The kernel function in GTNC is determined by the feature map and can be written explicitly; it satisfies Mercer's condition. For different datasets, we use the same feature map [see Eq. (1)] to transform the data to the higher-dimensional space. It is possible to optimize the kernel function of GTNC to further improve the performance.
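For concreteness, with the two-component map of Eq. (1) the induced kernel can be written down explicitly (this is a simple identity following from Eq. (1), not an additional result of the paper):

$K(\mathbf{x}, \mathbf{y}) = \langle \Phi(\mathbf{x}) | \Phi(\mathbf{y}) \rangle = \prod_{n=1}^{N} \left[ \cos\frac{\pi x_n}{2} \cos\frac{\pi y_n}{2} + \sin\frac{\pi x_n}{2} \sin\frac{\pi y_n}{2} \right] = \prod_{n=1}^{N} \cos\frac{\pi (x_n - y_n)}{2},$

which is the explicit inner product of a known feature map and therefore automatically satisfies Mercer's condition.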

The general strategies of GTNC and SVM are also different. GTNC is formed by several generative models, each of which learns the joint probability distribution of one class of samples. The classification is determined by taking the generative models as references. Such a strategy works well due to the clustering of the samples in $\mathcal{H}$, giving higher accuracy than the naive Bayes classifiers. It also avoids inputting the samples of all classes at the same time to train the classifier(s), which leads to higher efficiency compared with the discriminative algorithms. SVM, in contrast, directly searches for the classification boundary in the higher-dimensional space. This might also be one reason why the results of SVM largely depend on the chosen space and the hyper-parameters.

We note that when we build an SVM model with the kernel function induced by the feature map, the testing accuracy is extremely poor (no more than ). This suggests that the kernel from the feature map works much worse with the SVM algorithms than with the generative TN algorithm Han et al. (2018).
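To illustrate such a comparison, one could feed the explicit kernel above into a standard SVM implementation through a callable kernel; the sketch below assumes scikit-learn's SVC interface and flattened images normalized to [0, 1] (the paper does not specify its SVM setup at this level of detail):

import numpy as np
from sklearn.svm import SVC

def feature_map_kernel(X, Y):
    """Gram matrix K[i, j] = prod_n cos(pi/2 * (X[i, n] - Y[j, n])),
    i.e. the kernel induced by the feature map of Eq. (1)."""
    diff = X[:, None, :] - Y[None, :, :]                 # (n_x, n_y, N_pixels)
    # the product over hundreds of pixels is typically tiny and may underflow;
    # a log-domain evaluation (or subsampling) may be needed in practice
    return np.prod(np.cos(0.5 * np.pi * diff), axis=-1)

# hypothetical usage on (sub-sampled) MNIST-like data:
# svm = SVC(kernel=feature_map_kernel, C=100).fit(X_train, y_train)
# print(svm.score(X_test, y_test))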

Thirdly, the accuracy of GTNC also surpasses the baseline model I at a moderate bond dimension ($\chi$ far smaller than the dimension of $\mathcal{H}$). The baseline model I is a lazy-learning version of GTNC; the generative model of the $p$-th class can be defined as

$\tilde{\Psi}_p = \sum_{j=1}^{N_p} \Phi^{[j,p]}, \quad (3)$

with $N_p$ the number of samples in the $p$-th class and $\Phi^{[j,p]}$ the $j$-th vectorized sample of that class. It means $\tilde{\Psi}_p$ is simply the summation of all vectorized samples in the $p$-th class.

For classifying a given sample $\Phi$, we still use Eq. (2) to define the fidelity as $\tilde{f}_p = |\langle \Phi | \tilde{\Psi}_p \rangle|$. The classification of $\Phi$ is given by $\arg\max_p \tilde{f}_p$. Different from GTNC, we do not need to train $\tilde{\Psi}_p$ to classify. The fidelity can be directly calculated as $\tilde{f}_p = |\sum_{j=1}^{N_p} \langle \Phi | \Phi^{[j,p]} \rangle|$, which makes the baseline model I a lazy-learning model.
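A minimal sketch of the baseline model I, written directly from Eqs. (2) and (3); all names and array layouts below are our own illustrative choices:

import numpy as np

def overlap_product_states(vecs_a, vecs_b):
    """<Phi_a|Phi_b> for two product states given by their (N, d) local vectors:
    the product over pixels of the local inner products."""
    return np.prod(np.sum(vecs_a * vecs_b, axis=1))

def baseline_I_classify(sample_vecs, training_vecs_by_class):
    """Lazy-learning classification: the 'generative model' of class p is the
    (implicit) sum of its vectorized training samples, Eq. (3), so the fidelity
    of Eq. (2) reduces to a sum of overlaps with every sample of that class."""
    fidelities = []
    for class_vecs in training_vecs_by_class:   # one list of (N, d) arrays per class
        f = sum(overlap_product_states(sample_vecs, v) for v in class_vecs)
        fidelities.append(abs(f))
    return int(np.argmax(fidelities))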

Let us consider writing $\tilde{\Psi}_p$ in a TN form just like $\Psi_p$ in GTNC. It is expected that the bond dimension of $\tilde{\Psi}_p$ would be extremely large. In other words, $\Psi_p$ in GTNC can be understood as a finite-bond-dimension approximation of $\tilde{\Psi}_p$. Surprisingly, the accuracy of GTNC is higher than that of the baseline model I. It implies that by taking a moderate bond dimension, some redundant information is removed and a better classification can then be made.

The value of $\chi$ actually characterizes the capacity of quantum entanglement that the TN (state) can carry. The entanglement entropy of the TN here is the Rényi entropy of the dataset, which is defined as $S_\alpha = \frac{1}{1-\alpha} \ln \sum_i p_i^\alpha$, with $p_i$ the probability of the $i$-th sample and $\alpha$ a constant Rényi et al. (1961). The entanglement entropy corresponds to the case of $\alpha \to 1$. The Rényi entropy satisfies $S_\alpha \le \ln \chi$; in other words, by reducing $\chi$, the maximum of the Rényi entropy becomes smaller. The regularization process in GTNC (known as canonicalization Orús and Vidal (2008)) guarantees that one always discards the least-entangled basis states. A former work showed that the less-entangled sites (pixels) contain less important information, which can be discarded without harming the accuracy too much Liu et al. (2018). It means that TN machine learning can be implemented more efficiently with a much smaller number of features. In accordance, our experiments demonstrate that the important information is stored in the highly entangled basis. It is suggested that, with the same number of features, the number of variational parameters in the TN can be safely reduced by removing the less-entangled basis states. This shows that the over-fitting of TN machine learning can be avoided in a controllable manner according to the quantum entanglement.
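A sketch of the generic MPS truncation step that realizes this idea: the singular values on a virtual bond are sorted, the smallest (least-entangled) ones are discarded, and the resulting entropy is bounded by $\ln \chi$. This is standard MPS canonicalization-plus-truncation, consistent with the discussion above but not the authors' exact routine:

import numpy as np

def truncate_bond(theta, chi_max):
    """Split a two-site block theta of shape (left, d, d, right) back into two
    MPS tensors, keeping only the chi_max largest singular values.

    The discarded singular values correspond to the least-entangled basis
    states; the weights p_k = s_k**2 / sum(s**2) of the kept ones give
    entropies bounded by ln(chi_max) after the truncation.
    """
    l, d1, d2, r = theta.shape
    u, s, vh = np.linalg.svd(theta.reshape(l * d1, d2 * r), full_matrices=False)
    chi = min(chi_max, int(np.count_nonzero(s > 1e-14)))
    u, s, vh = u[:, :chi], s[:chi], vh[:chi, :]
    p = s**2 / np.sum(s**2)
    entropy = -np.sum(p * np.log(p))                 # von Neumann (alpha -> 1)
    A = u.reshape(l, d1, chi)                        # left tensor
    B = (np.diag(s) @ vh).reshape(chi, d2, r)        # right tensor
    return A, B, entropy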

III.2 Natural clustering

To further understand GTNC, we calculate the distances between the training samples in different spaces (Fig. 3). The Euclidean distance between the $p$-th and $q$-th classes in the original multi-scalar space is defined as

$D_{pq} = \frac{1}{N_p N_q} \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} \left| \mathbf{x}^{[i,p]} - \mathbf{x}^{[j,q]} \right|, \quad (4)$

where the pixels are normalized as $0 \le x_n \le 1$. In the many-body Hilbert space $\mathcal{H}$, the fidelity is used to represent the distance between two classes, which is defined as

$F_{pq} = \frac{1}{N_p N_q} \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} \langle \Phi^{[i,p]} | \Phi^{[j,q]} \rangle. \quad (5)$

$F_{pq}$ characterizes the closeness of two classes of images.
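The quantities in Eqs. (4) and (5) can be estimated with a short script; the pairwise-averaging convention and array shapes below are our reading of the text, not code released with the paper:

import numpy as np

def class_distance(X_p, X_q):
    """Average Euclidean distance of Eq. (4); X_p, X_q hold the normalized
    pixel vectors of the p-th and q-th classes, one sample per row."""
    diff = X_p[:, None, :] - X_q[None, :, :]
    return np.mean(np.linalg.norm(diff, axis=-1))

def class_fidelity(V_p, V_q):
    """Average fidelity of Eq. (5); V_p, V_q have shape (n_samples, N, 2),
    the feature-mapped samples. The overlap of two product states is the
    product over pixels of the local inner products (non-negative for the
    cos/sin map of Eq. (1))."""
    local = np.einsum('ind,jnd->ijn', V_p, V_q)     # per-pixel inner products
    overlaps = np.prod(local, axis=-1)              # <Phi_i|Phi_j>
    return np.mean(overlaps)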

Figure 3: (a) Euclidean distances between samples of the MNIST dataset in the original space. (b) Fidelities between samples of the MNIST dataset in the exponentially large space $\mathcal{H}$.

In the original space, the distances are of the same order of magnitude for samples of the same class and samples of two different classes [Fig. 3(a)]. It means the distribution of the samples in this space is more or less random. In the many-body Hilbert space $\mathcal{H}$, the fidelity between different classes is far lower than that within the same class (the diagonal terms $F_{pp}$) [Fig. 3(b)]. In other words, the distance between samples of different classes is on average much larger than that between samples of the same class. This means the samples of the same class cluster in $\mathcal{H}$, which makes classification much easier.

The clustering is also consistent with the fair accuracy of the baseline model I. Let us rewrite $F_{pq}$ in terms of the generative TN's of the baseline model I [Eq. (3)]: we have $F_{pq} = \langle \tilde{\Psi}_p | \tilde{\Psi}_q \rangle / (N_p N_q)$, which is, up to normalization, the fidelity of the two generative TN's. As the samples are clustering, the distance from a given sample to the correct $\tilde{\Psi}_p$ should be much smaller than that to a wrong one. Thus, the classification can be accurately done by comparing the distances. Note that the Euclidean distance in $\mathcal{H}$ can be deduced from the fidelity, since $|\tilde{\Psi}_p - \tilde{\Psi}_q|^2 = |\tilde{\Psi}_p|^2 + |\tilde{\Psi}_q|^2 - 2 \langle \tilde{\Psi}_p | \tilde{\Psi}_q \rangle$. We do not enforce the normalization of $\tilde{\Psi}_p$, though each vectorized sample is normalized, $|\Phi^{[j,p]}| = 1$, by construction of the feature map.

We also compare GTNC with the existing discriminative TN model (dubbed the baseline model II) Stoudenmire and Schwab (2016). The pseudocode can be found in Sec. V.2. Our experiments show that the efficiency of GTNC is significantly higher than that of the baseline model II: on the MNIST dataset with the same bond dimension, the accuracy of GTNC converges in a small fraction of the CPU time required by the baseline model II (the CPU model is Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz). The efficiency differs due to the strategy. For GTNC, one inputs only one class of images to train each generative TN, and the tensors converge within a small number of iterations. For the baseline model II, one inputs the samples of all classes to train the classifier, and it needs many more iterations to converge. The complexity analysis shows that, even within one iteration, the computational complexity of GTNC is much lower than that of the baseline model II, because each sample only needs to be input into the corresponding tensor network in GTNC.

IV Discussion and Perspective

In this work, we propose the generative TN classification (GTNC) method and, based on it, investigate several fundamental issues of TN machine learning, i.e., the roles played by the feature map and by the bond dimensions of the TN representation. The main contributions of this work are summarized in the following.

  • GTNC is proposed as a generative model for supervised machine learning. The central idea is to individually train the generative TN's in the many-body Hilbert space for samples with different labels, and to classify by comparing the distances. The performance of GTNC surpasses the existing (discriminative) TN machine-learning methods, the naive Bayes method (which is also a generative classifier), and the support vector machine.

  • The role of the feature map is revealed. We show that the feature map of the TN machine-learning methods maps the samples to an exponentially large vector space (called the many-body Hilbert space in physics). In such a space, the samples naturally cluster, so that the classification can be easily and accurately done with the help of the generative TN's.

  • The relation between entanglement and machine learning is discussed, which is useful for avoiding over-fitting in a controllable way. The experiments comparing GTNC with the baseline model I imply that the important information is stored in the highly entangled basis of the generative TN's. By keeping a proper number of the relatively highly entangled basis states, the accuracy surpasses the baseline model I, where all basis states are taken into consideration.

Our work contributes to answering an important question: whether there exist any advantages in solving machine-learning problems with TN in the exponentially large many-body Hilbert space, compared with classical machine-learning models in the multiple-scalar space. While previous simulations of TN machine-learning algorithms have given considerably promising results, our experiments give a positive answer in a more explicit way. Such investigations will strongly motivate the development of quantum computation for machine learning in the many-body Hilbert space, such as machine-learning schemes based on quantum circuits Huggins et al. (2019). The benefits, or "quantum supremacy", will not be limited to quantum acceleration, but will also include the development of more universal, powerful, and well-controlled machine-learning models.

V Methods

V.1 Tensor network machine

The functions (vectors or operators) defined in this exponentially large space might have larger potential for generating and learning than multi-scalar functions (such as neural networks). One problem is how to handle such an exponentially large space, which is in general intractable for classical computers.

Quantum many-body physics provides us with a solution called tensor network (TN), which reduces the cost from exponential to polynomial or even linear Verstraete et al. (2008); Cirac and Verstraete (2009b); Schollwöck (2011); Orús (2014). TN is an efficient representation of one (large) tensor by writing it as the contraction of several smaller tensors. The matrix product state (MPS) Perez-Garcia et al. (2006) is one form of TN (Fig. 4), which has been utilized in the machine-learning field Han et al. (2018); Stoudenmire and Schwab (2016). An MPS formed by $N$ tensors can be written as

$\Psi_{s_1 s_2 \cdots s_N} = \sum_{a_1 \cdots a_{N-1}} A^{[1]}_{s_1 a_1} A^{[2]}_{a_1 s_2 a_2} \cdots A^{[N]}_{a_{N-1} s_N}. \quad (6)$

$A^{[n]}$ is a third-order tensor located on the $n$-th site; the indexes $a_n$ and $s_n$ are dubbed virtual and physical bonds, respectively. The dimension $\chi$ of the virtual bonds is called the bond dimension of the MPS, which determines the number of parameters and the upper bound of the entanglement that the MPS can capture (see for example Ref. Tagliacozzo et al. (2008)). For simplicity, we assume all elements of the tensors are real numbers. It is easy to see that by contracting all the virtual bonds, $\Psi$ is a vector in the space $\mathcal{H}$, whose dimension increases exponentially with $N$. Thanks to the TN structure of $\Psi$, the number of parameters in the MPS is about $N d \chi^2$, which scales only linearly with $N$.
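A small numerical illustration of Eq. (6): a random MPS with modest $N$ is contracted into the full $d^N$-dimensional vector, making the contrast between the exponential dimension and the roughly $N d \chi^2$ parameters explicit (sizes are kept small enough to materialize the full vector):

import numpy as np

def random_mps(N, d=2, chi=4, seed=0):
    """List of N tensors A[n] with shape (chi_left, d, chi_right);
    the boundary bonds have dimension 1."""
    rng = np.random.default_rng(seed)
    dims = [1] + [chi] * (N - 1) + [1]
    return [rng.standard_normal((dims[n], d, dims[n + 1])) for n in range(N)]

def mps_to_vector(mps):
    """Contract all virtual bonds to recover the full d**N-component vector."""
    psi = mps[0]                                      # shape (1, d, chi)
    for A in mps[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))  # join neighboring virtual bonds
    return psi.reshape(-1)                            # length d**N

mps = random_mps(N=10)
print(mps_to_vector(mps).size)                        # 2**10 = 1024 components
print(sum(A.size for A in mps))                       # roughly N * d * chi**2 parameters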

Figure 4: Illustration of tensor network.

The gradient-descent algorithm is used to optimize the MPS. For example, the $n$-th tensor of the MPS is updated as

$A^{[n]} \leftarrow A^{[n]} - \eta \frac{\partial L}{\partial A^{[n]}}, \quad (7)$

where $\eta$ is the step size of the gradient-descent algorithm and $L$ is the cost function. All tensors are updated iteratively until the preset convergence is reached. One may refer to Refs. Han et al. (2018); Stoudenmire and Schwab (2016) for more details.
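A schematic Python version of the sweep in Eq. (7). The gradient is taken by finite differences of an arbitrary cost callable purely for illustration (the paper uses analytic gradients of the costs in Sec. V.2), and the toy cost at the end is only a stand-in to make the sketch runnable:

import numpy as np

def sweep_update(mps, cost, eta=1e-2, eps=1e-6):
    """One sweep of the tensor-by-tensor update of Eq. (7):
    A[n] <- A[n] - eta * dL/dA[n]."""
    for n in range(len(mps)):
        A = mps[n]
        grad = np.zeros_like(A)
        base = cost(mps)
        for idx in np.ndindex(A.shape):       # finite-difference gradient
            A[idx] += eps
            grad[idx] = (cost(mps) - base) / eps
            A[idx] -= eps
        mps[n] = A - eta * grad               # gradient-descent step, Eq. (7)
    return mps

# stand-in cost for a quick check (squared norm of the represented vector);
# in GTNC it would be the negative log-likelihood of one class of samples
def toy_cost(mps):
    v = mps[0]
    for A in mps[1:]:
        v = np.tensordot(v, A, axes=([-1], [0]))
    return float(np.sum(v ** 2))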

V.2 Pseudocode of GTNC and baseline model II

In GTNC, the algorithm for training the generative MPS's follows Ref. Han et al. (2018). The baseline model II algorithm follows Ref. Stoudenmire and Schwab (2016). The pseudocodes are shown in Algorithm 1 and Algorithm 2.

  Algorithm 1 - GTNC (training the generative MPS of one class)

Require: $\eta$: step size; $\beta$: decay rate for the step size; $L$: the cost function of Eq. (8) [a]; $\Psi_p = \{A^{[1]}, \ldots, A^{[N]}\}$: initial MPS tensors
1: while not converged do
2:     for $n = 1$ to $N$ do
3:         compute the gradient $G^{[n]} = \partial L / \partial A^{[n]}$
4:         $A^{[n]} \leftarrow A^{[n]} - \eta G^{[n]}$
5:         if $n < N$ then
6:             decompose $A^{[n]} = QR$ ($Q$ is a unitary (isometric) matrix and $R$ is an upper-triangular matrix satisfying $A^{[n]} = QR$)
7:             $A^{[n]} \leftarrow Q$; absorb $R$ into $A^{[n+1]}$ to keep the MPS canonical
8:         end if
9:     end for
10:     if the cost $L$ does not decrease then
11:         $\eta \leftarrow \beta \eta$
12:     end if
13: end while
14: return the trained generative MPS $\Psi_p$

  Algorithm 2 - baseline model II (discriminative MPS)

Require: $\eta$: step size; $\beta$: decay rate for the step size; $L$: the cost function of Eq. (9) [b]; the MPS tensors, one of which carries the label index: initial parameter vector
1: while not converged do
2:     for each pair of neighboring sites $(n, n+1)$ along the sweep do
3:         contract $A^{[n]}$, $A^{[n+1]}$, and the label index into a two-site tensor $\Theta$
4:         compute the gradient $\partial L / \partial \Theta$ and update $\Theta \leftarrow \Theta - \eta \, \partial L / \partial \Theta$
5:         decompose $\Theta = U S V^{\dagger}$ by SVD ($U$ and $V$ are unitary matrices, $S$ is a diagonal matrix), keeping the $\chi$ largest singular values
6:         $A^{[n]} \leftarrow U$; $A^{[n+1]} \leftarrow S V^{\dagger}$, moving the label index to the next pair [the process of moving the label index is shown in Fig. 5]
7:     end for
8:     if the cost $L$ does not decrease then
9:         $\eta \leftarrow \beta \eta$
10:     end if
11: end while
12: return the trained discriminative MPS

Note: [a] The cost function is chosen as the negative log-likelihood of the samples in the corresponding class Han et al. (2018),

$L = -\frac{1}{N_p} \sum_{j=1}^{N_p} \ln \frac{|\langle \Phi^{[j,p]} | \Psi_p \rangle|^2}{\langle \Psi_p | \Psi_p \rangle}. \quad (8)$
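A sketch of how the cost of Eq. (8) could be evaluated for an MPS stored as a list of (chi_left, d, chi_right) tensors; the helper functions are our own and repeat the overlap contraction sketched in Sec. II:

import numpy as np

def mps_overlap(mps, local_vectors):
    """<Phi|Psi> between a feature-mapped sample (shape (N, d)) and an MPS."""
    env = np.ones((1,))
    for A, phi in zip(mps, local_vectors):
        env = np.einsum('l,ldr,d->r', env, A, phi)
    return env.item()

def mps_norm_sq(mps):
    """<Psi|Psi>, contracting the MPS with itself site by site."""
    env = np.ones((1, 1))
    for A in mps:
        env = np.einsum('ab,adr,bds->rs', env, A, A)
    return env.item()

def nll_cost(mps, samples):
    """Negative log-likelihood of Eq. (8); samples has shape (n, N, d)."""
    z = mps_norm_sq(mps)
    overlaps = np.array([mps_overlap(mps, v) for v in samples])
    return float(-np.mean(np.log(overlaps**2 / z)))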

Note: [b] The cost function is chosen as the quadratic cost between the outputs of the discriminative MPS and the one-hot labels Stoudenmire and Schwab (2016),

$L = \frac{1}{2} \sum_{j} \sum_{\ell} \left( f^{\ell}(\Phi^{[j]}) - \delta_{\ell \ell_j} \right)^2, \quad (9)$

where $f^{\ell}(\Phi^{[j]})$ is the contraction of the MPS (carrying the label index $\ell$) with the $j$-th feature-mapped sample, and $\ell_j$ is its true label.
Figure 5: Illustration of moving label index.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grants No. 11834014 and No. 11474279), the National Key R&D Program of China (Grant No. 2018YFA0305800), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000). S.J.R. is supported by the Beijing Natural Science Foundation (Grants No. 1192005 and No. Z180013) and the Foundation of Beijing Education Committees under Grant No. KZ201810028043.

References

  • Biamonte et al. (2017) J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Nature 549, 195 (2017).
  • Havlicek et al. (2019) V. Havlicek, A. D. Córcoles, K. Temme, A. W. Harrow, J. M. Chow, and J. M. Gambetta, Nature 567, 209 (2019).
  • Schuld and Killoran (2019) M. Schuld and N. Killoran, Phys. Rev. Lett. 122, 040504 (2019).
  • Cichocki et al. (2016) A. Cichocki, N. Lee, I. Oseledets, A.-H. Phan, Q. Zhao, and D. P. Mandic, Foundations and Trends® in Machine Learning 9, 249 (2016), ISSN 1935-8237.
  • Cichocki et al. (2017) A. Cichocki, A.-H. Phan, Q. Zhao, N. Lee, I. Oseledets, M. Sugiyama, and D. P. Mandic, Foundations and Trends® in Machine Learning 9, 431 (2017), ISSN 1935-8237.
  • Glasser et al. (2018) I. Glasser, N. Pancotti, and J. I. Cirac, arXiv:1806.05964 (2018).
  • Verstraete et al. (2008) F. Verstraete, V. Murg, and J. I. Cirac, Advances in Physics 57, 143 (2008).
  • Orús (2014) R. Orús, Annals of Physics 349, 117 (2014), ISSN 0003-4916.
  • Ran et al. (2017) S.-J. Ran, E. Tirrito, C. Peng, X. Chen, G. Su, and M. Lewenstein, arXiv:1708.09213 (2017).
  • Evenbly and Vidal (2011) G. Evenbly and G. Vidal, Journal of Statistical Physics 145, 891 (2011), ISSN 1572-9613.
  • Bridgeman and Chubb (2017) J. C. Bridgeman and C. T. Chubb, Journal of Physics A: Mathematical and Theoretical 50, 223001 (2017).
  • Schollwöck (2011) U. Schollwöck, Annals of Physics 326, 96 (2011), ISSN 0003-4916, january 2011 Special Issue.
  • Cirac and Verstraete (2009a) J. I. Cirac and F. Verstraete, Journal of Physics A: Mathematical and Theoretical 42, 504004 (2009a).
  • Stoudenmire and Schwab (2016) E. Stoudenmire and D. J. Schwab, in Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016), pp. 4799–4807.
  • Liu et al. (2017) D. Liu, S.-J. Ran, P. Wittek, C. Peng, R. B. García, G. Su, and M. Lewenstein, arXiv:1710.04833 (2017).
  • Liu et al. (2018) Y. Liu, X. Zhang, M. Lewenstein, and S.-J. Ran, arXiv:1803.09111 (2018).
  • Han et al. (2018) Z.-Y. Han, J. Wang, H. Fan, L. Wang, and P. Zhang, Phys. Rev. X 8, 031012 (2018).
  • Cheng et al. (2018) S. Cheng, J. Chen, and L. Wang, Entropy 20 (2018), ISSN 1099-4300.
  • Pestun and Vlassopoulos (2017) V. Pestun and Y. Vlassopoulos, arXiv:1710.10248 (2017).
  • Cheng et al. (2019) S. Cheng, L. Wang, T. Xiang, and P. Zhang, arXiv:1901.02217 (2019).
  • Chen et al. (2018) Y. W. Chen, K. Guo, and Y. Pan, in 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC) (2018), pp. 311–315.
  • Guo et al. (2018) C. Guo, Z. Jie, W. Lu, and D. Poletti, Phys. Rev. E 98, 042114 (2018).
  • Kak (1995) S. Kak, Information Sciences 83, 143 (1995).
  • Benedetti et al. (2018) M. Benedetti, D. Garcia-Pintos, Y. Nam, and A. Perdomo-Ortiz, arXiv:1801.07686 (2018).
  • Mitarai et al. (2018) K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, arXiv:1803.00745 (2018).
  • Rebentrost et al. (2014) P. Rebentrost, M. Mohseni, and S. Lloyd, Phys. Rev. Lett. 113, 130503 (2014).
  • Lloyd et al. (2013) S. Lloyd, M. Mohseni, and P. Rebentrost, arXiv:1307.0411 (2013).
  • Liu and Wang (2018) J.-G. Liu and L. Wang, Phys. Rev. A 98, 062324 (2018).
  • Zeng et al. (2018) J. Zeng, Y. Wu, J.-G. Liu, L. Wang, and J. Hu, arXiv:1808.03425 (2018).
  • McClean et al. (2016) J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, New Journal of Physics 18, 023023 (2016).
  • Farhi et al. (2014) E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028 (2014).
  • Farhi and Neven (2018) E. Farhi and H. Neven, arXiv:1802.06002 (2018).
  • Schuld et al. (2015) M. Schuld, I. Sinayskiy, and F. Petruccione, Contemporary Physics 56, 172 (2015).
  • Arunachalam and de Wolf (2017) S. Arunachalam and R. de Wolf, SIGACT News 48, 41 (2017), ISSN 0163-5700.
  • Wiebe et al. (2012) N. Wiebe, D. Braun, and S. Lloyd, Phys. Rev. Lett. 109, 050505 (2012).
  • Cincio et al. (2008) L. Cincio, J. Dziarmaga, and M. M. Rams, Phys. Rev. Lett. 100, 240603 (2008).
  • Huggins et al. (2019) W. Huggins, P. Patil, B. Mitchell, K. B. Whaley, and E. M. Stoudenmire, Quantum Science and Technology 4, 024001 (2019).
  • Deng (2012) L. Deng, IEEE Signal Processing Magazine 29, 141 (2012), ISSN 1053-5888.
  • Xiao et al. (2017) H. Xiao, K. Rasul, and R. Vollgraf, arXiv:1708.07747 (2017).
  • Cortes and Vapnik (1995) C. Cortes and V. Vapnik, Machine Learning 20, 273 (1995), ISSN 1573-0565.
  • Rish et al. (2001) I. Rish et al., in IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence (2001), vol. 3, pp. 41–46.
  • Novikov et al. (2016) A. Novikov, M. Trofimov, and I. Oseledets, arXiv:1605.03795 (2016).
  • Cirac and Verstraete (2009b) J. I. Cirac and F. Verstraete, Journal of Physics A: Mathematical and Theoretical 42, 504004 (2009b).
  • Kullback and Leibler (1951) S. Kullback and R. A. Leibler, The Annals of Mathematical Statistics 22, 79 (1951), ISSN 00034851.
  • Ciresan et al. (2012) D. C. Ciresan, U. Meier, and J. Schmidhuber, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3642–3649 (2012).

  • Bhatnagar et al. (2017) S. Bhatnagar, D. Ghosal, and M. H. Kolekar, in 2017 Fourth International Conference on Image Information Processing (ICIIP) (2017), pp. 1–6.
  • Mercer and Forsyth (1909) J. Mercer and A. R. Forsyth, Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 209, 415 (1909).
  • Leslie et al. (2001) C. Leslie, E. Eskin, and W. S. Noble, in Biocomputing 2002 (World Scientific, 2001), pp. 564–575.
  • Rényi et al. (1961) A. Rényi et al., in Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics (The Regents of the University of California, 1961).
  • Orús and Vidal (2008) R. Orús and G. Vidal, Phys. Rev. B 78, 155117 (2008).
  • Perez-Garcia et al. (2006) D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, arXiv preprint quant-ph/0608197 (2006).
  • Tagliacozzo et al. (2008) L. Tagliacozzo, T. R. de Oliveira, S. Iblisdir, and J. I. Latorre, Phys. Rev. B 78, 024410 (2008).