
Unsupervised Generative Modeling Using Matrix Product States

by   Zhao-Yu Han, et al.

Generative modeling, which learns the joint probability distribution from training data and generates samples according to it, is an important task in machine learning and artificial intelligence. Inspired by the probabilistic interpretation of quantum physics, we propose a generative model using matrix product states, a tensor network originally proposed for describing (particularly one-dimensional) entangled quantum states. Our model enjoys efficient learning by utilizing the density matrix renormalization group method, which allows dynamic adjustment of the dimensions of the tensors, and offers an efficient direct sampling approach, Zipper, for generative tasks. We apply our method to generative modeling of several standard datasets, including the principled Bars and Stripes, random binary patterns and the MNIST handwritten digits, to illustrate the ability of our model, and discuss features as well as drawbacks of our model compared with popular generative models such as the Hopfield model, Boltzmann machines and generative adversarial networks. Our work sheds light on many interesting directions for future exploration of quantum-inspired algorithms for unsupervised machine learning, which could possibly be realized by a quantum device.





I Introduction

Generative modeling, a typical example of unsupervised learning that makes use of huge amounts of unlabeled data, lies at the heart of the rapid development of modern machine learning techniques LeCun et al. (2015). Different from discriminative tasks such as pattern recognition, the goal of generative modeling is to model the probability distribution of data and thus be able to generate new samples according to the distribution. At the research frontier of generative modeling, it is used for finding good data representations and dealing with tasks with missing data. Popular generative machine learning models include Boltzmann Machines (BM) Ackley et al. (1985); Smolensky (1986) and their generalizations Salakhutdinov (2015), variational autoencoders (VAE) Kingma and Welling (2013), autoregressive models Uria et al. (2016); Oord et al. (2016), normalizing flows Dinh et al. (2014, 2016); Rezende and Mohamed (2015), and generative adversarial networks (GAN) Goodfellow et al. (2014). In generative model design, one tries to balance representational power against the efficiency of learning and sampling.

There is a long history of interplay between generative modeling and statistical physics. Some celebrated models, such as the Hopfield model Hopfield (1982) and the Boltzmann machine Ackley et al. (1985); Smolensky (1986), are closely related to the Ising model and its inverse version, which learns the couplings in the model based on given training configurations Kappen and de Borja Rodríguez Ortiz (1998); Roudi et al. (2009).

The task of generative modeling also shares similarities with quantum physics in the sense that both try to model probability distributions in an immense space. Precisely speaking, it is wavefunctions that are modeled in quantum physics, and probability distributions are given by their squared norms according to Born's statistical interpretation. Modeling probability distributions in this way is fundamentally different from the traditional statistical physics perspective. Hence we may refer to probability models which exploit quantum state representations as "Born Machines". Various ansatze have been developed to express quantum states, such as variational Monte Carlo Foulkes et al. (2001), tensor network (TN) states and, recently, artificial neural networks Carleo and Troyer (2017). In fact, physical systems like quantum circuits are also promising candidates for implementing Born Machines.

In the past decades, tensor network states and algorithms have been shown to be an incredibly potent tool set for modeling many-body quantum states Schollwöck (2011); Orús (2014). The success of the TN description can be theoretically justified from a quantum information perspective Hastings (2007); Eisert et al. (2010). In parallel to quantum physics applications, tensor decompositions and tensor networks have also been applied in a broader context by the machine learning community for feature extraction, dimensionality reduction and analyzing the expressibility of deep neural networks Bengua et al. (2015); Novikov et al. (2014, 2015); Cohen et al. (2015); Cichocki et al. (2016); Levine et al. (2017).

In particular, the matrix product state (MPS) is a kind of TN in which the tensors are arranged in a one-dimensional geometry Perez-Garcia et al. (2007). The same representation is referred to as the tensor train decomposition in the applied math community Oseledets (2011). Despite its simple structure, MPS can represent a large number of quantum states extremely well. The MPS representation of ground states has been proven to be efficient for one-dimensional gapped local Hamiltonians Landau et al. (2015). In practice, optimization schemes for MPS such as the density-matrix renormalization group (DMRG) White (1992) have been successful even for some quantum systems in higher dimensions Stoudenmire and White (2012). Some recent works extended the application of MPS to machine learning tasks such as pattern recognition Stoudenmire and Schwab (2016), classification Novikov et al. (2016) and language modeling Gallego and Orus (2017). Recent efforts have also drawn connections between Boltzmann Machines and tensor networks Chen et al. (2018).

In this paper, building on the connection between unsupervised generative modeling and quantum physics, we employ MPS as a model to learn the probability distribution of given data, with an algorithm that resembles DMRG White (1992). Compared with statistical-physics-based models such as the Hopfield model Hopfield (1982) and the inverse Ising model, MPS exhibits a much stronger learning ability, which can grow adaptively by increasing the bond dimensions of the MPS. The MPS model also enjoys a direct sampling method Ferris and Vidal (2012) that is much more efficient than that of Boltzmann machines, which require a Markov chain Monte Carlo (MCMC) process for data generation. Compared with popular generative models such as GAN, our model offers a more efficient way to reconstruct and denoise from an initial (noisy) input using the direct sampling algorithm, whereas in GAN mapping an image back to the latent space is not straightforward.

The rest of the paper is organized as follows. In Sec. II we present our model, the training algorithm and the direct sampling method. In Sec. III we apply our model to three datasets: Bars and Stripes for a proof-of-principle demonstration, random binary patterns for illustrating the capacity, and the MNIST handwritten digits for showing the generalization ability of the MPS model in unsupervised tasks such as reconstruction of images. Finally, Sec. IV discusses future prospects of generative modeling using more general tensor networks and quantum circuits.

II MPS for Unsupervised Learning

The goal of unsupervised generative modeling is to model the joint probability distribution of given data. With a trained model, one can then generate new samples from the learned probability distribution. Generative modeling finds wide application in dimensionality reduction, feature detection, clustering and recommendation systems Goodfellow et al. (2016). In this paper, we consider a data set T consisting of binary strings v ∈ {0, 1}^N, which are potentially repeated and can be mapped to basis vectors of a Hilbert space of dimension 2^N.


The probabilistic interpretation of quantum mechanics Born (1926) naturally suggests modeling the data distribution with a quantum state. Suppose we encode the probability distribution into a quantum wavefunction Ψ(v); a measurement will collapse it and generate a result v with a probability proportional to |Ψ(v)|². Inspired by the generative aspects of quantum mechanics, we represent the model probability distribution by

P(v) = |Ψ(v)|² / Z,   (1)

where Z = Σ_v |Ψ(v)|² is the normalization factor. We also refer to it as the partition function, to draw an analogy with energy based models LeCun et al. (2006). In general the wavefunction can be complex valued, but in this work we restrict it to be real valued. Representing a probability density using the square of a function was also put forward in earlier works Stoudenmire and Schwab (2016); Zhao and Jaeger (2010); Bailly (2011). These approaches ensure the positivity of the probability and naturally admit a quantum mechanical interpretation.
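Born's rule as a probability model can be sketched in a few lines. The snippet below uses a dense table of real amplitudes for a tiny system; all names and sizes are illustrative, not the paper's code.

```python
import numpy as np

# Born machine over N = 3 bits with a dense table of amplitudes.
# Born's rule (Eq. (1)) turns unnormalized amplitudes into probabilities.
N = 3
rng = np.random.default_rng(0)
psi = rng.normal(size=2**N)   # real-valued, unnormalized amplitudes
Z = np.sum(psi**2)            # partition function
p = psi**2 / Z                # P(v) = psi(v)^2 / Z

assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```

The squared amplitudes guarantee positivity, so no extra constraint is needed to make p a valid distribution.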

II.1 Matrix Product States

Quantum physicists and chemists have developed many efficient classical representations of quantum wavefunctions. A number of these representations and algorithms can be adopted for efficient probabilistic modeling. Here, we parametrize the wavefunction as an MPS:

Ψ(v_1, v_2, …, v_N) = Tr(A^{(1)v_1} A^{(2)v_2} ⋯ A^{(N)v_N}),   (2)

where each A^{(k)v_k} is a D_{k−1} × D_k matrix, and D_0 = D_N is demanded to close the trace. For the binary case considered here, there are 2 Σ_k D_{k−1} D_k parameters on the right-hand side of Eq. (2). The representational power of MPS is related to the Von Neumann entanglement entropy of the quantum state, defined as S = −Tr(ρ ln ρ). Here we divide the variables into two groups, and ρ is the reduced density matrix of one subsystem. The entanglement entropy sets a lower bound on the bond dimension at the division. Any probability distribution of an N-bit system can be described by an MPS as long as its bond dimensions are free from any restriction. The inductive bias of using an MPS with limited bond dimensions comes from dropping the minor components of the entanglement spectrum. Therefore, as the bond dimensions increase, an MPS enhances its ability to parametrize complicated functions. See Schollwöck (2011) and Orús (2014) for recent reviews of MPS and its applications to quantum many-body systems.
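To make Eq. (2) concrete, here is a minimal NumPy sketch of an open-boundary MPS and its amplitude evaluation. Shapes, seeds and names are illustrative assumptions, not the paper's code.

```python
import itertools
import numpy as np

# A small random MPS with boundary dimension 1. A[k] has shape
# (D_{k-1}, 2, D_k); fixing the physical index v_k selects the matrix
# A[k][:, v_k, :] appearing in the product of Eq. (2).
rng = np.random.default_rng(1)
N, D = 4, 3
dims = [1] + [D] * (N - 1) + [1]
A = [rng.normal(size=(dims[k], 2, dims[k + 1])) for k in range(N)]

def amplitude(v):
    """Psi(v): product of the matrices selected by the bits of v."""
    m = A[0][:, v[0], :]
    for k in range(1, N):
        m = m @ A[k][:, v[k], :]
    return m[0, 0]

# Brute-force partition function (exponential in N; illustration only --
# the canonical form described below makes this linear in N).
Z = sum(amplitude(v)**2 for v in itertools.product((0, 1), repeat=N))
```

With D = 3 and N = 4 the model stores 2·(3+9+9+3) numbers, far fewer than the 2^N entries of a dense amplitude table for large N.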

In practice, it is convenient to use an MPS with open boundary conditions, D_0 = D_N = 1, which reduces the left- and right-most matrices to vectors White (1992). In this case, Eq. (2) can be drawn schematically as a tensor-network diagram (Eq. (3)), in which the blocks denote the tensors and the connected lines indicate tensor contraction over virtual indices, while the dangling vertical bonds denote physical indices. We refer to Schollwöck (2011); Orús (2014) for an introduction to these graphical notations of TN. Henceforth, we shall present formulae with the more intuitive graphical notation wherever possible.

The MPS representation has gauge degrees of freedom, which allows one to restrict the tensors with canonical conditions. We remark that in our setting of generative modeling, the canonical form significantly benefits the computation of the exact partition function Z. More details about the canonical condition and the calculation of Z can be found in Appendix A.
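Even without the canonical form, Z can be contracted with cost linear in the number of sites by sweeping a "double layer" environment across the chain. The sketch below (hypothetical shapes and names) checks the linear-cost contraction against the brute-force sum.

```python
import itertools
import numpy as np

# Z = sum_v Psi(v)^2 computed site by site: at each step the
# environment E carries the contraction of all sites to the left.
rng = np.random.default_rng(2)
N, D = 4, 3
dims = [1] + [D] * (N - 1) + [1]
A = [rng.normal(size=(dims[k], 2, dims[k + 1])) for k in range(N)]

def partition_function(mps):
    E = np.ones((1, 1))
    for Ak in mps:
        # E_{c,d} <- sum_{a,b,v} E_{a,b} Ak[a,v,c] Ak[b,v,d]
        E = np.einsum('ab,avc,bvd->cd', E, Ak, Ak)
    return E[0, 0]

def amplitude(mps, v):
    m = mps[0][:, v[0], :]
    for k in range(1, len(mps)):
        m = m @ mps[k][:, v[k], :]
    return m[0, 0]

Z_fast = partition_function(A)
Z_slow = sum(amplitude(A, v)**2 for v in itertools.product((0, 1), repeat=N))
assert np.isclose(Z_fast, Z_slow)
```

The canonical form discussed in Appendix A makes this even simpler, reducing Z to the norm of a single tensor.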

II.2 Learning MPS from Data

Once the MPS form of the wavefunction is chosen, learning can be achieved by adjusting the parameters of the wavefunction such that the distribution represented by Born's rule, Eq. (1), is as close as possible to the data distribution. A standard learning method is Maximum Likelihood Estimation, which defines a (negative) log-likelihood function and optimizes it by adjusting the parameters of the model. In our case, the negative log-likelihood (NLL) is defined as

L = −(1/|T|) Σ_{v∈T} ln P(v),   (4)

where |T| denotes the size of the training set T. Minimizing the NLL reduces the dissimilarity between the model probability distribution and the empirical distribution defined by the training set. It is well known that minimizing L is equivalent to minimizing the Kullback–Leibler divergence between the two distributions Kullback and Leibler (1951).
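The NLL of Eq. (4) is easy to sketch for a table-based Born machine; the amplitudes and training set below are hypothetical toy values.

```python
import numpy as np

# NLL over N = 2 bits; configurations are indexed 0..3.
psi = np.array([2.0, 1.0, 1.0, 0.0])   # illustrative amplitudes
Z = np.sum(psi**2)                      # = 6
p = psi**2 / Z                          # [2/3, 1/6, 1/6, 0]
data = [0, 0, 1, 2]                     # hypothetical training set

nll = -np.mean(np.log(p[data]))         # Eq. (4)
```

For this toy example the NLL works out to ln 3; as the model distribution approaches the empirical one, the NLL approaches the Shannon entropy of the data.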

Armed with the canonical form, we are able to differentiate the negative log-likelihood (4) with respect to the components of an order-4 tensor A^{(k,k+1)}, which is obtained by contracting two adjacent tensors A^{(k)} and A^{(k+1)}. The gradient reads

∂L/∂A^{(k,k+1)} = Z′/Z − (2/|T|) Σ_{v∈T} Ψ′(v)/Ψ(v),   (5)

where Ψ′(v) denotes the derivative of the MPS with respect to the tensor elements of A^{(k,k+1)}, and Z′ = 2 Σ_v Ψ′(v) Ψ(v). Note that although Z′ and Z involve summations over an exponentially large number of terms, they are tractable in the MPS model via efficient contraction schemes Schollwöck (2011). In particular, if the MPS is in the mixed-canonical form Schollwöck (2011), Z′/Z can be significantly simplified. The calculation of the gradient, as well as variant techniques of gradient descent such as stochastic gradient descent (SGD) and adaptive learning rates, are detailed in Appendix B. After gradient descent, the merged order-4 tensor is decomposed back into two order-3 tensors, and the procedure is then repeated for each pair of adjacent tensors.

The derived algorithm is quite similar to the celebrated DMRG method with two-site update, which allows us to dynamically adjust the bond dimensions during the optimization and to allocate computational resources to the important bonds that represent essential features of the data. However, we emphasize that there are key differences between our algorithm and DMRG:

  • The loss function of the classic DMRG method is usually the energy, while our loss function, the averaged NLL (4), is a function of the data.

  • With a huge amount of data, the landscape of the loss function is typically very complicated, so modern optimizers developed in the machine learning community, such as stochastic gradient descent and learning-rate adaptation techniques Kingma and Ba (2014), are important to our algorithm. Since the ultimate goal of learning is to optimize the performance on test data, we do not really need to find the optimal parameters minimizing the loss on the training data; one usually stops training before reaching the actual minimum to prevent overfitting.

  • Our algorithm is data-oriented. It is straightforward to parallelize over the samples, since the operations applied to them are identical and independent. In fact, it is common practice in modern deep learning frameworks to parallelize over this so-called "batch" dimension Goodfellow et al. (2016). As a concrete example, the GPU implementation of our algorithm is substantially faster than the CPU implementation on the full MNIST dataset.

II.3 Generative Sampling

After training, samples can be generated independently according to Eq. (1). In other popular generative models, especially energy based models such as the restricted Boltzmann machine (RBM) Smolensky (1986), generating new samples is often accomplished by running MCMC from an initial configuration, due to the intractability of the partition function. In our model, one convenience is that the partition function can be computed exactly with complexity linear in the system size. Our model enjoys a direct sampling method which generates a sample bit by bit from one end of the MPS to the other Ferris and Vidal (2012). The detailed generating process is as follows:

It starts from one end, say the N-th bit. One directly samples this bit from the marginal probability P(v_N). This can be easily performed if we have gauged all the tensors except A^{(N)} to be left-canonical, because then P(v_N) = |x^{(N)}|²/Z, where we define the vector x^{(N)} = A^{(N)v_N} and the normalization factor reads Z = Σ_{v_N} |x^{(N)}|². Given the value of the N-th bit, one can then move on to sample the (N−1)-th bit. More generally, given the bit values v_{k+1}, v_{k+2}, …, v_N, the k-th bit is sampled according to the conditional probability

P(v_k | v_{k+1}, …, v_N) = P(v_k, v_{k+1}, …, v_N) / P(v_{k+1}, …, v_N).   (6)

As a result of the canonical condition, the marginal probability can be simply expressed as

P(v_{k+1}, …, v_N) = |x^{(k+1)}|² / Z,   (7)

where the vector x^{(k+1)} = A^{(k+1)v_{k+1}} ⋯ A^{(N)v_N} has been settled since the (k+1)-th bit was sampled; schematically, its squared norm is the contraction of this vector with itself (Eq. (8)). Multiplying the matrix A^{(k)v_k} from the left and calculating the squared norm of the resulting vector x^{(k)} = A^{(k)v_k} x^{(k+1)}, one obtains

P(v_k, v_{k+1}, …, v_N) = |x^{(k)}|² / Z.   (9)

Combining Eqs. (7) and (9), one can compute the conditional probability (6) and sample the bit accordingly. In this way, all the bit values are successively drawn from the conditional probabilities given all the bits to their right. This procedure yields a sample strictly obeying the probability distribution represented by the MPS.
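The bit-by-bit procedure can be sketched without the canonical-form simplification by precomputing double-layer environments; the snippet below sweeps left to right (the text sweeps from the other end, but the idea is identical), and checks that the product of conditionals reproduces P(v) exactly. Shapes and names are illustrative.

```python
import numpy as np

# Direct sampling from a random MPS using exact marginals.
rng = np.random.default_rng(6)
N, D = 5, 3
dims = [1] + [D] * (N - 1) + [1]
A = [rng.normal(size=(dims[k], 2, dims[k + 1])) for k in range(N)]

# R[k]: double-layer environment of sites k..N-1.
R = [None] * (N + 1)
R[N] = np.ones((1, 1))
for k in range(N - 1, -1, -1):
    R[k] = np.einsum('avc,bvd,cd->ab', A[k], A[k], R[k + 1])

v, L, logp = [], np.ones((1, 1)), 0.0
for k in range(N):
    # Unnormalized weights of v_k = 0, 1 given the bits sampled so far.
    w = np.array([np.einsum('ab,ac,bd,cd->', L, A[k][:, s, :], A[k][:, s, :], R[k + 1])
                  for s in (0, 1)])
    probs = w / w.sum()
    s = int(rng.random() < probs[1])
    v.append(s)
    logp += np.log(probs[s])
    L = np.einsum('ab,ac,bd->cd', L, A[k][:, s, :], A[k][:, s, :])

# The chained conditionals reproduce P(v) = Psi(v)^2 / Z exactly.
def amplitude(v):
    m = A[0][:, v[0], :]
    for k in range(1, N):
        m = m @ A[k][:, v[k], :]
    return m[0, 0]

Z = R[0][0, 0]
assert np.isclose(logp, np.log(amplitude(v)**2 / Z))
```

The canonical form used in the text removes the need for the environments R, reducing the per-bit cost.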

This sampling approach is not limited to generating samples from scratch in sequential order. It is also capable of inference tasks when part of the bits are given. In that case, the canonicalization trick may not help greatly if a segment of unknown bits sits between given bits. Nevertheless, the marginal probabilities are still tractable, because one can also contract the ladder-shaped TN efficiently Schollwöck (2011); Orús (2014). As will be shown in Sec. III, given these flexibilities of the sampling approach, MPS-based probabilistic modeling can be applied to image reconstruction and denoising.

II.4 Features of the model and algorithms

We highlight several salient features of the MPS generative model and compare it to other popular generative models. Most significantly, MPS has an explicit, tractable probability density while still allowing efficient learning and inference. For a system of size N with prescribed maximal bond dimension D_max, the complexity of training scales linearly with both the system size and the dataset size, and polynomially with D_max. Generative sampling from a canonical MPS is likewise linear in the system size if all the bits to be sampled are connected to the boundaries; otherwise, conditional sampling given fixed segments carries an additional polynomial cost in D_max.

II.4.1 Theoretical Understanding of the Expressive Power

The expressibility of MPS has been intensively studied in the context of quantum physics. The bond dimensions of an MPS put an upper bound on its ability to capture entanglement entropy. This solid theoretical understanding of the representational power of MPS Schollwöck (2011); Orús (2014) makes it an appealing model for generative tasks.

Considering the success of MPS for quantum systems, we expect a polynomial scaling of the computational resources for datasets with short-range correlations. Treating a dataset of two-dimensional images with MPS is analogous to the application of DMRG to two-dimensional quantum systems Stoudenmire and White (2012). Although in principle an exact representation of the image dataset may require exponentially large bond dimensions as the image resolution increases, at computationally affordable bond dimensions the MPS may already serve as a good approximation which captures the dominant features of the distribution.

II.4.2 Adaptive Adjustment of Expressibility

Performing optimizations on the two-site tensor, instead of on each tensor individually, allows one to dynamically adjust the bond dimensions during the learning process. Since for realistic datasets the required bond dimensions are likely to be inhomogeneous, adjusting them dynamically allocates computational resources in an optimal manner. This is illustrated clearly using the MNIST dataset in Sec. III.3 and in Fig. 4.

Adjustment of the bond dimensions follows the distribution of singular values in Eq. (16), which is related to the low-entanglement inductive bias of the MPS representation. Adaptive adjustment of the MPS is advantageous compared to most other generative models, where the architecture (the main limiting factor of the expressibility of the model) is fixed during the learning procedure and only the parameters are tuned. By adaptively tuning the bond dimensions, the representational power of MPS can grow as it gets more acquainted with the training data. In this sense, adaptive adjustment of expressibility is analogous to structural learning of probabilistic graphical models, which is, however, a challenging task due to the discreteness of the structural information.
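The adaptive step can be sketched as an SVD split of the merged two-site tensor, choosing the new bond dimension from the singular-value spectrum. The cutoff and shapes below are illustrative assumptions.

```python
import numpy as np

# Split a merged order-4 tensor back into two order-3 tensors by SVD;
# the retained bond dimension is set by the singular-value spectrum.
rng = np.random.default_rng(7)
Dl, Dr, cutoff = 3, 3, 1e-2
T = rng.normal(size=(Dl, 2, 2, Dr))            # merged two-site tensor
M = T.reshape(Dl * 2, 2 * Dr)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
keep = max(1, int(np.sum(s > cutoff * s[0])))  # adaptive bond dimension
Aleft = U[:, :keep].reshape(Dl, 2, keep)
Aright = (np.diag(s[:keep]) @ Vt[:keep]).reshape(keep, 2, Dr)

# Truncation error is exactly the norm of the discarded singular values.
approx = (U[:, :keep] * s[:keep]) @ Vt[:keep]
err = np.linalg.norm(M - approx)
assert err <= np.sqrt(np.sum(s[keep:]**2)) + 1e-9
```

Bonds whose spectra decay quickly are truncated harder, which is how computational resources get concentrated on the bonds carrying more entanglement.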

II.4.3 Efficient Computation of Exact Gradients and Log-likelihood

Another advantage of MPS over standard energy based models is that training can be done with high efficiency. The two terms contributing to the gradient (5) are analogous to the negative and positive phases in the training of energy based models LeCun et al. (2006), where the visible variables are unclamped and clamped respectively. In energy based models such as the RBM, typical evaluation of the first term requires approximate MCMC sampling Hinton (2012) or sophisticated mean-field approximations, e.g. the Thouless-Anderson-Palmer equations Gabrie et al. (2015). Fortunately, the normalization factor and its gradient can be calculated exactly and straightforwardly for MPS. The exact evaluation of gradients guarantees that the associated stochastic gradient descent is unbiased.

In addition to efficiency in computing gradients, the exact computation of the log-likelihood and its gradients brings significant benefits compared with classic generative models such as the RBM, where the gradients are approximated due to the intractability of the partition function. First, with MPS we can optimize the NLL directly, while approximate algorithms for RBM such as Contrastive Divergence (CD) essentially optimize a loss function other than the NLL. As a result, some regions of configuration space may never be considered during the training of an RBM, leading to poor performance on, e.g., denoising and reconstruction. Second, with MPS we can easily monitor the training process using the exact NLL, instead of quantities such as the reconstruction error or pseudo-likelihood for RBM, which introduce bias into the monitoring Hyvärinen (2006).

II.4.4 Efficient Direct Sampling

The approach introduced in Sec. II.3 allows direct sampling from the learned probability distribution. This completely avoids the slow mixing problem in the MCMC sampling of energy based models. MCMC randomly flips bits and compares probability ratios to accept or reject samples. However, the random walk in the state space can get stuck in a local minimum, which may introduce long autocorrelation times and unexpected fluctuations into the samples; sometimes this raises issues for the sampling. As a concrete example, consider the case where all training samples are exactly memorized by both an MPS and an RBM. That is, the NLL of both models exactly equals the Shannon entropy of the training set, and only the training samples have finite probability in both models, while all other samples, even those differing by only one bit, have zero probability. It is easy to check that our MPS model can generate samples identical to the training samples using the approach introduced in Sec. II.3. However, the RBM will not work at all for generating samples, as there is no direction for MCMC to follow that increases the probability of the samples.

It is known that when graphical models have an appropriate structure (such as a chain or a tree), inference can be done efficiently Wainwright and Jordan (2008); Koller and Friedman (2009), while these structural constraints also limit the application of graphical models with intractable partition functions. The MPS model, however, enjoys both the advantage of efficient direct sampling and a tractable partition function. The sampling algorithm is formally similar to that of autoregressive models Uria et al. (2016); Oord et al. (2016); however, the ability to dynamically adjust its expressibility makes the MPS a more flexible generative model.

Unlike GAN Goodfellow et al. (2014) or VAE Kingma and Welling (2013), the MPS explicitly gives a tractable probability, which may enable more unsupervised learning tasks. Moreover, sampling from an MPS works with arbitrary prior information about the samples, such as fixed bits, which supports applications like image reconstruction and denoising. We note that this offers an advantage over the popular GAN, which easily maps a random vector in the latent space to the image space but has difficulties in the reverse direction: mapping a vector in the image space back to the latent space as prior information for sampling.

III Applications

In this section, to demonstrate the abilities and features of MPS generative modeling, we apply it to several standard datasets. As a proof of principle, we first apply our method to the toy dataset of Bars and Stripes, where some properties of our model can be characterized analytically. Then we train the MPS as an associative memory to learn random binary patterns, to study properties such as the capacity and its dependence on the system size. Finally, we test our model on the Modified National Institute of Standards and Technology database (MNIST) to illustrate its generalization ability for generating and reconstructing images of handwritten digits. The code for these experiments has been made available online.

III.1 Bars and Stripes

Bars and Stripes (BS) MacKay (2003) is a dataset of 4×4 binary images. Each image contains either four-pixel-length vertical bars or horizontal stripes, but not both. In total there are 30 different images in the dataset out of all 2^16 possible ones, as shown in Fig. 1(a). These images appear with equal probability in the dataset. This toy problem allows a detailed analysis and reveals key characteristics of the MPS probabilistic model.

Figure 1: (a) The Bars and Stripes dataset. (b) Ordering of the pixels when transforming the image into one dimensional vector. The numbers between pixels indicate the bond dimensions of the well-trained MPS.

To use MPS for modeling, we unfold the images into one-dimensional vectors as shown in Fig. 1(b). After training with batch gradient descent, the cost function converges to its minimum value, which equals the Shannon entropy of the BS dataset, ln 30, to high accuracy. What the MPS has accomplished here is memorizing the thirty images rigidly, by increasing the probability of the instances appearing in the dataset and suppressing the probability of unseen instances towards zero. We have checked that the result is insensitive to the choice of hyperparameters.
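The minimum attainable NLL is simply the Shannon entropy of the empirical distribution; for 30 equiprobable patterns this is ln 30, as a one-line check confirms.

```python
import numpy as np

# Shannon entropy of 30 equiprobable patterns = minimum NLL = ln 30.
p = np.full(30, 1 / 30)          # empirical distribution of the BS set
H = -np.sum(p * np.log(p))       # Shannon entropy (in nats)
assert np.isclose(H, np.log(30))
```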

The bond dimensions of the learned MPS are annotated in Fig. 1(b). It is clear that part of the symmetry of the dataset has been preserved: for instance, a 180° rotation around the center, or the transposition of the second and third rows, would change neither the dataset nor the bond-dimension distribution. The open boundary condition results in a decrease of the bond dimensions at both ends. In fact, when conducting an SVD at a given bond, the number of non-zero singular values is at most the smaller of the Hilbert-space dimensions of the two parts linked by that bond. In addition, the bonds at the turnings of the one-dimensional ordering have slightly smaller bond dimensions than the other bonds inside the second and third rows, which can be explained qualitatively by these bonds carrying less entanglement than the bonds in the bulk.

One can directly write down the exact "quantum wavefunction" of the BS dataset, which has finite, uniform amplitudes for the training images and zero amplitude for all other images. For the division at each bond, one can construct the reduced density matrix, whose eigenvalues are the squares of the singular values. Analyzed in this way, it is confirmed that the trained MPS achieves the minimal bond dimensions required to describe the BS dataset exactly.
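The relation between reduced-density-matrix eigenvalues and singular values used in this analysis is easy to verify numerically; the bipartition sizes below are illustrative.

```python
import numpy as np

# For a normalized state reshaped into a matrix across a bipartition,
# the eigenvalues of the reduced density matrix equal the squared
# singular values, and the entanglement entropy follows.
rng = np.random.default_rng(8)
psi = rng.normal(size=(4, 8))     # 2 bits on the left, 3 on the right
psi /= np.linalg.norm(psi)

s = np.linalg.svd(psi, compute_uv=False)       # singular values
rho = psi @ psi.T                              # reduced density matrix
evals = np.sort(np.linalg.eigvalsh(rho))[::-1]
assert np.allclose(evals, s**2)

S = -np.sum(s**2 * np.log(s**2))               # entanglement entropy
```

The entropy S is bounded by ln of the smaller bipartition dimension, which is exactly the lower bound the entanglement places on the bond dimension.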

We have generated a large number of independent samples from the learned MPS; all of them are training images shown in Fig. 1(a). Carrying out a likelihood ratio test Wilks (1938), we computed the log-likelihood ratio statistic, adopted because it is asymptotically χ²-distributed Wilks (1938). The p-value of this test indicates a high probability that the uniform distribution holds true for the sampling outcomes.

Note that this statistic quantifies the deviation of the sampling outcomes from the expected distribution, so it reflects the performance of the sampling method rather than merely the training performance. In contrast to our model, for energy based models one typically has to resort to MCMC for sampling new patterns, which suffers from the slow mixing problem: the various patterns in the BS dataset differ substantially, so it takes many MCMC steps to obtain one independent pattern.

III.2 Random patterns

The capacity represents how much of the data can be learned by the model, and it is usually evaluated using randomly generated patterns as data. For the classic Hopfield model Hopfield (1982) with pairwise interactions among N variables given by Hebb's rule, it has been shown Amit et al. (1985) that in the low-temperature region, in the thermodynamic limit, there is a retrieval phase in which at most about 0.138N random binary patterns can be remembered. In this sense, each sample generated by the model has a large overlap with one of the training patterns. If the number of patterns in the Hopfield model exceeds this capacity, the model enters a spin glass state where samples generated by the model are not correlated with any training pattern.

Thanks to the tractable evaluation of the partition function in MPS, we can evaluate exactly the likelihood of every training pattern. Thus the capability of the model can easily be characterized by the mean negative log-likelihood. In this section we focus on the behavior of the NLL with a varying number of training samples and varying system sizes.

In Fig. 2(a) we plot the NLL as a function of the number of patterns used for training, for several maximal bond dimensions D_max. The figure shows that the NLL reaches the Shannon entropy of the training set when the training set is small enough; as shown in the previous section, this means that all training patterns are remembered exactly. As the number of training patterns increases, an MPS with fixed D_max eventually fails to remember all the training patterns exactly, resulting in a larger NLL. In this regime the generations of the model usually deviate from the training patterns (as illustrated in Fig. 3 on the MNIST dataset). We notice that as the number of patterns increases, the curves in the figure deviate continuously from the Shannon-entropy line. This is very different from the Hopfield model, where the overlap between the generated and training samples changes abruptly due to the first order transition from the retrieval phase to the spin glass phase.

Figure 2: Averaged NLL as a function of: (a) the number of random patterns used for training, at fixed system size; (b) the system size, trained using a fixed number of random patterns. In both (a) and (b), different symbols correspond to different values of the maximal bond dimension D_max. Each data point is averaged over random instances (i.e. sets of random patterns); error bars are plotted, although they are much smaller than the symbol size. The black dashed lines denote the Shannon entropy of the training set.

Fig. 2(a) also shows that a larger D_max enables the MPS to remember more patterns exactly, and produces a smaller NLL with the number of patterns fixed. This is quite natural, because enlarging D_max amounts to increasing the number of parameters of the model, and hence enhances its capacity. In principle our model has unbounded capacity if the bond dimensions are unrestricted, since arbitrary quantum states can be decomposed into an MPS Schollwöck (2011). Clearly this is an advantage of our model over the Hopfield model and the inverse Ising model Roudi et al. (2009), whose maximal capacity is proportional to the system size.

Careful readers may complain that the inverse Ising model is not the right model to compare with, because its variant with hidden variables, i.e. the Boltzmann machine, does have unlimited representational power. Indeed, increasing the bond dimensions in MPS has effects similar to increasing the number of hidden variables in other generative models.

In Fig. 2(b) we plot the NLL as a function of system size, trained on a fixed number of random patterns. As shown in the figure, with D_max fixed the NLL increases linearly with system size, which indicates that our model has worse memory capability for larger system sizes. This is because keeping the joint distribution of the variables becomes more and more difficult for MPS as the number of variables increases, especially for long-range correlated data. This is a drawback of our model compared with fully pairwise-connected models such as the inverse Ising model, which can easily capture long-distance correlations in the training data. Fortunately, Fig. 2(b) also shows that the decay of memory capability with system size can be compensated by increasing D_max.

III.3 MNIST dataset of handwritten digits

Figure 3: Averaged NLL of an MPS trained on MNIST images, for varying maximum bond dimension. The horizontal dashed line indicates the Shannon entropy of the training set, which is also the minimal value of the NLL. The inset images are generated by MPSs trained with different maximum bond dimensions (annotated by the arrows).

In this subsection we perform experiments on the MNIST dataset LeCun et al. (1998). In preparation, we binarize the grayscale images by thresholding and flatten each image row by row into a vector. For the purpose of unsupervised generative modeling we do not need the digit labels. We first test the capacity of the MPS on this larger-scale and more meaningful dataset, and then investigate its generalization ability by examining its performance on a separate test set, which is crucial for generative modeling.
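As a minimal, hypothetical sketch of this preprocessing (the threshold value and function name are our own choices, not specified in the text):

```python
import numpy as np

# Binarize grayscale images (values in [0, 255]) with a fixed threshold,
# then flatten each image row by row into a vector.
def binarize_and_flatten(images, threshold=128):
    return (images >= threshold).astype(np.int8).reshape(len(images), -1)

imgs = np.random.randint(0, 256, size=(3, 28, 28))
data = binarize_and_flatten(imgs)
print(data.shape)  # (3, 784)
```

Each resulting row is a binary configuration of the physical variables, one per pixel, in the row-by-row order the MPS sees.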

III.3.1 Model Capacity

Having chosen a set of MNIST images, we train the MPS with different maximal bond dimensions, as shown in Fig. 3. As the maximal bond dimension increases, the final NLL decreases toward its minimum, and the generated images become clearer and clearer. Interestingly, even with a relatively small maximum bond dimension, some crucial features show up, although the images are not as clear as the original ones. For instance, the hooks and loops that partly resemble "2", "3" and "9" emerge. These clear characteristics of handwritten digits illustrate that the MPS has learned many "prototypes". A similar feature-to-prototype transition in pattern recognition is also observed when using many-body interactions in the Hopfield model, or equivalently higher-order rectified polynomial activation functions in deep neural networks Krotov and Hopfield (2016). It is remarkable that in our model this can be achieved simply by adjusting the maximum bond dimension of the MPS.

Next we train another model with the maximal bond dimension restricted to a smaller value. The NLL on the training dataset converges, and many bonds reach the maximal dimension. Fig. 4 shows the distribution of bond dimensions. Large bond dimensions concentrate in the center of the image, where the variation of the pixels is complex. The bond dimensions around the top and bottom edges of the image remain small, because those pixels are always inactive in the images: they carry no information and have no correlations with the rest of the image. Remarkably, although the pixels on the left and right edges are also white, the corresponding bonds have large dimensions, because they learn to mediate the correlations between the rows of the images.

Figure 4: Bond dimensions of the MPS trained on MNIST samples with a restricted maximal bond dimension. Each pixel in this figure shows the bond dimension of the right leg of the tensor associated with the corresponding coordinate in the original image.
(a) Generated
(b) Original
Figure 5: (a) Images generated from the same MPS as in Fig. 4. (b) Original images randomly selected from the training set.

The samples generated directly after training are shown in Fig. 5(a). For comparison, we also show a few original samples from the training set in Fig. 5(b). Although many of the generated images cannot be recognized as digits, some aspects of the result are worth mentioning. First, the MPS has learned to leave the margins blank, which is the most obvious common feature in the MNIST database. Second, the activated pixels compose pen strokes like those that make up the digits. Finally, a few of the samples can already be recognized as digits. Unlike the discriminative learning task carried out in Stoudenmire and Schwab (2016), it seems we need much larger bond dimensions to achieve good performance in the unsupervised task. We postulate the reason to be that in the classification task, local features of an image are sufficient for predicting the label, so the MPS is not required to remember longer-range correlations between pixels. For generative modeling, however, this is necessary, because learning the joint distribution from the data involves (but is not limited to) learning two-point correlations between pairs of variables that may be far from each other.

(a) column reconstruction on training images
(b) row reconstruction on training images
(c) column reconstruction on test images
(d) row reconstruction on test images
Figure 6: Image reconstruction from partial images by direct sampling with the same MPS as in Fig. 4. (a,b) Restoration of the training-set images shown in Fig. 5(b). (c,d) Reconstruction of images chosen from the test set, which contains images from the MNIST database that were not used for training. The given parts are shown in black and the reconstructed parts in yellow. The reconstructed parts are columns from either the left or the right (a,c), and rows from either the top or the bottom (b,d).

With the MPS restricted and trained as above, we carry out image restoration experiments. As shown in Fig. 6, we remove part of each image in Fig. 5(b) and then reconstruct the removed pixels (in yellow) using conditional direct sampling. For column reconstruction, the performance is remarkable: the reconstructed images in Fig. 6(a) are almost identical to the originals in Fig. 5(b). For row reconstruction in Fig. 6(b), on the other hand, the model makes interesting but reasonable deviations. For instance, in the rightmost image of the first row, a "1" has been bent into a "7".
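Conditional direct sampling can be illustrated with a toy stand-in. Here the joint distribution is an explicit table over four binary pixels rather than an MPS (whose Zipper sampling computes the same conditionals via tensor contractions); the function and variable names are our own. Given the first pixels, the remaining ones are drawn one at a time from P(x_i | x_1 .. x_{i-1}):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized joint distribution over N = 4 binary pixels.
N = 4
P = rng.random(2 ** N)
P /= P.sum()
P = P.reshape((2,) * N)

def conditional_sample(P, given):
    """Fix the first len(given) pixels, sample the rest sequentially."""
    x = list(given)
    for i in range(len(given), P.ndim):
        # Marginalize over all not-yet-sampled pixels beyond position i.
        marg = P[tuple(x)].sum(axis=tuple(range(1, P.ndim - i)))
        p1 = marg[1] / marg.sum()          # P(x_i = 1 | x_1 .. x_{i-1})
        x.append(int(rng.random() < p1))
    return x

sample = conditional_sample(P, given=[1, 0])
print(sample[:2])  # [1, 0] -- the given pixels are kept
```

The MPS replaces the exponentially large table: each marginal above becomes a cheap local contraction, which is what makes the reconstruction in Fig. 6 tractable.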

III.3.2 Generalization Ability

For a glimpse of its generalization ability, we also reconstructed MNIST images outside the training set, as shown in Fig. 6(c),(d). These results indicate that the MPS has learned crucial features of the dataset rather than merely memorizing the training instances. In fact, even after only a few training loops, the MPS could perform column reconstruction with similar image quality, while its row reconstruction was much worse than after full training. This reflects that the MPS learns short-range patterns within each row earlier than the long-range correlations between different rows, since the images have been flattened into one-dimensional vectors row by row.

To further illustrate the model's generalization ability, in Fig. 7 we plot the NLL for the same test images after training on different numbers of images. To save computing time we work on rescaled images. The rescaling has also been adopted in past work, where it was shown that classification on the rescaled images remains comparable with results obtained using other popular methods Stoudenmire and Schwab (2016).

For every training-set size, the NLL on the training images decreases monotonically, to different minima, and with a fixed maximal bond dimension it is easier for the MPS to fit fewer training images. The NLL on the test images, however, behaves quite differently: for the smallest training set, the test NLL decreases and then starts climbing quickly; for a larger training set it decreases and then increases only slowly; and for the largest training set it keeps decreasing throughout training. The behavior shown in Fig. 7 is typical in machine learning. When the training data is insufficient, the model quickly overfits, generalizing worse and worse to the unseen test data. An extreme example: if the model decreases the training NLL all the way to its minimum, i.e. completely overfits the training data, then all other images, even those differing from a training image by a single pixel, have zero probability under the model and hence divergent test NLL. We also observe that the best test NLL decreases as the training set enlarges, which means the tendency toward memorization is constrained and generalization is enhanced.

The histograms of negative log-likelihoods for all training and test images are shown in Fig. 8. Notice that if the model merely memorized some of the images and ignored the others, the histograms would be bimodal. This is not the case: as shown in the figure, all distributions are centered around a single peak. This indicates that the model learns all the images rather than concentrating on some while completely ignoring the others. In the bottom panel we show the detailed histograms by category. For some digits, such as "1" and "9", the difference between the training and test log-likelihood distributions is insignificant, which suggests that the model generalizes particularly well to these images.

Figure 7: Evolution of the average negative log-likelihood for training images (blue, bottom lines) and test images (red, top lines) during training. From left to right, the panels correspond to increasing numbers of images in the training set.
Figure 8: (Top) Distribution of negative log-likelihoods of training images and test images given by a trained MPS. (Bottom) Distributions for each digit.

IV Summary and Outlook

We have presented a tensor-network-based unsupervised model, which aims at modeling the probability distribution of the samples in given unlabeled data. The probabilistic model is structured as a matrix product state, which brings several advantages discussed in Sec. II.4, such as adaptive and efficient learning and direct sampling.

Since we use the square of the TN state to represent a probability, the sign of the state is redundant for probabilistic modeling, on top of the gauge freedom of the MPS. It is likely that during optimization the MPS develops different signs for different configurations. This sign variation may unnecessarily increase the entanglement in the MPS, and therefore the bond dimensions Zhang et al. (2011). However, restricting the sign of the MPS may also impair the expressibility of the model. One possible approach to obtaining a low-entanglement representation is adding a penalty term to the target function, for instance a term proportional to the Rényi entanglement entropy, as in our further work on quantum tomography Zhao-Yu et al. (2017). In light of these discussions, we would like to point to future research on the differences and connections between MPS with non-negative matrix entries Temme and Verstraete (2010) and probabilistic graphical models such as the hidden Markov model.

Binary data modeling links closely to quantum many-body systems of spin-1/2 constituents, and our approach could be straightforwardly generalized to higher-dimensional data. For continuous data, one can follow Stoudenmire and Schwab (2016); Stoudenmire (2018) and use a local feature map to lift continuous variables into a spinor space. The ability and efficiency of this approach may depend on the specific mapping, so for continuous inputs there is still much to explore. Moreover, for colored images one can encode the RGB values into three physical legs of each MPS tensor.

Similar to using MPS for studying two-dimensional quantum lattice problems Stoudenmire and White (2012), modeling images with an MPS faces the problem that some neighboring pixels in two dimensions become long-range correlated in the one-dimensional ordering. An obvious generalization of the present approach is to use more expressive TNs with more complex structures. In particular, projected entangled pair states (PEPS) Verstraete and Cirac (2004) are particularly suitable for images, because they directly capture correlations between pixels in two dimensions. As in studies of 2D quantum systems, however, this advantage of PEPS is partially offset by the difficulty of contracting the network and the loss of convenient canonical forms. Exact contraction of a PEPS is #P-hard Schuch et al. (2007). Nevertheless, one can employ tensor renormalization group methods for approximate contraction of PEPS Levin and Nave (2007); Xie et al. (2009); Wang et al. (2014); Evenbly and Vidal (2015). It remains to be seen whether a judicious combination of these techniques brings better performance to generative modeling.

In the end, we would like to remark that perhaps the most exciting feature of quantum-inspired generative models is the possibility of being implemented on quantum devices Perdomo-Ortiz et al. (2018), rather than merely being simulated on classical computers. In that case, neither the large bond dimension nor the high computational complexity of tensor contraction would be a problem. The tensor network representation of probability may facilitate quantum generative modeling, because some tensor network states can be prepared efficiently on a quantum computer Schwarz et al. (2012); Huggins et al. (2018).

We thank Liang-Zhu Mu, Hong-Ye Hu, Song Cheng, Jing Chen, Wei Li, Zhengzhi Sun and E. Miles Stoudenmire for inspiring discussions. We also acknowledge suggestions from anonymous reviewers. P.Z. acknowledges the Swarm Club workshop on "Geometry, Complex Network and Machine Learning" sponsored by Kai Feng Foundation in 2016. L.W. is supported by the Ministry of Science and Technology of China under Grant No. 2016YFA0300603 and the National Natural Science Foundation of China under Grant No. 11774398. J.W. is supported by the National Training Program of Innovation for Undergraduates of China. P.Z. is supported by the Key Research Program of Frontier Sciences, CAS, Grant No. QYZDB-SSW-SYS032 and Project 11747601 of the National Natural Science Foundation of China. Part of the computation was carried out at the High Performance Computational Cluster of ITP, CAS.


  • LeCun et al. (2015) Yann LeCun, Yoshua Bengio,  and Geoffrey Hinton, “Deep learning,” Nature 521, 436–444 (2015).
  • Ackley et al. (1985) David H. Ackley, Geoffrey E. Hinton,  and Terrence J. Sejnowski, “A Learning Algorithm for Boltzmann Machines,” Cognitive Science 9, 147–169 (1985).
  • Smolensky (1986) P Smolensky, “Information processing in dynamical systems: foundations of harmony theory,” in Parallel distributed processing: explorations in the microstructure of cognition, vol. 1 (MIT Press, 1986) pp. 194–281.
  • Salakhutdinov (2015) Ruslan Salakhutdinov, “Learning deep generative models,” Annual Review of Statistics and Its Application 2, 361–385 (2015).
  • Kingma and Welling (2013) Diederik P Kingma and Max Welling, “Auto-encoding variational bayes,” arXiv:1312.6114  (2013).
  • Uria et al. (2016) Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray,  and Hugo Larochelle, “Neural autoregressive distribution estimation,” Journal of Machine Learning Research 17, 1–37 (2016).
  • Oord et al. (2016)

    Aaron Van Oord, Nal Kalchbrenner,  and Koray Kavukcuoglu, “Pixel recurrent neural networks,” in 

    Proceedings of The 33rd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 48 (PMLR, New York, New York, USA, 2016) pp. 1747–1756.
  • Dinh et al. (2014) Laurent Dinh, David Krueger,  and Yoshua Bengio, “NICE: non-linear independent components estimation,” arXiv:1410.8516  (2014).
  • Dinh et al. (2016) Laurent Dinh, Jascha Sohl-Dickstein,  and Samy Bengio, “Density estimation using real NVP,” arXiv:1605.08803  (2016).
  • Rezende and Mohamed (2015) Danilo Rezende and Shakir Mohamed, “Variational inference with normalizing flows,” in Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 37 (PMLR, Lille, France, 2015) pp. 1530–1538.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville,  and Yoshua Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27 (Curran Associates, Inc., 2014) pp. 2672–2680.
  • Hopfield (1982) J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. USA 79, 2554 (1982).
  • Kappen and de Borja Rodríguez Ortiz (1998) Hilbert J. Kappen and Francisco de Borja Rodríguez Ortiz, “Boltzmann machine learning using mean field theory and linear response correction,” in Advances in Neural Information Processing Systems 10 (MIT Press, 1998) pp. 280–286.
  • Roudi et al. (2009) Y. Roudi, J. Tyrcha,  and J. Hertz, “Ising model for neural data: Model quality and approximate methods for extracting functional connectivity,” Phys. Rev. E 79, 051915 (2009).
  • Foulkes et al. (2001) W. M. C. Foulkes, L. Mitas, R. J. Needs,  and G. Rajagopal, “Quantum monte carlo simulations of solids,” Rev. Mod. Phys. 73, 33–83 (2001).
  • Carleo and Troyer (2017) Giuseppe Carleo and Matthias Troyer, “Solving the quantum many-body problem with artificial neural networks,” Science 355, 602–606 (2017).
  • Schollwöck (2011) Ulrich Schollwöck, “The density-matrix renormalization group in the age of matrix product states,” Annals of Physics 326, 96–192 (2011).
  • Orús (2014) Román Orús, “A practical introduction to tensor networks: Matrix product states and projected entangled pair states,” Annals of Physics 349, 117 – 158 (2014).
  • Hastings (2007) M B Hastings, “An area law for one-dimensional quantum systems,” Journal of Statistical Mechanics: Theory and Experiment 2007, P08024–P08024 (2007).
  • Eisert et al. (2010) J. Eisert, M. Cramer,  and M. B. Plenio, “Colloquium: Area laws for the entanglement entropy,” Reviews of Modern Physics 82, 277–306 (2010)arXiv:0808.3773 .
  • Bengua et al. (2015) Johann A. Bengua, Ho N. Phien,  and Hoang Duong Tuan, “Optimal feature extraction and classification of tensors via matrix product state decomposition,” arXiv:1503.00516  (2015).
  • Novikov et al. (2014) Alexander Novikov, Anton Rodomanov, Anton Osokin,  and Dmitry Vetrov, “Putting MRFs on a tensor train,” International Conference on Machine Learning , 811–819 (2014).
  • Novikov et al. (2015) Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin,  and Dmitry P Vetrov, “Tensorizing neural networks,” in Advances in Neural Information Processing Systems (2015) pp. 442–450.
  • Cohen et al. (2015) Nadav Cohen, Or Sharir,  and Amnon Shashua, “On the expressive power of deep learning: A tensor analysis,” arXiv:1509.05009  (2015).
  • Cichocki et al. (2016) Andrzej Cichocki, Namgil Lee, Ivan Oseledets, Anh-Huy Phan, Qibin Zhao, Danilo P Mandic, et al., “Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions,” Foundations and Trends® in Machine Learning 9, 249–429 (2016).
  • Levine et al. (2017) Y. Levine, D. Yakira, N. Cohen,  and A. Shashua, “Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design,” arXiv:1704.01552  (2017).
  • Perez-Garcia et al. (2007) D Perez-Garcia, F Verstraete, M M Wolf,  and J I Cirac, “Matrix Product State Representations,” Quantum Info. Comput. 7, 401–430 (2007).
  • Oseledets (2011) Ivan V Oseledets, “Tensor-train decomposition,” SIAM Journal on Scientific Computing 33, 2295–2317 (2011).
  • Landau et al. (2015) Zeph Landau, Umesh Vazirani,  and Thomas Vidick, “A polynomial time algorithm for the ground state of 1d gapped local hamiltonians,” Nature Physics 11, 566 (2015).
  • White (1992) Steven R. White, “Density matrix formulation for quantum renormalization groups,” Phys. Rev. Lett. 69, 2863–2866 (1992).
  • Stoudenmire and White (2012) E.M. Stoudenmire and Steven R. White, “Studying two-dimensional systems with the density matrix renormalization group,” Annual Review of Condensed Matter Physics 3, 111–128 (2012).
  • Stoudenmire and Schwab (2016)

    E. Miles Stoudenmire and David J. Schwab, “Supervised Learning with Quantum-Inspired Tensor Networks,” 

    Advances in Neural Information Processing Systems 29, 4799 (2016)arXiv:1605.05775 .
  • Novikov et al. (2016) Alexander Novikov, Mikhail Trofimov,  and Ivan Oseledets, “Exponential machines,” arXiv:1605.05775  (2016).
  • Gallego and Orus (2017) Angel J Gallego and Roman Orus, “The physical structure of grammatical correlations: equivalences, formalizations and consequences,” arXiv:1708.01525  (2017).
  • Chen et al. (2018) Jing Chen, Song Cheng, Haidong Xie, Lei Wang,  and Tao Xiang, “Equivalence of restricted boltzmann machines and tensor network states,” Phys. Rev. B 97, 085104 (2018).
  • Ferris and Vidal (2012) Andrew J. Ferris and Guifre Vidal, “Perfect sampling with unitary tensor networks,” Phys. Rev. B 85, 165146 (2012).
  • Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio,  and Aaron Courville, Deep Learning (MIT Press, 2016)
  • Born (1926) M. Born, “Zur Quantenmechanik der Stoßvorgänge,” Zeitschrift fur Physik 37, 863–867 (1926).
  • LeCun et al. (2006) Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato,  and F Huang, “A tutorial on energy-based learning,”  (2006), (Accessed on 2-September-2017).
  • Zhao and Jaeger (2010) Ming-Jie Zhao and Herbert Jaeger, “Norm-observable operator models,” Neural Computation 22, 1927–1959 (2010), PMID: 20141473.
  • Bailly (2011) Raphael Bailly, “Quadratic weighted automata:spectral algorithm and likelihood maximization,” in Proceedings of the Asian Conference on Machine Learning, Proceedings of Machine Learning Research, Vol. 20, edited by Chun-Nan Hsu and Wee Sun Lee (PMLR, South Garden Hotels and Resorts, Taoyuan, Taiwain, 2011) pp. 147–163.
  • Kullback and Leibler (1951) S. Kullback and R. A. Leibler, “On Information and Sufficiency,” The Annals of Mathematical Statistics 22, 79–86 (1951).
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980  (2014).
  • Hinton (2012) Geoffrey E Hinton, “A practical guide to training restricted boltzmann machines,” in Neural networks: Tricks of the trade (Springer, 2012) pp. 599–619.
  • Gabrie et al. (2015) Marylou Gabrie, Eric W Tramel,  and Florent Krzakala, “Training restricted boltzmann machine via the thouless-anderson-palmer free energy,” in Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama,  and R. Garnett (Curran Associates, Inc., 2015) pp. 640–648.
  • Hyvärinen (2006) Aapo Hyvärinen, “Consistency of pseudolikelihood estimation of fully visible boltzmann machines,” Neural Computation 18, 2283–2292 (2006).
  • Wainwright and Jordan (2008) Martin J Wainwright and Michael I Jordan, “Graphical models, exponential families, and variational inference,” Foundations and Trends® in Machine Learning 1, 1–305 (2008).
  • Koller and Friedman (2009) Daphne Koller and Nir Friedman, Probabilistic Graphical Models, Principles and Techniques (The MIT Press, 2009).
  • (49) The code of these experiments has been posted online.
  • MacKay (2003) David JC MacKay, Information theory, inference and learning algorithms (Cambridge University Press, 2003).
  • Wilks (1938) S. S. Wilks, “The large-sample distribution of the likelihood ratio for testing composite hypotheses,” Ann. Math. Statist. 9, 60–62 (1938).
  • Amit et al. (1985) D.J. Amit, H. Gutfreund,  and H. Sompolinsky, “Spin-glass models of neural networks,” Phys. Rev. A 32, 1007 (1985).
  • LeCun et al. (1998) Yann LeCun, Corinnai Cortes,  and Christopher J.C. Burges, “The mnist database of handwritten digits,”  (1998), (Accessed on 2-September-2017).
  • Krotov and Hopfield (2016) Dmitry Krotov and John J Hopfield, “Dense associative memory for pattern recognition,” in Advances in Neural Information Processing Systems (2016) pp. 1172–1180.
  • Zhang et al. (2011) Yi Zhang, Tarun Grover,  and Ashvin Vishwanath, “Entanglement entropy of critical spin liquids,” Phys. Rev. Lett. 107, 067202 (2011).
  • Zhao-Yu et al. (2017) Han Zhao-Yu, Wang Jun, Wang Song-Bo, Li Ze-Yang, Mu Liang-Zhu, Fan Heng,  and Lei Wang, “Efficient quantum tomography with fidelity estimation,” arXiv:1712.03213  (2017).
  • Temme and Verstraete (2010)

    Kristan Temme and Frank Verstraete, “Stochastic matrix product states,” 

    Phys. Rev. Lett. 104, 210502 (2010).
  • Stoudenmire (2018) Edwin Miles Stoudenmire, “Learning relevant features of data with multi-scale tensor networks,” Quantum Science and Technology  (2018).
  • Verstraete and Cirac (2004) Frank Verstraete and Juan Ignacio Cirac, “Renormalization algorithms for quantum-many body systems in two and higher dimensions,” arXiv preprint cond-mat/0407066  (2004).
  • Schuch et al. (2007) Norbert Schuch, Michael M. Wolf, Frank Verstraete,  and Juan I. Cirac, “Computational complexity of projected entangled pair states,” Phys. Rev. Lett. 98, 140506 (2007).
  • Levin and Nave (2007) Michael Levin and Cody P. Nave, “Tensor renormalization group approach to two-dimensional classical lattice models,” Phys. Rev. Lett. 99, 120601 (2007).
  • Xie et al. (2009) Z. Y. Xie, H. C. Jiang, Q. N. Chen, Z. Y. Weng,  and T. Xiang, “Second renormalization of tensor-network states,” Phys. Rev. Lett. 103, 160601 (2009).
  • Wang et al. (2014) Chuang Wang, Shao-Meng Qin,  and Hai-Jun Zhou, “Topologically invariant tensor renormalization group method for the edwards-anderson spin glasses model,” Phys. Rev. B 90, 174201 (2014).
  • Evenbly and Vidal (2015) G. Evenbly and G. Vidal, “Tensor network renormalization,” Phys. Rev. Lett. 115, 180405 (2015).
  • Perdomo-Ortiz et al. (2018) Alejandro Perdomo-Ortiz, Marcello Benedetti, John Realpe-Gómez,  and Rupak Biswas, “Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers,” Quantum Science and Technology 3, 030502 (2018).
  • Schwarz et al. (2012) Martin Schwarz, Kristan Temme,  and Frank Verstraete, “Preparing projected entangled pair states on a quantum computer,” Phys. Rev. Lett. 108, 110502 (2012).
  • Huggins et al. (2018) William Huggins, Piyush Patel, K Birgitta Whaley,  and E Miles Stoudenmire, “Towards quantum machine learning with tensor networks,” arXiv preprint arXiv:1803.11537  (2018).
  • (68) Setting the bond dimension to one on all bonds makes it difficult for the bond dimensions to grow in the initial training phase: the merged two-site tensor then has low rank, so only a few singular values are nonzero and they are likely to be truncated back by a small cutoff.

Appendix A Canonical conditions for MPS and computation of the partition function

The MPS representation has gauge degrees of freedom: the state is invariant under inserting, on each bond, an identity written as the product of an invertible matrix and its inverse (the matrix can be different on each bond). Exploiting this gauge freedom, one can bring the MPS into a canonical form: for example, a tensor is called left-canonical if contracting it with its own conjugate over the left bond and physical indices yields the identity on the right bond. In diagrammatic notation, the left-canonical condition reads


The right-canonical condition is defined analogously. Canonicalization can be done locally and only involves the single tensor under consideration Schollwöck (2011); Orús (2014).

Each tensor in the MPS can be in a different canonical form. For example, given a specific site, one can perform gauge transformations that make all tensors to its left left-canonical and all tensors to its right right-canonical, while the tensor at the site itself is neither left- nor right-canonical. This is called the mixed-canonical form of the MPS Schollwöck (2011). The normalization of the MPS is particularly easy to compute in the canonical form. In graphical notation, it reads


We note that even if the MPS is not in canonical form, its normalization factor can still be computed efficiently if one pays attention to the order of contraction Schollwöck (2011); Orús (2014).
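The left-canonical condition can be verified numerically. As an illustrative sketch (shapes and names are ours), we left-canonicalize a random order-3 tensor A[l, s, r] by a QR decomposition, one standard way to canonicalize, and check that contracting it with its conjugate over the left bond and physical index gives the identity on the right bond:

```python
import numpy as np

rng = np.random.default_rng(1)

Dl, d, Dr = 4, 2, 4
A = rng.normal(size=(Dl, d, Dr))

# Fuse (left bond, physical) into one row index, QR-decompose, and keep Q.
Q, R = np.linalg.qr(A.reshape(Dl * d, Dr))
A_canon = Q.reshape(Dl, d, Q.shape[1])

# Left-canonical condition: sum_{l,s} A*[l,s,r'] A[l,s,r] = delta_{r',r}.
M = np.einsum('lsa,lsb->ab', A_canon, A_canon)
print(np.allclose(M, np.eye(M.shape[0])))  # True
```

The discarded factor R would be absorbed into the neighboring tensor, leaving the represented state unchanged, which is exactly the gauge freedom described above.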

Appendix B DMRG-like Gradient Descent algorithm for learning

A standard way of minimizing the cost function (4) is to perform gradient descent on the MPS tensor elements. Crucially, our method allows dynamical adjustment of the bond dimensions during optimization, and is thus able to allocate resources to the spatial regions where correlations among the physical variables are stronger.

Initially, we set the MPS to random tensors with small bond dimensions; for example, all bond dimensions are set to two except those on the boundaries 222Setting the bond dimension to one on all bonds makes it difficult for the bond dimensions to grow in the initial training phase: the merged two-site tensor then has low rank, so only a few singular values are nonzero and they are likely to be truncated back by a small cutoff.. We then carry out the canonicalization procedure so that all tensors except the rightmost one are left-canonical. Then we sweep back and forth through the tensors to tune their elements, i.e. the parameters of the MPS. The procedure is similar to the DMRG algorithm with two-site update, in which one optimizes two adjacent tensors at a time White (1992). At each step, we first merge two adjacent tensors into an order-4 tensor,

A^{(k,k+1)}_{l, s_k, s_{k+1}, r} = \sum_m A^{(k)}_{l, s_k, m} A^{(k+1)}_{m, s_{k+1}, r},   (12)
followed by adjusting its elements in order to decrease the cost function. It is straightforward to check that the gradient of the negative log-likelihood with respect to an element of the merged tensor (12) reads

\frac{\partial \mathcal{L}}{\partial A^{(k,k+1)}} = \frac{Z'}{Z} - \frac{2}{|\mathcal{T}|} \sum_{v \in \mathcal{T}} \frac{\Psi'(v)}{\Psi(v)},   (13)

where Z' and \Psi'(v) denote the derivatives of the partition function and of the MPS with respect to the merged tensor (12), and \mathcal{T} is the training set. In diagrammatic language, they read


The direct vertical connections in (14) stand for Kronecker delta functions, meaning that only those input data whose pattern matches the physical indices contribute to the gradient with respect to the corresponding tensor elements. Note that although Z' and \Psi'(v) involve summations over an exponentially large number of terms, they are tractable for an MPS via efficient contraction schemes Schollwöck (2011). In particular, if the MPS is in mixed-canonical form, the computation only involves local manipulations, as illustrated in (15).
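The two-term structure of this gradient, a partition-function term minus a data term, can be checked numerically on a toy Born machine with a single binary variable and amplitudes psi = (a, b), so that P(v) = psi_v^2 / Z with Z = a^2 + b^2. Everything here (names, the tiny dataset) is illustrative, not the paper's code:

```python
import numpy as np

def nll(psi, data):
    """L = ln Z - (2/|T|) sum_v ln |psi(v)|."""
    Z = np.sum(psi ** 2)
    return np.log(Z) - (2 / len(data)) * np.sum(np.log(np.abs(psi[data])))

def grad(psi, data):
    """Analytic gradient: Z'/Z term minus the (2/|T|) psi'(v)/psi(v) term."""
    Z = np.sum(psi ** 2)
    g = 2 * psi / Z
    counts = np.bincount(data, minlength=len(psi))
    g -= (2 / len(data)) * counts / psi
    return g

psi = np.array([0.8, 0.6])
data = np.array([0, 0, 1])

# Finite-difference check of the analytic gradient.
eps = 1e-6
fd = np.array([(nll(psi + eps * np.eye(2)[i], data) - nll(psi, data)) / eps
               for i in range(2)])
print(np.allclose(grad(psi, data), fd, atol=1e-4))  # True
```

In the real algorithm psi(v) is the MPS amplitude and the derivative is taken with respect to the merged tensor, but the same two terms appear.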

Next, we carry out gradient descent to update the components of the merged tensor. The update is flexible and open to various gradient descent techniques. First, stochastic gradient descent can be used: instead of averaging the gradient over the whole dataset, the second term of the gradient (13) can be estimated on randomly chosen mini-batches of samples, where the mini-batch size plays the role of a hyperparameter of the training. Second, one can perform several steps of gradient descent on a given merged tensor. Note that although a local update of the merged tensor does not change its environment, the shift of the normalization makes several small update steps differ from a single step with the combined learning rate. Third, especially when several steps are performed on each merged tensor, the learning rate (the ratio of the update to the gradient) can be adaptively tuned by meta-algorithms such as RMSProp and Adam Kingma and Ba (2014).

In practice we observe that the gradients sometimes become very small even when the parameters are not in the vicinity of any local minimum of the loss landscape. In that case a plateau or a saddle point may have been encountered, and we simply increase the learning rate so that the norm of the update is a prescribed function of the dimensions of the merged tensor.

After updating the order-4 tensor (12), it is decomposed by unfolding it into a matrix, applying a singular value decomposition (SVD), and finally folding the two resulting matrices back into two order-3 tensors,

A^{(k,k+1)}_{(l s_k),(s_{k+1} r)} = \sum_m U_{(l s_k), m} S_m V^{\dagger}_{m, (s_{k+1} r)},

where U and V are unitary matrices and S is a diagonal matrix with the singular values on its diagonal. The number of non-vanishing singular values will generally increase compared with the original bond dimension in Eq. (12), because the MPS observes correlations in the data and tries to capture them. We truncate those singular values whose ratios to the largest one are smaller than a prescribed cutoff hyperparameter, and delete the corresponding column vectors of U and row vectors of V^{\dagger}.
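A sketch of this split-and-truncate step in NumPy (the relative-cutoff criterion mirrors the text; the shapes, cutoff value, and function name are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def split(T, cutoff=1e-2):
    """Split a merged two-site tensor T[l, s1, s2, r] into two order-3
    tensors by SVD, truncating singular values below a relative cutoff."""
    Dl, d1, d2, Dr = T.shape
    U, S, Vh = np.linalg.svd(T.reshape(Dl * d1, d2 * Dr), full_matrices=False)
    keep = S / S[0] > cutoff            # relative truncation criterion
    U, S, Vh = U[:, keep], S[keep], Vh[keep]
    A = U.reshape(Dl, d1, -1)           # left tensor, left-canonical
    B = (np.diag(S) @ Vh).reshape(-1, d2, Dr)   # S V^dagger absorbed right
    return A, B

T = rng.normal(size=(3, 2, 2, 3))
A, B = split(T)
print(A.shape[2] == B.shape[0])  # True: shared (possibly truncated) bond
```

Absorbing S into the right tensor as shown corresponds to sweeping rightward; for a leftward sweep one would instead keep V^dagger and absorb U S into the left tensor.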

If the next bond to be trained is the bond to the right, we take the left tensor to be U, so that it is left-canonical, and absorb S V^{\dagger} into the tensor on the right. If instead the MPS is about to be trained on the bond to the left, the right tensor is analogously taken to be V^{\dagger}, which is right-canonical, and U S is absorbed into the tensor on the left. This keeps the MPS in mixed-canonical form.

The whole training process consists of many loops. In each loop the training starts from the rightmost bond and sweeps to the leftmost bond, then back to the rightmost.