Learning architectures based on quantum entanglement: a simple matrix product state algorithm for image recognition

03/24/2018 ∙ by Yuhan Liu, et al. ∙ ICFO

It is a fundamental, but still elusive question whether methods based on quantum mechanics, in particular on quantum entanglement, can be used for classical information processing and machine learning. Even a partial answer to this question would bring important insights to both fields, machine learning and quantum mechanics. In this work, we implement simple numerical experiments, related to pattern/image classification, in which we represent the classifiers by quantum matrix product states (MPS). A classical machine learning algorithm is then applied to these quantum states. We explicitly show how quantum features (i.e., single-site and bipartite entanglement) can emerge in images represented this way; entanglement characterizes here the importance of the data, and this information can be used in practice to improve the learning procedures. Thanks to the low demands on the dimensions and number of the unitary matrices necessary to construct the MPS, we expect such numerical experiments could open new paths in classical machine learning, and at the same time shed light on generic quantum simulations/computations.


I Introduction

Classical information processing mainly deals with pattern recognition and classification. The classical patterns in question may correspond to images, temporal sound sequences, financial data, and so on. During the last thirty years of development of quantum information science, there have been many attempts to generalize classical information processing to the quantum world, for instance by proposing quantum perceptrons and quantum neural networks (e.g., see some early works lewenstein1994quantum ; QNN1 ; QNN2 and a review QNNrev1 ), quantum finance (e.g., B07fqbook ), and quantum game theory Eisert2000 ; Neil2001 ; DLXS+2002Qgame , to name a few. More recently, there have been successful proposals to use quantum mechanics to enhance learning processes by introducing quantum gates/circuits, or quantum computers DTB16MLqenhance ; DLWT17MLqenhance ; L17MLqcircuit ; MSW17MLenhance ; HGSSG17MLmera ; WLqml .

Conversely, there have been various attempts to apply methods of quantum information theory to classical information processing tasks, for instance by mapping classical images to quantum mechanical states. In 2000, Hao et al. Hao00 developed a different representation technique for long DNA sequences, obtaining mathematical objects similar to many-body wave functions. In 2005, Latorre Latorre05 independently developed a mapping between bitmap images and many-body wave functions with a similar philosophy, and applied quantum information techniques to develop an image compression algorithm. Although the compression rate was not competitive with standard JPEG, the insight provided by the mapping was of high value Le11 . A crucial insight for this work was the idea that Latorre's mapping might be inverted, in order to obtain bitmap images out of many-body wave functions. In fact, Ref. qubism developed the reverse idea, and mapped quantum many-body states to images.

Such an interdisciplinary field has recently been strongly motivated by the exciting achievements in the so-called “quantum technologies” (see some general introductions in, e.g., Qtech00 ; Qtech01 ; Qtech1 ; Qtech2 ). Thanks to the successes in quantum simulations/computations, including the D-Wave machine Dwave and the quantum computers by Google and others (“quantum supremacy”) Google1 ; Google2 , it has become unprecedentedly urgent and important to explore the use of quantum computation to solve machine learning tasks.

Particularly, considerable progress has been made in the field merging quantum many-body physics and quantum machine learning BWPR+17QML based on tensor networks (TN) SS16TNML ; HWFWZ17MPSML ; LRWP+17MLTN ; S17MLTN ; PV17MLTNlanguage ; LYCS17EntML . TN provides a powerful mathematical structure that can efficiently represent many-body states, operators, and quantum circuits, even though the dimension of the Hilbert (vector) space grows exponentially with the size of the system CV09TNSRev ; O14TNSRev ; O14TNadvRev ; RTPCSL17TNrev . Paradigm examples include matrix product states (MPS) O14TNSRev , projected entangled pair states VC06PEPSArxiv ; O14TNSRev , tree TN states SDV06TTN , and the multi-scale entanglement renormalization ansatz V07EntRenor . Recently, TN proved its great potential in the field of machine learning, providing a natural way to build mathematical connections between quantum physics and classical information. Among others, MPS has been utilized for supervised image recognition SS16TNML and for generative modeling to learn joint probability distributions HWFWZ17MPSML . Tree TN's, which have a hierarchical structure, have also been used for natural language modeling PV17MLTNlanguage and image recognition LRWP+17MLTN ; S17MLTN , proving to be of high efficiency. The relations between the mathematical models of machine learning and TN states (e.g., between Boltzmann machines and TN states, between MPS and string-bond states, and between deep convolutional arithmetic circuits and quantum many-body wave functions) have been investigated CCXWX17TNML ; GPARC17MLTN ; HM17TNML ; LYCS17EntML .

Furthermore, it is worth mentioning that (classical) machine learning techniques have been introduced to solve physical problems. For example, it has been proposed to use neural networks to learn quantum phases of matter and to detect quantum phase transitions CM17MLphase ; BAT17MLphase ; BCMT17MLfermion ; CCMK17MLfermion ; CVK17MLphase ; ZK17MLphase ; HSS17MLphase ; W16MLphase ; BDSW+17MLphase ; TT16MLphase ; CT17MLphys ; HDW17MLphase ; PWS16MLphase ; CL17NNstate ; NDYI17MLquantum . Different schemes of machine learning, including supervised, unsupervised, and reinforcement learning, have been applied to systems of spins, bosons, and fermions, combined with gradient methods, Monte Carlo sampling, and so on.

Despite these inspiring achievements, there are several pressing challenges. One of them concerns how to practically utilize quantum features, or even quantum simulations/computations, to process classical data PanJW ; Seth2013 ; MLscCircuit . With the existing methods (e.g., SS16TNML ; LRWP+17MLTN ; S17MLTN ), the number of qubits is the same as the number of pixels in an image, which is too large to be realized with the current techniques of quantum computation. Another challenge concerns the underlying relations between the properties of classical data and those of quantum states (e.g., quantum entanglement), which are still elusive.

In this work, we implement simple numerical experiments with MPS, and show how quantum entanglement can emerge from images and be used for the learning architecture. We encode sets of images, consisting of pixels of a certain shade of grey, into many-qubit states in a Hilbert space SS16TNML . The classifiers of the encoded images are represented as MPS's. A training algorithm based on the multi-scale entanglement renormalization ansatz (MERA) MERA ; LRWP+17MLTN is then used to optimize the MPS. Considering the images before and after the discrete cosine transformation (DCT), we show that the efficiency of such classical computation is characterized by the bipartite entanglement entropy (BEE). The MPS for classifying the images after DCT possesses much smaller BEE, meaning higher efficiency, than the MPS for the images before DCT. The single-site entanglement entropy (SEE) of the trained MPS's characterizes the importance of the local data (e.g., different pixels). This permits discarding the less important data, so that the number of the needed qubits can be largely reduced. Our simulations show that, to reach the same accuracy, the number of qubits ( qubits originally) for classifying the images after DCT can be lowered about ten times compared with that for classifying before DCT. Furthermore, we propose to optimize the MPS architecture according to the SEE, and achieve in this way higher computational efficiency and a smaller number of qubits without harming the accuracy. The reduced number of qubits (about ) is accessible to the current techniques of quantum computation.

II Review of matrix product state and training algorithm

Figure 1: Illustration of (a) MPS and (b) the environment tensor . The MPS covers the 2D image in a Zigzag path. The original images (either before or after DCT) will be vectorized into many-qubit states by the feature map [Eq. (1)]. satisfies the orthogonal condition, indicated by the arrows. is defined by contracting everything after taking out the tensor (blue) that is to be updated.

The basic idea is that, after mapping the classical data into a vector (quantum Hilbert) space, quantum states (or the quantum operator formed by these states) are trained to capture the different classes of images, in order to solve specific tasks such as classification. Since the Hilbert space usually grows exponentially with the size of the images, TN's (MPS in this work) are used to implement the calculations efficiently on classical computers.

II.1 Feature map from data to quantum space

Such a TN machine learning scheme contains two key ingredients. One is the feature map S17MLTN that encodes each input image into a product state of many qubits. Each pixel (say, the -th pixel of the -th image) is transformed to a qubit given by a -dimensional normalized vector as

(1)

where runs from 1 to . We take in this work, and each qubit state satisfies . Then, the -th image is mapped to a -qubit state, which is a -dimensional tensor product state ( is the number of pixels of the image). One can see that the number of qubits equals the number of pixels in one image. Note that in this paper, we use bold symbols to represent tensors without explicitly writing their indexes.
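The coefficients of Eq. (1) were lost in extraction; in SS16TNML the standard choice (which we assume here) maps a normalized pixel value x in [0, 1] to the two-component vector [cos(πx/2), sin(πx/2)], interpolating between spin up and spin down. A minimal sketch under that assumption:

```python
import numpy as np

def feature_map(image):
    """Map grayscale pixels x in [0, 1] to two-dimensional qubit vectors
    [cos(pi*x/2), sin(pi*x/2)] -- the standard feature map of SS16TNML.
    The flattening order here is a plain row-major ravel; in practice the
    pixels should be ordered along the chosen 1D path of the MPS."""
    x = np.asarray(image, dtype=float).ravel()
    qubits = np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=1)
    return qubits  # shape (N_pixels, 2); each row is a normalized qubit state

img = np.random.rand(28, 28)  # a stand-in for one MNIST image
q = feature_map(img)
```

Each row of `q` is normalized by construction, so the full image is encoded as a normalized product state of 784 qubits.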

II.2 Tensor network representation and training algorithm

The second key ingredient is the TN. The output of the -th image is obtained by contracting the corresponding vectors with a linear projector denoted by as . Its coefficients satisfy

(2)

is actually a map from a -dimensional to a -dimensional vector space. Here, we take as a unitary MPS (Fig. 1) whose coefficients satisfy

(3)

Note that the indexes , which are often called virtual bonds, will all be summed over. The dimensions of the virtual bonds (denoted by ) determine the upper bound of the entanglement that can be carried by the MPS. The -dimensional indexes are called physical bonds, which are contracted with the encoded images . The total number of parameters in the MPS increases only linearly with , i.e. . For implementing the contraction between and the MPS, one should choose a 1D path that covers the 2D image. Here, we choose the zigzag path shown in Fig. 2 (a), and later show that such a path can be optimized according to the SEE of the MPS.

To train the MPS, we optimize the tensors in the MPS one by one to minimize the error of the classification. To this end, the cost function to be minimized is chosen as the simplified negative log-likelihood (NLL) , with a -dimensional vector ( is the number of classes) that satisfies

(4)

We use the MERA-inspired algorithm to optimize the MPS LRWP+17MLTN , where all tensors are taken as isometries that satisfy the right orthogonal condition (the rightmost tensor still satisfies this condition when considered as a tensor). Then the MPS in Eq. (3) gives a unitary projector from a -dimensional to a -dimensional vector space. The tensors in the MPS can be initialized randomly, and are then optimized one by one (from right to left, for example). The key step is to calculate the (unnormalized) environment tensor , which is defined by contracting everything after taking out the target tensor (see Fig. 1 (b) and the supplementary material for details). Then, define and use the SVD as . The tensor is updated by . One can see that the new tensor still satisfies the orthogonal condition. All tensors are updated in this way, one by one, until they converge. The code can be found on GitHub github .
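The SVD-based update above can be sketched as follows. Taking T = U V† from the SVD of the (matricized) environment E = U S V† is the standard way to maximize the overlap Tr(T†E) over isometries; the index grouping below is illustrative and may differ from the conventions of the released code:

```python
import numpy as np

def update_isometry(env):
    """Given the environment tensor E reshaped to a matrix (rows: the
    incoming virtual bond grouped with the physical bond; columns: the
    outgoing virtual bond), return the isometry T = U V^dagger from the
    SVD E = U S V^dagger. T maximizes Tr(T^dagger E) over isometries,
    which is the MERA-inspired update of LRWP+17MLTN (a sketch)."""
    u, s, vh = np.linalg.svd(env, full_matrices=False)
    return u @ vh  # satisfies T^dagger T = identity (right orthogonal condition)

# Hypothetical shapes: (chi_in * d) x chi_out = 8 x 4
env = np.random.randn(8, 4)
t = update_isometry(env)
```

Because the new tensor is the "angular part" of the environment, it automatically satisfies the orthogonal condition, so no separate re-orthogonalization sweep is needed after each update.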

Figure 2: Computation of (a) the single-site entanglement entropy and (b) bipartition entanglement entropy.

II.3 Discrete cosine transform and motivation

In addition, we try the standard discrete cosine transformation (DCT) to transform the images in frequency space before feeding them to the MPS. The DCT is defined as

(5)

with the width/height of the images, the position of a pixel, and if , or otherwise. In our case, we have for the images in the MNIST dataset. Note .
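Since the body of Eq. (5) was stripped in extraction, a sketch of the orthonormal 2D DCT-II (the JPEG-style transform, which we assume is the one intended) may help; the explicit basis matrix mirrors the c(k) normalization factors described above:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2D DCT-II of an L x L image (L = 28 for MNIST).
    Basis: C[k, n] = c(k) * cos(pi * (2n + 1) * k / (2L)), with
    c(0) = sqrt(1/L) and c(k) = sqrt(2/L) otherwise, applied to rows
    and columns. This matches the standard JPEG-style DCT; the exact
    normalization in the paper's Eq. (5) is assumed."""
    L = img.shape[0]
    n = np.arange(L)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * L))
    c = np.full(L, np.sqrt(2.0 / L))
    c[0] = np.sqrt(1.0 / L)
    C = c[:, None] * C          # C is an orthogonal matrix
    return C @ img @ C.T        # transform rows and columns

img = np.random.rand(28, 28)
f = dct2(img)  # f[0, 0] is the DC component; frequency grows with the index
```

Because the transform is orthogonal, it preserves the norm of the data, which is consistent with the observation in Appendix B that DCT does not change the achievable accuracy.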

We propose that DCT is very helpful when choosing the path of the MPS for 2D images, since in frequency space there exists a natural 1D path. The zigzag path shown in Fig. 2 (a) is used in many standard image algorithms (e.g., JPEG). The frequency is non-increasing along the path. Note that in previous works using MPS, the 2D images were directly reshaped into 1D (i.e., ) images.

Moreover, it is known from the existing image algorithms that the most important information is normally stored in the low-frequency data. It is interesting to see if the entanglement of the trained MPS reveals the same property. In this way, the number of qubits can be further reduced when defining the MPS on the zig-zag path and training after DCT transformation.
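The zigzag ordering itself can be generated as follows (a sketch; here the path is ordered from low to high frequency, and the traversal direction on each anti-diagonal is one common convention):

```python
def zigzag_path(L):
    """JPEG-style zigzag ordering of an L x L frequency grid: traverse the
    anti-diagonals of constant i + j, alternating direction, so the total
    frequency index i + j is non-decreasing along the path."""
    path = []
    for s in range(2 * L - 1):
        diag = [(i, s - i) for i in range(L) if 0 <= s - i < L]
        if s % 2 == 1:
            diag.reverse()  # alternate direction on odd anti-diagonals
        path.extend(diag)
    return path

order = zigzag_path(28)  # list of (row, col) pairs, DC component first
```

Feeding the DCT coefficients to the MPS in this order (or its reverse) places the low-frequency, most informative data together at one end of the chain.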

III Learning architecture based on quantum entanglement

We will show below that, by learning the images in frequency space (reached by DCT), the computational cost can be largely reduced without lowering the accuracy. This is revealed by a lower BEE of the MPS, which means that smaller virtual bond dimensions are needed. More interestingly, we propose a learning architecture based on quantum entanglement to further improve the efficiency. The architecture contains two aspects: optimizing the MPS path according to the SEE, and discarding less important data according to the BEE. Our work practically utilizes (bipartite and single-site) quantum entanglement to design machine learning algorithms for classical data. It exhibits an explicit example of a “quantum learning architecture”. We test our proposal with the MNIST dataset of handwritten digits MNIST .

III.1 Single-site and bipartite entanglement entropy of MPS

Before presenting our results, let us define the single-site entanglement entropy (SEE) and bipartite entanglement entropy (BEE). The reduced density matrix of the -th site, for example, is defined as

(6)

Note is non-negative. The computation of with MPS is shown in Fig. 2 (b), where one contracts everything except the indexes and . The leading computational complexity is about by using the orthogonal condition. After normalizing by , the SEE is defined as

(7)

The BEE measured between, for example, the -th and -th sites is similarly defined by the reduced density matrix obtained after tracing over either half of the MPS. Another way to obtain the BEE is by singular value decomposition (SVD), where the BEE is given by the singular values (also called Schmidt numbers). The SVD is formally written as

(8)

where the singular values are given by the non-negative diagonal matrix , and and satisfy the orthogonal conditions . Normalizing by , BEE is defined as

(9)

The computation of BEE in our context is illustrated in Fig. 2 (c). One only needs to transform the first tensors to the left orthogonal form (indicated by the arrows), then is obtained by the SVD of as . The leading computational cost is .
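Both Eqs. (7) and (9) boil down to a von Neumann entropy of a normalized spectrum, either the eigenvalues of the single-site reduced density matrix or the squared singular values across a cut. A minimal sketch of this common step:

```python
import numpy as np

def entanglement_entropy(svals):
    """Von Neumann entropy from Schmidt coefficients (singular values):
    normalize so that sum of lambda^2 = 1, then S = -sum p ln p with
    p = lambda^2. The same function applied to the eigenvalues' square
    roots of the reduced density matrix gives the SEE of Eq. (7)."""
    p = np.asarray(svals, dtype=float) ** 2
    p = p / p.sum()
    p = p[p > 1e-15]  # drop numerical zeros before taking the log
    return -np.sum(p * np.log(p))

# A maximally entangled two-qubit cut has two equal Schmidt numbers, S = ln 2,
# matching the plateau value of the BEE discussed in Sec. III.1.
s_max = entanglement_entropy([1.0, 1.0])
```

Note that with this natural-log convention a two-qubit maximally entangled state gives S = ln 2; any product state gives S = 0.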

Figure 3: (a) Single-site entanglement entropy (SEE) and (b) bipartite entanglement entropy (BEE) of MPS without DCT. (c) and (d) show the SEE and BEE with DCT. We take the classification between the images “0” and “2” as an example. The virtual bond dimension is , with .
Figure 4: (a) SEE in frequency space without and with path optimization according to SEE, (b) SEE in real space, (c) BEE in real space and frequency space, without and with path optimization, and (d) accuracy on the test dataset for different MPS length . The virtual bond dimension is , with and . The accuracies are also indicated. In (d), the accuracies from the real-space data suffer large fluctuations, indicated by the error bar.

In Fig. 3 (and in most of the paper), we take the MPS trained for classifying the images “0” and “2” as an example, and show its SEE and BEE with and without the DCT. Without DCT, the data are in real space, i.e., simply the pixels of the 2D images. The relatively large values of the SEE are distributed almost all over the 2D plane. With DCT, the data are in fact the weights of the different frequencies. The large values of the SEE appear only at positions that are close to the label bond.

The SEE actually characterizes the amount of non-trivial information carried by the data. Without DCT, the important information is distributed almost all over the 2D plane; see supplementary material C for more details. With DCT, the information important to the classification problem is mainly of low frequency. This is consistent with what is known from the well-established image algorithms, namely that the low-frequency data are more important. In our work, this phenomenon is naturally justified by the values of the SEE of the trained MPS.

Meanwhile, the BEE with DCT increases much more slowly than that without DCT. Due to the orthogonal conditions of the MPS, the information flows from the right end of the MPS to the left (label bond). Each time non-trivial information (indicated by a relatively large SEE) passes through, the BEE increases, finally saturating to a finite value around . While approaching the label bond on the left end, the BEE decreases to , giving a triangular plateau of the BEE [see Fig. 3 (b)]. This can be understood as a “refining” process: while the information flows to the label bond (output), only the information that is important to the classification is kept. The value of the BEE also indicates that the state of each virtual bond in the plateau is actually described by a two-qubit maximally entangled state.

In MPS schemes, it is well known that the BEE determines the needed dimensions of the corresponding virtual bonds. In particular, when the entanglement entropy vanishes, the corresponding data are uncorrelated with the others and need not be fed to the MPS. In the following, we will show that, to reach the same accuracy, a smaller MPS length, meaning fewer qubits, is needed with DCT than without DCT. This provides an efficient scheme to discard the sites of small SEE.

III.2 Learning architecture based on single-site entanglement entropy

Step 1 Randomly initialize the MPS, choose a path (say, zigzag), and train it by the standard algorithm; calculate the SEE of the MPS.
Step 2 Redefine the path according to the values of SEE at different sites.
Step 3 Define the MPS on the new path, randomly initialize it, and train it.
Step 4 Calculate the SEE: if the SEE is in an acceptable descending order, end the training; if not, go back to Step 2.
Step 5 Calculate the BEE and find the -th site where the BEE equals . Discard the data after this site () and train the new MPS with the length of .
Table 1: Steps of the training algorithm, where the architecture of the MPS is guided by the entanglement.

To minimize the BEE, we propose to rearrange the path of the MPS, so that the SEE is in a non-ascending order. The steps are listed in Table 1. After path optimization, the BEE will be lowered, meaning the computational cost will be lowered, while the accuracy remains unchanged.
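Step 2 of Table 1, the path redefinition, can be sketched as a simple sort of the sites by their SEE values:

```python
import numpy as np

def optimize_path(see):
    """Return a new site ordering in which the single-site entanglement
    entropy (SEE) is non-ascending along the MPS (Step 2 of Table 1).
    A stable sort keeps the relative order of sites with equal SEE."""
    return np.argsort(-np.asarray(see, dtype=float), kind="stable")

see = np.array([0.1, 0.7, 0.0, 0.4])  # hypothetical SEE values per site
order = optimize_path(see)            # sites sorted by descending SEE
```

The data are then fed to a freshly initialized MPS in this new order (Step 3), and the check in Step 4 simply verifies that the retrained SEE is itself (approximately) non-ascending.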

1 2 3 4 5 6 7 8 9
0 0.15(0.76) 0.11(0.82) 0.13(0.81) 0.12(0.82) 0.11(0.80) 0.13(0.79) 0.13(0.80) 0.12(0.81) 0.13(0.82)
1 - 0.13(0.76) 0.15(0.79) 0.15(0.75) 0.14(0.77) 0.15(0.77) 0.15(0.76) 0.15(0.76) 0.15(0.77)
2 - - 0.11(0.82) 0.11(0.81) 0.11(0.80) 0.11(0.79) 0.11(0.81) 0.11(0.81) 0.12(0.83)
3 - - - 0.12(0.81) 0.11(0.80) 0.13(0.80) 0.13(0.81) 0.13(0.81) 0.14(0.81)
4 - - - - 0.12(0.80) 0.12(0.80) 0.13(0.80) 0.12(0.80) 0.13(0.80)
5 - - - - - 0.13(0.80) 0.13(0.80) 0.12(0.80) 0.13(0.81)
6 - - - - - - 0.13(0.80) 0.13(0.80) 0.13(0.81)
7 - - - - - - - 0.13(0.80) 0.14(0.81)
8 - - - - - - - - 0.14(0.81)
Table 2: Complexity ratios [Eq. (10)] of classifiers trained by frequency data and by the real-space data (shown in the bracket) without path optimization.

To explain how this architecture works, let us give a simple example with a three-qubit quantum state. The wave function reads , where and stand for the spin-up and spin-down states, respectively. By writing the wave function as a three-site MPS, one can easily check that the two virtual bonds are both two-dimensional. The total number of parameters of this MPS is . However, if we define the MPS after swapping the second qubit to either end of the chain, say swapping it with the third qubit, the wave function becomes . Obviously, the virtual bonds of the MPS are now two- and one-dimensional, respectively, and the total number of parameters is reduced to . In our algorithm, the SEE will normally be in a good descending order after optimizing the path only once.
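The effect of such a swap can be checked numerically. The state below is a hypothetical stand-in for the (extraction-stripped) example above: (|000⟩ + |101⟩)/√2, in which the middle qubit is in a product state, so moving it to the end of the chain reduces one bond dimension:

```python
import numpy as np

def bond_dims(psi):
    """Schmidt ranks (minimal MPS bond dimensions) across the two cuts
    of a three-qubit state psi[i, j, k]."""
    rank = lambda m: int(np.sum(np.linalg.svd(m, compute_uv=False) > 1e-12))
    return rank(psi.reshape(2, 4)), rank(psi.reshape(4, 2))

# Hypothetical example state: (|000> + |101>)/sqrt(2); qubit 2 is unentangled.
psi = np.zeros((2, 2, 2))
psi[0, 0, 0] = psi[1, 0, 1] = 1 / np.sqrt(2)

dims_before = bond_dims(psi)                 # both cuts have rank 2
psi_swapped = psi.transpose(0, 2, 1)         # swap qubits 2 and 3
dims_after = bond_dims(psi_swapped)          # the last cut drops to rank 1
```

After the swap, the unentangled qubit sits at the end of the chain and the corresponding virtual bond becomes one-dimensional, exactly the parameter reduction described in the text.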

Fig. 4 (a) shows the SEE in frequency space with and without path optimization. Without path optimization, the important data, where the values of the SEE are relatively large, are distributed over the first 200 sites (see the inset of Fig. 4 (a)). By zooming into this range, one can see that the SEE is in a good descending order after optimizing the path. For comparison, we show in Fig. 4 (b) the SEE of the MPS trained with the real-space data, with and without optimizing the path.

Fig. 4 (c) shows the BEE, which indicates the computational cost of using the MPS to solve the classification task. It is obvious that the BEE of the MPS trained by the frequency data is much smaller than that of the MPS trained by the real-space data. By path optimization, the BEE is further reduced, indicating that smaller bond dimensions are needed.

Fig. 4 (d) shows the accuracy when certain less important data are discarded. We only use the first data of each image to train the -site MPS. We observe that, as increases, the accuracy trained with the frequency data rises quickly and reaches a value of more than with as small as 40. For comparison, training with the real-space data obviously requires a larger number of qubits, which can be reduced significantly by optimizing the path. The reduced number is almost comparable to that with DCT. For the training after DCT, the difference between the accuracies with and without path optimization is relatively small. This is because we take , where the maximal capacity of the entanglement entropy () is much larger than the reduction of the BEE by the path optimization.

To characterize the improvement of efficiency that can be gained by discarding the less important data, we define the complexity ratio

(10)

is defined by a threshold, so that the BEE is smaller than when measured after the -th site. is a number determined by the requirement on the accuracy. We take . When , it means the data on the last sites can be ignored without harming the accuracy too much. Our results show that when trained with the real-space data without path optimization, and and when using the frequency data without and with path optimization, respectively. More results are given in Table 2. We show that the trainings with and without DCT lead to similar accuracies, but the efficiencies (characterized by the complexity ratios) are largely different.

IV Summary and prospects

In this work, we explicitly show that quantum entanglement can be used to guide the learning of data for image recognition. By training the unitary MPS, our numerical experiments demonstrate that the bipartite entanglement entropy indicates the complexity of the task for classical computations. The single-site entanglement entropy characterizes the importance of the data to the classification problem; based on it, an optimization technique for the MPS architecture is proposed that largely improves the efficiency.

Our proposal can be readily applied to feature extraction, and to improving the efficiency of other learning schemes, such as those based on hierarchical TN's. The exploitation of DCT implies that quantum techniques such as TN can be combined with classical computational techniques, such as neural networks, to develop novel efficient learning algorithms. Revealing the relations to theoretical physics (e.g., quantum information) would provide a solid ground for TN machine learning, preventing it from becoming a “trial-and-error alchemy”.

From the viewpoint of quantum computation for machine learning PanJW ; Seth2013 ; MLscCircuit , our proposal has two advantages. Firstly, the MPS we train is formed by unitaries, and achieves good accuracy with relatively small bond dimensions. Note that, in principle, any local unitary maps or gates can be realized in quantum simulators or computers. Secondly, our proposal permits largely reducing the size of the MPS (meaning the numbers of both qubits and quantum gates) without harming the accuracy. This significantly lowers the complexity of the quantum computation, which strongly depends on the numbers of qubits and gates. The reduced number of qubits is only around , which is within reach of state-of-the-art quantum computers. The low demands on the bond dimensions and, particularly, on the size, permit simulating machine learning tasks by quantum simulations or quantum computations in the near future.

Acknowledgements

S.J.R. thanks Gang Su, Lei Wang, Ding Liu, Cheng Peng, Zheng-Zhi Sun, Ivan Glasser, and Peter Wittek for stimulating discussions. Y.H.L thanks Naichao Hu for helpful suggestions on writing the manuscript. This work was supported by ERC AdG OSYRIS (ERC-2013-AdG Grant No. 339106), Spanish Ministry MINECO (National Plan 15 Grant: FISICATEAMO No. FIS2016-79508-P, SEVERO OCHOA No. SEV-2015-0522), Generalitat de Catalunya (AGAUR Grant No. 2017 SGR 1341 and CERCA/Program), Fundació Privada Cellex, EU FETPRO QUIC (H2020-FETPROACT-2014 No. 641122), the National Science Centre, and Poland-Symfonia Grant No. 2016/20/W/ST4/00314. S.J.R. was supported by Fundació Catalunya - La Pedrera · Ignacio Cirac Program Chair. Y.H.L is supported by National Program For Top-notch Undergraduate in Basic Science under Grant No. 03100-31911002 from the Ministry of Education of P.R. China. X.Z. is supported by the National Natural Science Foundation of China (No. 11404413), the Natural Science Foundation of Guangdong Province (No. 2015A030313188), and the Guangdong Science and Technology Innovation Youth Talent Program (Grant No. 2016TQ03X688).


Appendix A: some details of the training algorithm

We introduce several tricks to speed up the training procedure. Firstly, we evolve the environment tensors to avoid feeding too many training samples in a single iteration. Specifically, we randomly select only a small number of samples (say ) and compute the corresponding environment tensor . Then we update with a small constant. is the total environment tensor and can be initialized as the obtained in the first iteration. We then use the SVD of the total environment tensor to update the tensor as . We find that this harms the accuracy little but largely saves computational time and memory. Our simulation also shows high accuracy and fast convergence with . The difference between large and small is the stability under certain extreme conditions, such as training with very small bond dimensions.
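The running environment update can be sketched as a simple exponential moving average. The mixing rate eta below is a hypothetical choice; the paper only specifies "a small constant":

```python
import numpy as np

def evolve_environment(env_total, env_batch, eta=0.1):
    """Running update of the total environment tensor from a mini-batch
    environment: E_total <- (1 - eta) * E_total + eta * E_batch.
    The mixing rate eta is an illustrative value, not the paper's."""
    return (1 - eta) * env_total + eta * env_batch

# On the first iteration, the total environment is initialized to the batch one.
env_total = np.ones((2, 2))          # stand-in for the accumulated environment
env_batch = np.zeros((2, 2))         # stand-in for a new mini-batch environment
env_total = evolve_environment(env_total, env_batch)
```

The isometry update is then applied to `env_total` rather than to each noisy mini-batch environment, which smooths the optimization at little cost in accuracy.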

Secondly, we store all the intermediate vectors during the contraction process to avoid repetitive computations. This trades memory for computational time, and does no harm to the accuracy.

Thirdly, we take advantage of the unitary property of the MPS. The original cost function should be the negative log-likelihood (NLL), which reads

(A1)

with the total number of images. Considering as a constant according to the orthogonal condition, one has

(A2)

with the environment tensor for the -th sample without normalization. More investigations are to be done to further understand the techniques explained above [Zheng-Zhi Sun et al, in preparation].

For the feature map, the standard one maps a pixel satisfying to a normalized vector ranging from (spin up) to (spin down). When the feature map is fixed, the range of (with ) changes with the range of (from to a canted spin state ), and vice versa. It is obvious that . Meanwhile, we find that the accuracy can be changed by controlling . Without DCT, we take and , which gives relatively high precision and stability. With DCT, the signs and the maximum/minimum of the “pixels” (also denoted by ) of each image are not fixed. The accuracy and stability are highest with and . This is because, with DCT, most values are quite small, which requires a relatively large .

We stress that our proposal of an entanglement-based architecture is independent of the algorithms or tricks used for optimizing the MPS (or other TN's). Once the algorithm is chosen, our proposal can be utilized to reveal the “quantum” features of the machine learning tasks and improve the efficiency of the training.

Appendix B: precision of the two-class classifiers on the test dataset

In Table A1, we show the accuracy on the test dataset for all the two-class classifiers trained by the frequency data. We take physical bond dimension and the virtual bond dimension . In each iteration, we feed samples randomly picked from the two classes.

1 2 3 4 5 6 7 8 9
0 0.9981 0.9896 0.9965 0.9975 0.9899 0.9897 0.9960 0.9913 0.9925
1 - 0.9949 0.9967 0.9967 0.9956 0.9962 0.9884 0.9934 0.9953
2 - - 0.9838 0.9940 0.9922 0.9940 0.9811 0.9855 0.9912
3 - - - 0.9980 0.9721 0.9995 0.9858 0.9738 0.9827
4 - - - - 0.9973 0.9871 0.9896 0.9939 0.9653
5 - - - - - 0.9881 0.9958 0.9845 0.9842
6 - - - - - - 0.9975 0.9948 0.9975
7 - - - - - - - 0.9885 0.9637
8 - - - - - - - - 0.9808
Table A1: Precision of the two-class classifiers trained by frequency data. The virtual bond dimension is , with and .

For comparison, the accuracy obtained from the real-space data is shown in Table A2. In general, the accuracy from the frequency data is at the same level as that from the real-space data. This is expected, since the DCT is a unitary transformation of the data.

1 2 3 4 5 6 7 8 9
0 0.9980 0.9950 0.9990 0.9970 0.9980 0.9890 0.9940 0.9960 0.9960
1 - 0.9950 0.9960 0.9990 0.9990 0.9960 0.9970 0.9980 0.9990
2 - - 0.9850 0.9980 0.9970 0.9970 0.9900 0.9950 0.9990
3 - - - 1 0.9930 1 0.9930 0.9920 0.9950
4 - - - - 1 0.9970 0.9980 0.9970 0.9940
5 - - - - - 0.9930 0.9990 0.9910 0.9890
6 - - - - - - 1 0.9980 0.9970
7 - - - - - - - 0.9940 0.9920
8 - - - - - - - - 0.9830
Table A2: Precision of the two-class classifiers trained by real-space data. The virtual bond dimension is , with and .

Appendix C: SEE for real-space MPS classifiers

For the real-space MPS classifiers, the SEE can characterize the importance of the different sites of the images. This can be seen clearly from the SEE distribution viewed in the 2D plane (Fig. A1). We see that the SEE distribution captures the main features of the two classes of images that are to be classified. With path optimization, the extracted features of the images are stored not only in the SEE, but also in the path of the MPS (i.e., in how the MPS covers the 2D image).

Besides, we notice that with the real-space data, the SEE is zero along the edges, in a strip of about 4-pixel width, corresponding to the blank edges of most of the images in the MNIST dataset. This serves as further proof that the SEE characterizes the importance of the data provided on different sites.

Figure A1: SEE for five real-space classifiers, from left to right: , , , , and classifiers. One can see that the SEE can capture the features of the images.